Discussion of potential low market cap cryptocurrency moonshots

2017.12.16 23:52 LucidDreamState Discussion of potential low market cap cryptocurrency moonshots

This subreddit is a place to discuss low market cap cryptocurrencies with a moonshot potential. Make sure you read the sidebar before participating. ALL OF IT. This place is generally not for you if you're new to crypto. There are requirements to be able to participate in this subreddit. No exceptions to these are made. Read the sidebar.

2010.10.25 08:58 someprimetime Life Pro Tips

Tips that improve your life in one way or another.

2014.12.14 19:12 smoketheevilpipe Accounting Student Help

I made this sub to centralize help for accounting students. /accounting has tons of good career advice, and advice in general about the field, but is not really the best place to get assistance with homework. I figured this could serve as a supplement to /accounting to do just that.

2023.03.27 03:46 AndreyDidovskiy Solana (SOL) Master Guide - Everything You Need to Know!

Solana (SOL) Master Guide - Everything You Need to Know!
Originally Published:
In the cluttered cyber world of copycats & vaporware, Solana {SOL} (sometimes jokingly called solami by the degen chads on CT) stands out from the crowd with its unique design & value proposition.
Disclaimer → I hold $SOL in my portfolio & believe in the future of the project. → This is not an endorsement to buy the coin or use the network. (Do you) → Solana is home to a prolific NFT community… however, there will be no mention of NFTs in this post because, at this time, there are too many failures, frauds, rugs & scams. (One exception being Sol domain names.) → This is strictly a general overview of the network. Not about degens or monke pictures… ok, just a tiny bit about degening.

📜 A Bit of Background

Conceptualized in 2017 by ex-Qualcomm engineer Anatoly Yakovenko & birthed in March of 2020, Solana is a Turing-complete, smart contract platform & distributed ledger network that was created with the intention of displacing the legacy exchange infrastructure. (not killing Ethereum)
The Solana network was & continues to be one of the most exciting innovations in the space due to its heavy dedication to the technological elements of its design.

👨‍💻 The Technical Specs 👨‍💻

Monolithic Architecture Monolithic software architecture in blockchain means that all major operational activity (transaction execution, data storage/availability, and consensus) is condensed into a single endpoint: the chain itself. This design brings its own set of tradeoffs, including better prospective value propositions for the native asset due to strong utility ties between the token<->network (NFA), while hindering its potential to be flexible. {Great resource on blockchain software architecture here}
Example of another crypto with monolithic architecture: Bitcoin (BTC).
POH (Proof-Of-History) Proof-of-History is a novel tweak to the PoS consensus model through the implementation of time as a module. Technically, Proof-of-History happens before state consensus is reached by allowing block producers to submit blocks & organize/confirm them later. This is made possible by a combination of two things: 1) the application of pBFT + VDFs (verifiable delay functions) & 2) the clustering of consensus nodes (rather than 1 node doing all the work, on Solana there are clusters of 25 nodes batching txs for consensus at any given time). That helps eliminate the issues of scalability and transaction ordering.
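At its core, Proof-of-History is usually described as a sequential SHA-256 hash chain that acts as a verifiable clock: events get mixed into the chain, and the chain length proves work (time) passed between them. As a rough illustration only (this is a toy sketch, not Solana's actual implementation; the function names and tick counts here are made up), the idea looks something like:

```python
import hashlib

def poh_chain(events, ticks_between=3):
    """Toy Proof-of-History: a sequential SHA-256 hash chain.

    Each step hashes the previous output, so the count of steps is a
    verifiable record that time passed between recorded events.
    """
    state = hashlib.sha256(b"genesis").digest()
    count = 0
    records = []  # (count, hash, event) tuples
    for event in events:
        for _ in range(ticks_between):  # "empty" ticks: pure passage of time
            state = hashlib.sha256(state).digest()
            count += 1
        # mix the event into the chain, timestamping it at this count
        state = hashlib.sha256(state + event).digest()
        count += 1
        records.append((count, state.hex(), event))
    return records

def verify(records, ticks_between=3):
    """Re-run the chain from genesis and check every recorded hash."""
    state = hashlib.sha256(b"genesis").digest()
    for count, digest, event in records:
        for _ in range(ticks_between):
            state = hashlib.sha256(state).digest()
        state = hashlib.sha256(state + event).digest()
        if state.hex() != digest:
            return False
    return True

records = poh_chain([b"tx: alice->bob", b"tx: bob->carol"])
print(verify(records))  # True: anyone can replay and confirm the ordering
```

The key property: generating the chain is inherently sequential, but anyone can replay it and confirm that the recorded ordering of events is honest, which is why block producers can submit blocks and have them organized/confirmed later.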
Asynchronicity in block production & contract execution Basically, the ability to operate at max capacity without bloating/interrupting the network. (outages mentioned in the next section) Partially made possible through node clustering & partially made possible through a parallel processing technique called Sealevel. The technical aspects of this are far beyond the scope of this article, but if you want to dive deeper on it, make sure to check out this post
Written in the Rust programming language While not easy for beginner developers to learn, Rust is considered to be among the strongest of programming languages. At a very basic level, Rust is a low-level language that talks directly to hardware with an intense focus on memory allocation. Here is a great piece to understand the benefits of Rust as a language.
* Multi-Client Architecture (in development) Within the context of blockchain, a client is the software that must be utilized in order to connect a piece of hardware to the network. A delicate nuance in distributed system design that is rarely (if ever) brought to the public’s consciousness, is that software clients are an important touchpoint for the security & decentralization of a network. If a network only has 1 software client, and if for any reason that client has a bug, then the whole network is at risk. The more clients, the more resilient the network becomes.
Extremely Fast The block time is 400 milliseconds; that is approximately 30–32.5x faster than Ethereum (ETH) & 1,500x faster than Bitcoin (BTC).
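Those multiples can be sanity-checked in a couple of lines (the Ethereum and Bitcoin reference block times below are my assumed round numbers, ~12 s and ~10 min, not figures from this post):

```python
# Rough block-time comparison with assumed reference values
sol_block = 0.4     # Solana slot time, seconds
eth_block = 12.0    # assumed Ethereum slot time, seconds
btc_block = 600.0   # assumed Bitcoin target block interval, seconds

print(f"vs Ethereum: {eth_block / sol_block:.0f}x")  # → 30x
print(f"vs Bitcoin:  {btc_block / sol_block:.0f}x")  # → 1500x
```

With a ~13 s Ethereum block time the ratio lands at the upper end of the quoted 30–32.5x range.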
Huge & growing developer ecosystem Over 2,000 monthly active developers puts Solana as the 2nd most active network by developer activity; ahead of Polkadot & only behind Ethereum.
High Throughput The network averages ~5,000 TPS on a day-to-day basis & has the capacity to scale, in theory, indefinitely. The network is only constrained by the amount & type of hardware used by the nodes. As hardware improves, the network becomes more efficient; as more nodes join, the network becomes more efficient. So then LFGrow.

⚠💨 Headwinds 💨⚠

➡ Is a VC chain. Supported heavily by deep-pocketed venture capital firms. This is not necessarily a bad thing, but it does reduce the network's credibility in terms of decentralization.
Was endorsed by SBF & FTX. As basically everybody in the world knows, Sam Bankman-Fried was the single biggest supporter of the Solana network, once recorded saying "Sell me all of your Solana & then fuck off" as well as injecting hundreds of millions of dollars into the project; the once-heroic supporter has now become the sh*tstain across the windshield of Solana's life.
Suffers from Outages The single biggest hurdle that keeps on messing with the project. This is not a small issue, if outages happen while people have money on the network, real-life problems occur. As of this writing, the network has faced roughly 10 full outages & countless partial outages. If you want to explore all of them here is a great uptime resource.
Community is… interesting. I love SOL. I love the developers. I love the project. I Even love Solana CT! But I cannot for the life of me figure out how it keeps attracting low-IQ people to its user base. At first, I thought it was just wrought with bots… but then I found out… Dear reader, I SWEAR to you, I personally know real SOL users that pronounce SolanA as SolanO. I asked them why… they don't know… 🤦‍♂️
I digress.

🧰Ecosystem Resources:

Since the turbulence of the recent market activity, many tools in the Solana ecosystem are sunsetting (r.i.p. Serum) while others are just dying off due to a lack of interest. Nevertheless, there still remain a handful of applications that have only strengthened their positions over this time.
Below I have tried my best to organize as many “valid” tools as possible:


Wallets

Phantom - Solflare -


DEXs

Raydium - Orca - Saber - Lifinity - Atrix - Soldex - Saros - Aldrin - DexLab - Crema -

Lending/Borrowing & Stablecoin Minting

Larix - Solend - Apricot - Port - Jet - PsyFi - Hubble - Ratio - Parrot - Meteora -

Liquid Staking

Marinade - Socean -

Yield Aggregators

Tulip - Cropper - Katana - Kamino - Sunny - Francium -

Explorers & Analytics

Solana Explorer - Solana FM - SolScan - Solana Beach - ChainCrunch -

Synthetic Assets

Synthetify -

Dashboard & Portfolio

Sonar Watch - Sols Watch - Step Finance - Ape Board -

Domain Names

Bonfida —


→ Fantastic report by Messari on the State of Solana in Q4 2022. → Great interview with the founder on the future of Solana beyond 2023.
I will end this all with a timeless (← POH pun intended) quote:
“Innovation distinguishes between a leader and a follower.” — Steve Jobs
May your bags always be full & overflowing 🥂

Originally Published:
submitted by AndreyDidovskiy to u/AndreyDidovskiy [link] [comments]

2023.03.27 03:46 VRACG-Tempest Ion Array SSTO/Intra-Solar Drive

Ion Array SSTO/Intra-Solar Drive
Hi all- first time posting here (I generally lurk, but felt that this would be interesting.)
I present the Ion Array drive. I put this together a while ago but only recently discovered something rather wild about it- it can Single Stage To Orbit (SSTO). O.O
The in-game workshop gives a Thrust/Weight Ratio of 0.18, but this is a miscalculation (see below) with the real TWR being 1.062. Sure enough, upon launching it, you can readily (if slowly) get it into orbit with minimal heating and 32% fuel remaining.
Somewhat more seriously, I then linked up with a pair of my modular fuel tanks and took it for a spin past Jupiter. I've run a couple of these tests with various numbers of fuel tanks.
:: Test Parameters :: Launch from Earth Orbit, 250km.
Establish Jupiter Orbit, ~17000km.
Return to Earth Orbit, 250km.
:: Results :: 1 Tank - You return with a little in the tank, no need to jettison anything. 2 Tanks - Jettison 1 tank for the final circularising burn as you return to Earth. 3 Tanks+Lander - Jettison 1 tank during the circularising burn at Jupiter, jettison tank 2 circularising for Earth orbit. In this flight my payload landed at every moon in Jupiter and was transported back to Earth.
:: SSTO Calculations :: The game gives us a Thrust/Weight Ratio (TWR) of 0.18. Given we're able to launch anyway I set about calculating the TWR.
> Assuming the weight is calculated accurately (304.97t), let's calculate the thrust.
> 18 engines per structural arm. 6 arms on each side. 18x12=216 Ion engines.
> Each Ion Engine delivers 1.5t of thrust, resulting in 324t of thrust.
> Thrust / Weight = TWR, so 324/304.97=1.062, hence a lift-off.
> The madness is creeping in now... I could thin the central tank and get even better TWRs...
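The arithmetic above can be double-checked in a few lines, using the engine count and thrust figures straight from the post:

```python
engines_per_arm = 18
arms = 6 * 2                # six structural arms on each side
thrust_per_engine = 1.5     # tonnes of thrust per Ion engine
weight = 304.97             # tonnes, as reported in-game

engines = engines_per_arm * arms       # 216 Ion engines
thrust = engines * thrust_per_engine   # 324 t of thrust
twr = thrust / weight

print(engines, thrust, round(twr, 3))  # 216 324.0 1.062
```

Anything above 1.0 means the thrust exceeds the weight, so lift-off checks out; the in-game 0.18 figure is clearly the miscalculation.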
:: Reflections ::
+ The array is extremely effective and generates enough thrust that it's not boring to go on long flights like this.
+ Adding payload is not a huge issue either.
+ For larger ships, adding boosters with Frontier Engines made the initial burn from Earth Orbit simpler, but it was still doable with the Ion Array.
+ Flexibility - This setup with a tank could be a useful refueler for any part of the Solar System. Being so efficient, it can catch up with ships left to drift in truly parabolic, solar orbits.
+ Fuel Management - Given how slowly the Ion Engines sip fuel and that they feed from *every* tank on the ship simultaneously, fuel management is easy.
- Lag - I tried once to put two Ion Arrays on a ship. My Motorola phone absolutely could not handle it.
- Complexity - Coupled with the previous point, you absolutely want to have all of these engines set up as a Stage. The task of activating each engine individually is almost intolerable.
- SSTO Angle of Attack - One thing that concerned me on launch was the apparent angle of attack, that is, the difference between where my ship (and aerodynamic nose cones) is pointing and the craft's path through the atmosphere. Speed never got high enough for it to be a problem.
- Realism - At the very least I feel I should stick a couple of Solar Panels on this ship, or maybe some heavy, useless space for a Fission/Fusion drive? The SSTO capability is pretty wild, whatever the numbers.
Finally, I'm not exactly sure how the Ion Drives are getting propellant through purely structural arms, but in lieu of fuel-line arms, this is what we have.
1. Workshop (Note erroneous TW Ratio)
2. "Override!!" - Dynobot
3. We have lift-off!
4. Angle of Attack concerns best shown here.
5. Roughly P Max. No heating observed if launch angles were halved until 75° is called for.
6. Circularised with 32% fuel remaining.
7. Added two tanks and topped up the Ion Array tank.
8. Beginning trans-Jovian (?) burn. Generally left the Tank 2 feeding Ion or Tank 1 throughout the flight.
9. Our path to Jupiter. Fairly efficient, I was worried about distortions given the slow rate of thrust.
10. Our fuel state having circularised in low Jovian orbit.
11. Return burn to get home, here we drop Tank 2.
12. Circularising above Earth, our final fuel reserves.
13. I punched off a probe with heat-shield and chutes to get these results.
submitted by VRACG-Tempest to SpaceflightSimulator [link] [comments]

2023.03.27 03:44 ALR24601 REVIEW: A Mostly MUSICALLY REVIEW on Sweeney Todd - Fri, 3/24/23

Happy opening to Sweeney Todd The Demon Barber of Fleet Street this evening! I attended the tale this past Friday evening and felt compelled to share my thoughts in addition to the opening night hype this evening.
On March 1, 1979, the world was shocked by Sweeney Todd. With this being the 3rd Broadway revival since its original conception, there are both age-old and brand-new expectations among theatregoers that contrast with the initial conception. No longer are we being introduced to a new production that has to prove its worth to a new audience - people love this show. As a professional musician, I speak for many who consider this the "magnum opus" of Stephen Sondheim - his greatest work. Musically, this brilliant score has now inspired 44 years' worth of musicians, actors, and directors. This performance somehow shattered any preconceived notions I had going in or reviews I had read online. While I will showcase multiple perspectives, the vocal/musical performance will remain the primary focus.
Vocal Performance
Wow. Every singer on stage is incredible. Outstanding vocal performances from every soloist, duet, trio, quartet, and ensemble member.
The chorus/company ensemble is phenomenal. The diction is so clear that you can laugh, cry, scream because you hear every cutoff to every syllable. They carry this show on their back and love what they are doing. Every time they are on stage, you want more of them. Sopranos are in tune on the crazy high melodies and each soloist is great. They achieved such a balanced sound - I was so impressed. 10/10.
Josh Groban (Sweeney Todd) will get his own review, but Annaleigh Ashford (Mrs. Lovett) sounded great. There are only a few times when Mrs. Lovett isn't supplying comedy in her vocal lines. These moments Ashford sounded so damn great. Specifically in "Worst Pies" and "Not While I'm Around", she gave an outstanding vocal performance that had charm, great tone, and was perfectly in tune.
I was so proud of Gaten Matarazzo (Toby)!!!! His voice sounds so good!!! His instrument is really starting to blossom. I believe he can start to slay dark tenor roles because he can confidently sing in his chest voice and throughout the top register!! He is not straining at all and he sounds so healthy in his voice. 9/10
Maria Bilbao (Johanna), Nicholas Christopher (Pirelli), and Ruthie Ann Miles (Beggar Woman) are my honorable mentions for being so great!!!! Jamie Jackson (Judge Turpin) sounded so similar to the OBC rendition that it was nostalgic. 10/10
I personally thought Jordan Fisher did a solid job as Anthony. Being really only responsible for "Johanna", he did well. There were moments he sounded relaxed, free, and outstanding in the song. At times, he was out of tune and popped out of placement. A few inconsistencies across the performance, but I liked it a lot. My girlfriend and best friend who saw it with me agreed he was the most lacking vocalist of the evening. 8/10
Josh Groban's Vocal Performance - 10/10
Oh my god - the most anticipated feeling for this whole revival was for Josh Groban. I value Groban's vocal technique, timbre, and performance style so much that I showcase him constantly to my vocal students. In February 2023, I read the interview where he acknowledged not being the typical choice for Sweeney as an actor, but I knew he could sing the shit out of it. I never doubted his ability to sing the title character of Sweeney Todd and neither should you. This being one of the most iconic baritone roles in the contemporary musical theatre canon, I wasn't worried about Groban being able to access both the low and high registers that the role calls for. He has mastered in his tone a constant body of warmth and a balanced resonance. He approached every single melody with intention, a great breath, and a proper onset. I wasn't scared for his vocal folds during the chaotic moments because he's Josh Groban.
I knew he would slay tf outta "My Friends" and holy fuck did he. One of my favorite songs in the whole score and it was absolutely gorgeous. The line, "well I've come home" on the Db4 was just outstanding. Anytime he was in this part of the voice (C-Eb) throughout the show, it seemed so effortless that you could truly be entranced by his voice. What was so beautiful to hear was the start, "These are my friends, see how they glisten." That was just melted butter over the audience. Warm, connected, smooth every time; his vibrato grew over the barline so well that my mouth dropped. His placement was just perfect for the role - he really did justice to what the role calls for in the score. This number is also one of the first moments we hear Sweeney sing solo for multiple bars since it happens early in Act I, and I was in heaven. All graduate-level baritones should listen to his rendition.
His voice never sounded tired or flat throughout the show. He portrayed Sweeney with a paced energy that had a natural ebb and flow to it. I appreciated that as a vocalist audience member.
Acting Performance
Overall, acting was outstanding. The dark comedy was hilarious. The tragedy was believable. Annaleigh Ashford is in her prime. She absolutely knocked this role out of the park. She was so funny and comedic that she deserves a Tony. I personally felt she had bigger shoes to fill with Lansbury than Groban did with Cariou (OBC). I had high expectations for the role and came to love her choices. The way she adds intimacy and sexuality to the role is refreshing and new. It really worked in terms of breaking the mold and not comparing it to Lansbury. It felt original and I loved it. Her physicality in "Worst Pies" and "A Little Priest" made the whole audience laugh, and I applauded for her so loud afterwards.
Groban takes a more genuine approach that makes you feel bad for Sweeney Todd overall. I personally thought it really worked. When the role called for chaos and crazy, he definitely gave it. But so often it doesn't - he is confused, questioning, and trying to understand context too. Since I loved his vocal performance so much, I didn't need it to be crazy all the time. He really played with these themes/emotions throughout scenes and I liked that over crazy. His rendition of "Epiphany" offers more voice over actions and I didn't mind it. Could it have been more insane physically and vocally? Sure, but you could understand that his mind cracked on stage in his choices. He absolutely slayed "A Little Priest" as he was funny and serious in his plan for revenge, so I felt it was a nice balance. Ashford and Groban's chemistry develops so greatly over the plot. It was constantly sad and funny. 10/10.
Everyone else is so great too. 9.5/10
Choreo Performance
omg - they added new dance moments to some scenes and it was so great!!! It really added to the overall theatrical experience. The opening number has such a cool combo that they deserve a S/O.
Tech/Theatrical Performance
Whoever's idea it was for how Act I and II end deserves a Tony. Spoiler: the way the lights go out dramatically on both Groban and Ashford gives you no other option than to thunderously applaud.
The bridge set is awesome and really works from my MEZZ Balcony view. It really was quite minimal since the music is the main focus, and I liked that. A real S/O is when Sweeney got his real barber chair. That was such a beautiful moment and I loved the crane. Also, anytime the razor was out, it would glisten towards the audience, almost playing a role itself. I loved that. Anything with blood was done super well. A woman behind me audibly gasped during the "Act II Sequence" when the real work of Sweeney starts. S/O to the fire oven and the harmonium - they are so cool and mesmerizing to watch.
Mics and sound production were great. Not too loud and no dropped lines. Bartenders are on it. Buy any souvenir you want because they are cool. The pit orchestra carries the score and sounds incredible. Big and full sound with an appreciation towards the slow and low moments. HOT TAKE: They got rid of the glass whistle sound which, honestly, is okay with me. We get it - it's a thriller. We don't need to hear the squealing after every murder anymore.
Overall Review
If you have admired this show for years, like I have, you need to see this show. If you have loved Josh Groban's voice for years, you need to see him in this role in his prime. (He has recovered from his mid-March sickness.) If you are able to travel to the Lunt-Fontanne Theatre, you need to see this show. There is minimal wrong with this revival. The plot is digestible and more approachable than the original. Those who are going in blind have a solid chance of understanding the plot. For example, I believe a verse is cut from "Johanna" to make room for some dialogue that advances the plot. I noticed that a lot throughout the show and it felt refreshing to watch. Old expectations were met while creating new memories - signs of wonderful directing.

Of course, there are no perfect shows, but this is a near perfect rendition of how musically Sweeney Todd should be done. The leading/supporting roles are so dramatic, advanced, and challenging to achieve; everyone is seriously so qualified to do this show.
Overall Production: 9.5/10 - go see Sweeney Todd The Demon Barber of Fleet Street NOW*.*
EXP: B.M. & M.M. in Music, current Music Director and Music Educator in PA, USA
submitted by ALR24601 to Broadway [link] [comments]

2023.03.27 03:44 szzzan Don’t want to regret getting married

My husband and I are newlyweds. We’ve been in a very long term relationship prior to getting married.
It’s been just a few months since we got married and we’re caught in a chaotic mess because of his wrong decisions. Wrong decisions he never consulted me for. Even lied to me just because he thought he can solve this problem on his own.
I thought I knew him since we've been together for years. We have already developed a bond. He is my best friend. Turns out I was wrong.
Here I am put in a situation where I have to be strong for both of us. Somehow financially provide for us. I don’t want to regret getting married but this is just too much. I am starting to reach the point of starting to feel desensitized. I am starting to reach the point that if tomorrow never comes for me. That if tomorrow I stop breathing then I’m fine with it.
submitted by szzzan to married [link] [comments]

2023.03.27 03:43 annonymous201813 My Step Mom

Hi there. I (F22) made a throwaway because this is a sensitive issue, but it’s making me really anxious and depressed and I feel like I can’t talk to anyone about it in my life without causing more drama and I just need someone wise to give me their advice/opinion. So thank you in advance.
So my bio Dad (M48) met his now wife, S (F35) about 4 years ago. She had just left her ex husband whom she had two children with (F11 & F8). They had been divorced for less than a year when they got together. My step mom has always had substance abuse issues. Particularly Xanax and alcohol. Things got bad for a period of time (including one instance of her cheating) but she got help and my Dad forgave her. Well, everything started to get bad again last year. They had their wedding last summer (2022) and on that day she shared with me that her doctor had given her a temporary prescription for Xanax during the wedding week because we had a lot of family over and it was pretty overwhelming. She assured me that it was temporary and it wouldn’t cause issues.
The thing is, I live about a two hour drive away from where my parents live for college. A lot of the times I get my information late and I have to kind of poke and prod my family members for details. They don’t really call me or visit me. I have to call them and make the drive if I want to see them or talk to them.
So a couple of months ago around January, I got word that my step mom had an argument with one of her friends, who, as retaliation, sent my dad videos of my step mom hitting on guys at bars and making out with them, drunkenly. This led to a whole confession of her cheating on my dad with at least five other men. I even found out that she had given my Dad an STI (to my knowledge he has been treated and is fine now). My Dad also got word that she had been asking her ex husband, about six weeks before she married my dad, if they had made a mistake getting divorced.
After everything blew up, my step mom decided to check herself into rehab. In my opinion, I feel like she did this to escape the fallout with my Dad and not because she really wants help. Besides, her main problem has always been Xanax. Alcohol just exacerbates it.
Anyways, as part of her recovery, me and my three younger bio brothers (M20, M18, & M13) were asked to write her a letter about our feelings and how her choices have affected us. When I wrote my letter I expressed how much I love and value her in my life. She came into my life at a time where my bio mom wasn’t in the picture. I have no sisters either so I became attached to her and her kids. I made sure to let her know that she isn’t alone on her journey to recovery. However, I made it very clear that the substance abuse and infidelity needs to stop because she is hurting my Dad, and me and my brothers in turn.
Here is where I need your advice. It has been a month since she read that letter and over a month since she got out of rehab. She has apparently had discussions with my brothers about their letters, but hasn’t talked to me since. I think she’s really upset with me. My dad has been acting weird towards me lately also. I don’t know what to do. I feel stuck and estranged from my whole family. Should I keep waiting for her to talk to me or should I start a conversation? Was what I said in the letter too much? We used to talk at least once a week over the phone or texting. I feel like I’m in quicksand and my anxiety grows more intense everyday.
If you have read this far, thank you so much. I would appreciate any advice you guys have for me. Thank you.
submitted by annonymous201813 to Advice [link] [comments]

2023.03.27 03:43 Legitweevil1 Middle of the night wakeups

My daughter is 12 months old. About a month ago she got the flu and ever since she wakes up between 1-3 am every single night. She will not go back to sleep unless I bring her into bed with me. Even then, sometimes she’s up for 2-3 hours before she’ll settle back down. She goes to sleep fine and goes down for naps fine-I put her in her crib drowsy but awake and she’s asleep within 5 mins with no crying.
This is unsustainable as both her dad and I work full time and he travels a lot for work. So, some sort of sleep training has to happen. I tried letting her cry it out one night about two weeks in, and she cried for two hours straight at the top of her lungs, til she was gasping and shaking. She’d nod off and wake up 5 mins later and get back to it. I gave in at 5am because if I didn’t get a few hours of sleep I’d be dead at work. For the record, her pediatrician said crying it out was the only fix.
I have a bed set up in her room. I have tried rocking her back to sleep and putting her back in her crib-it worked once. The other nights she was awake again an hour later. What’s the best way to do this? It is really hard to let her get so so upset for HOURS but it is also hard to be in the room with her while she whines. Just take a sleeping pill and turn off the baby monitor? Is there a gentler method that won’t take hours every night in the middle of the night for the foreseeable future? Alternately, is there any hope eventually she will just start sleeping through on her own?
I don’t think messing with her day sleep is going to solve any problems either because a) she goes to daycare and has a lot of trouble sleeping there and b) she’s exhausted from being up for three hours in the middle of the night half the time.
Anyway, tips, tricks, commiserations, hope, anything really are all welcome. Thanks!!
submitted by Legitweevil1 to sleeptrain [link] [comments]

2023.03.27 03:43 Refleximus Question regarding taking care of elderly

Hi, I have been on CalFresh for about 8 months now, and I have been living with my grandfather while I try to establish my training for a new career to get on my feet. He can afford to house me but not to feed me as well. He is 94 years old and recently suffered a health crisis that nearly took him, and as a result he lost what was left of his independence. I now prepare food for him along with my own because it's just easier on everyone that way. Normally he could afford his own bills; however, they have nearly doubled in the last couple of years and things have gotten incredibly tight - he can barely afford utility bills. I have taken it upon myself to work for free fixing his house, which has a leaky roof in several locations along with plumbing issues and many other problems. I give him portions of my food as I prepare my own. As a result, I have been a lot more hungry as of late. With the emergency allotment going away, I don't see how it will work. My question is: with my appointment for renewal tomorrow, is there an avenue for receiving additional aid given that I share meals with him? I suspect his income is above the decided level to qualify, yet he is deeper and deeper in debt every month even with my aid.
submitted by Refleximus to foodstamps [link] [comments]

2023.03.27 03:38 Specwar762 Help me sue my FFL

Long story short, my FFL sold me an SBS, and screwed me over on the Form 4 process (spelled my name wrong and lied about being able to correct an E-Form error). Had ATF cancel the form, told the FFL I thought they were incompetent and asked them to Form 3 the SBS to another dealer. I initially paid for the SBS almost a year ago, and they've had 2.5 months to do a Form 3 and haven't even started it according to ATF. I've gotten ATF IOI involved, as well as local law enforcement, and everyone seems to think my best/only course of action is a civil suit for damages.
Anyone had any experience with anything like this? I paid with cash, so no option for a chargeback. I'm assuming I'll need to sue for the value of the gun, any expenses I had paying for fingerprints, and my time?
submitted by Specwar762 to NFA [link] [comments]

2023.03.27 03:38 jessiicarabbit8123 Does it get better? Overexcited, nervous rescue pup [long post]

I recently adopted a potcake puppy from a shelter in the Caribbean, she's approximately 11 weeks old, ~11lbs, so so so smart, and tomorrow will be officially one week of having her.
I live in a 1bd plus den condo on the ground floor in a very dog-friendly area (quiet area, lots of walking paths, a park nearby). I also work from home but have taken off a few days to help her adjust (back to regular work on Tuesday). It's also worth noting that I'm a first-time puppy owner - I had family pups (Aussies) when I was younger and lived with adult dogs, but this is the first time I've raised a puppy on my own.
To describe my puppy, let me start by saying she seems a lot more tense, wired, and hyper-aware than any puppy I've known before; it's not often outside of sleep that she's relaxed. I don't know if this is the stress of the adjustment - I heard it can take up to 3 weeks for a puppy in a new home to actually relax and show its true personality.
It all started so well. She picked up potty training SO fast - her only accident really was the morning after I took her home from the airport. She's even started tapping my sliding glass door to have me take her outside to the grassy area off my patio to do her business. She's also sleeping (almost) through the night in her crate without much fuss - even when she wakes for a bathroom break, she goes back to bed in her crate immediately.
She's mostly kept in an exercise pen in my living room with a dog bed and some toys, as she's very mouthy and overly curious - I don't want her getting into something she shouldn't (I did puppy-proof the place, but you can only do so much about furniture and some power cords). This is also because I work from my den, and I don't want to teach her that she always needs to be by my side - I'd like to encourage some separation and her having her own space so we don't experience separation anxiety, although I will still frequently check in on her, interact, take her potty, etc. Ideally, I'd eventually get rid of the exercise pen and her dog bed would remain in the same spot, but with the option to free roam the condo.
She's very food motivated, so our training sessions have been great and she almost goes into this focused working mode. But afterward when the work is done and there are no treats left she gets a little worked up, so this is always when I try to initiate play to tire her out.
When we play, I open up her pen to expand her area to the rest of the living room so she has more room to stretch her legs; we can even get a game of fetch going sometimes and play goes well for a little bit. It doesn't take long for her to lose interest in toys, or get overstimulated, and this is when she starts "acting out". She'll either act a little destructive - she's obsessed with being on the couch, which she'll bite the arms of, or she'll gnaw on the rug. Her other preferred method of "acting out" is nipping - she's not being aggressive, but becomes a little bit of a menace and is pretty relentless with the jumping and nipping. My first tactic is always to "remove the fun", though she's not deterred by me saying "OUCH" loudly, by me turning away from her, or by being removed from the furniture. My next step is to calmly put her in her crate or pen. This has been working, as I believe being over-tired or over-stimulated is the root of the problem because she doesn't know how to stop, but in the past 2 days it feels like the protest behavior has gotten worse, i.e., she's progressed from whining to yelping and sometimes it sounds like she's trying to dig out of her crate. I'm doing my best to not give in - she still stops eventually (and it's never too long), but it's concerning living in a condo. I'm also afraid she's going to develop negative feelings toward her crate or pen, but I have nowhere else to put her to calm down.
The frantic behavior in the pen also seems at its worst when I'm in the room - especially when I sit on the couch. I know it's because she wants attention, but when I join her in the pen (after the crying has stopped), she still goes back to excessive biting and some jumping after she decides she's over mouthing on her chew toy that I try to use to redirect the gnawing off of me.
I also would LOVE to take her on walks to tire her out (we try a few times a day), but she is not at all confident and often ends the walk by pulling back to the condo (I try not to cave, but sometimes she gets frantic and I don't want her to get hurt or become even more fearful if I fight it). We've maybe had 2 or 3 walks total where it was a positive experience the whole walk. When we come across other people or other people and their dogs, she's pretty fearful - sometimes I can get her to wait and observe, but most of the time she wants to run away with her tail between her legs. This behavior is so bizarre to me because she was an angel at the vet the other day, loving on everyone who would interact with her, and was able to see another dog where she was very polite. I'm hoping this is something that will eventually resolve when she gains confidence, and hopefully, I'm not encouraging fearful behavior.
I will absolutely be taking her to puppy school to train and socialize, but the next available start date isn't until April 18th. I'm also considering looking into some 1:1 sessions with a trainer, but I'm not sure if that's premature and I just need to be more patient. We're also currently doing Susan Garrett's Home School the Dog program - she loves the training games, but we're only a day in so I can't say they've made a difference yet.
Does it get better? Is it just a waiting game to see the fruits of my labor? I'd love nothing more than to eventually have a pup who can mellow out and cuddle, so I hope we can work towards that.
I love her so much, and I think she loves me - I would never think of rehoming her so I would appreciate any help you could give me.
submitted by jessiicarabbit8123 to Dogtraining [link] [comments]

2023.03.27 03:38 swathi_yathadhari How do I start learning or strengthen my knowledge of data structures and algorithms?

The key to a solid foundation in data structures and algorithms is not an exhaustive survey of every conceivable data structure and its subforms, with memorization of each one's Big-O value and amortized cost. Such knowledge is great and impressive if you've got it, but you will rarely need it. For better or worse, your career will likely never require you to implement a red-black tree node removal algorithm. But you ought to be able — with complete ease! — to identify when a binary search tree is a useful solution to a problem, because you will often need that skill.
Learning data structures and algorithms is much like learning any other programming language. Since the subject serves as the foundation for numerous fields, including data science, anyone can learn it. Algorithms deal with designing a series of steps to solve a problem, whereas data structures deal with how data is organized and accessed in memory. A beginner can start by watching YouTube video tutorials or enrolling in online foundational courses. A few of the best courses available online include Logicmojo, Udemy, Educative, etc.

You can use the following straightforward recommendations as a roadmap for improving your grasp of data structures and algorithms.
Understand the topic
It would be beneficial for novices to review the curriculum once to obtain a general idea of what they will learn. Data structures give us a method for organizing data. To gain an overview, you can look up the contents online; if you want to delve into more detail, you can work through the various online lessons available or chat with a contact who is familiar with the subject.
Strengthen the core concepts
Building a thorough foundation and becoming well versed enough in the subject to apply it to real-life challenges is a fundamental step in strengthening your grasp of any technology. You can consult books or sign up for online coaching to understand data structures and algorithms, which will provide you with the most effective training for mastering the skill. You can take courses online at places like Logicmojo, Coding Ninjas, Coursera, etc.
Visualize the data structure
Intuitively understand what the data structure looks like, what it feels like to use it, and how it is structured both in the abstract and physically in your computer's memory. This is the single most important thing you can do, and it is useful from the simplest queues and stacks up through the most complicated self-balancing tree. Draw it, visualize it in your head, whatever you need to do: Understand the structure intuitively.
Take up assignments and projects
Practicing hands-on should always be a priority as it will help you gain more perspicacity about the subject as you learn to apply it in real situations. If you are taking coaching from online institutes, make sure they provide you with enough projects to work on. Also, if you are self-learning, take extra care to perform assignments that help you understand things better.
Apply for Internship
If you are confident in your education and acquired skills, you might try applying for internships that could broaden your experience in the field. As an alternative, you might take part in coding competitions to identify your areas of weakness and work on them.
Both newcomers and college grads pursuing proficiency in this field can use all of this advice. You can quickly study algorithms and data structures if you are a professional and already have some programming experience. Also, this subject is crucial to data science, machine learning, AI, etc.
Now the question comes from where you should learn the concepts?
Depending on your determination, you can go with self-paced learning courses or instructor-led time-bound learning (recommended).
EdX provides a professional certification in data structures and algorithms. It is, of course, a self-paced learning method, and the course lasts around 5 months. Even though the course is a professional certification, there are some drawbacks to edX, such as a lack of project-based training, resulting in a lack of industry experience.
Logicmojo, by contrast, offers one of the most popular interview courses for software developers and programmers. The goal is to make computer programs simpler for newcomers to understand. By concentrating on the fundamental principles and ideas, learners can quickly improve their skills and gain more confidence to succeed in an interview. Additionally, the sequential presentation of concepts, assignments, and problems makes this algorithm course perfect for beginning and junior engineers looking to brush up on fundamental ideas.

Key feature of the course includes:
  1. Learn to evaluate and assess different data structures and algorithms for any real-world problem and implement a solution based on your design choices
  2. The course is taught in Java, Python, and C++ with complete code explanations, and comes with lifetime access.
  3. Logicmojo provides live courses conducted by expert instructors, some with 12+ years of experience at FAANG companies. Moreover, they don't have pre-recorded videos or fixed modules like self-paced courses; these live batches are much better because they can adjust to changes in the syllabus or technology, and they also cover a more thorough syllabus.
  4. Logicmojo's motto is that the best way to learn data structures and algorithms is to practice every problem yourself; to that end, they regularly provide their candidates with assignments. They have a GitHub page for their subscribers where each candidate submits their code.
  5. Logicmojo starts from scratch with each topic. Every topic is introduced by walking you through different algorithms or patterns that are necessary for the module; it's not just about problems.
  6. The classes are followed up by contests on HackerRank every Sunday, based on these live batches. The mentors regularly monitor candidates' performance, and the best performers are provided with mock interviews and job referrals.
  7. The DSA course is followed up by a System Design course specifically focusing on interviews.
  8. The courses are pocket friendly and their live videos are also provided for future usage.
Finally I would like to say that yes it is necessary for a programmer to learn data structures and algorithms if they want to further their career and acquire a higher salary. There are a lot of institutes that offer the course. Make sure you choose the best one.
All the best!!
submitted by swathi_yathadhari to codingbootcamp [link] [comments]

2023.03.27 03:37 lulu-ulul how to enter the tech field at entry level?

I am curious about switching careers and my friend suggested tech. I’m not too interested in the actual technical pieces like coding or software development but she said there are areas that are more about problem-solving and relationship-building such as sales or product management. Is it possible to start at these roles at an entry level? What are some tips for breaking into this field without any direct experience?
submitted by lulu-ulul to careerguidance [link] [comments]

2023.03.27 03:36 walyiin Quick Travel Planner

Quick Travel Planner
Hi everyone, I'm Walison, and this is the first Notion project I've published. I didn't intend to make money with it, but I currently need a monitor and an upgrade for my PC (I currently use an Athlon 220GE because the cost of equipment went up a lot during the pandemic, and only now can I update my machine), so I will be creating some more templates until I can solve these problems. To present the project I need people who like to travel, just like me, so I made a Travel Planner, and I would just like your feedback on the product page. I will be providing 2 discount coupons: one will be for 100% of the value, for 7 people, who will even be able to choose the extended version of the template, and the other will be for 50%, for 25 people. If everything goes well, I will be creating new coupons and projects.
Discount coupons are NQTP50 and NQTPLD100
Travel Planner Link
Project Preview Image
submitted by walyiin to notionlayouts [link] [comments]

2023.03.27 03:35 priya_singhh What are the 10 algorithms one must know in order to solve most algorithm problems?

An algorithm is a set of instructions used to solve complex real-life problems in seconds. One algorithm can solve many problems; all you need is to figure out which algorithm will correctly and efficiently solve yours. Algorithms are designed to perform tasks in the most efficient way possible, which saves time and resources, and to perform them with a high level of accuracy, which reduces errors and improves results.

10 Algorithms
There are many problems, and correspondingly many solutions, i.e., algorithms. However, you need not memorize all of them; all you need is an understanding of the fundamentals, and there will be no question you can't answer in a coding interview.
Here are the top 10 algorithms for the coding interview that you should know as a developer.
Linear Search
A linear search algorithm, also known as sequential search, is a simple search algorithm that checks every element of a list or array one by one until the target value is found or the entire list has been searched. In linear search, we simply pass over the whole list and compare each element with the item whose position we want to find. If a match is found, the item's position is returned; otherwise, the algorithm returns null.
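As a sketch, linear search is only a few lines in Python (the example list and target values are arbitrary):

```python
def linear_search(items, target):
    """Scan items left to right; return the index of target, or None.

    Worst case checks every element, so this is O(n) time."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return None

print(linear_search([4, 2, 7, 1], 7))  # → 2
print(linear_search([4, 2, 7, 1], 9))  # → None
```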
Binary Search
While searching for an element, linear search passes through the entire list starting from the first item, but the binary search algorithm starts with the middle item. If the middle item matches, its position is returned; otherwise, we search in one of the two halves depending on the result of the comparison. Since binary search is an interval search algorithm, it can be used only on sorted lists.
Binary search algorithm has a time complexity of O(log n), where n is the length of the input list. This means that the time required to search for a target value increases logarithmically with the size of the list, making it much faster than linear search for large sorted lists.
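A minimal iterative version in Python; note the precondition that the input list is already sorted:

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or None if absent.

    Each comparison halves the remaining interval: O(log n) time."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return None

print(binary_search([1, 3, 5, 7, 9, 11], 9))  # → 4
```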
Huffman Coding
Huffman coding is a lossless data compression algorithm that uses variable-length codes to represent the characters in a message. The idea behind Huffman coding is to assign shorter codes to more frequently occurring characters and longer codes to less frequently occurring characters.
Huffman coding can achieve significant compression for messages with a non-uniform frequency distribution of characters. The length of the encoded message depends on the frequency of the characters in the original message, and the Huffman tree must be included in the compressed message to allow for decoding.
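A compact sketch of building a Huffman code table with a min-heap. The sample string is arbitrary, and a production encoder would also serialize the tree for the decoder:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Map each character to a bit string; frequent chars get shorter codes."""
    # Heap entries are (frequency, tiebreaker, {char: partial_code}).
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in left.items()}
        merged.update({ch: "1" + code for ch, code in right.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
print(codes["a"], codes["c"])  # 'a' (most frequent) gets the shortest code
```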
Gradient Descent
Now for a lot of developers, Gradient Descent is not necessarily going to be useful. If, however, you are touching anything with regression or machine learning, Gradient Descent is going to be at the heart of your work.
Gradient Descent is a procedure for optimizing functions using calculus. In the context of regression and machine learning, this means finding specific parameter values that minimize the error in your prediction algorithm. While it is certainly more mathematically involved than a lot of these other algorithms, if you are working significantly with data and predictions, understanding how gradient descent works is incredibly important.
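The update rule is simply "step opposite the gradient". A toy example, minimizing f(x) = (x - 3)^2, whose derivative is 2(x - 3) (the learning rate and step count are illustrative):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to approach a local minimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)   # core update: x := x - learning_rate * f'(x)
    return x

# f(x) = (x - 3)^2 has gradient 2(x - 3) and its minimum at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # → 3.0
```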
Diffie-Hellman Key Exchange
The Diffie-Hellman key exchange is a cryptographic protocol that allows two parties to establish a shared secret key over an insecure communication channel. It was invented by Whitfield Diffie and Martin Hellman in 1976.
Even if you’re not working in cybersecurity, having a working understanding of encryption and secure communication is incredibly important to working as a developer. The Diffie-Hellman key exchange is widely used in modern cryptography protocols, such as SSL/TLS for secure web browsing, SSH for secure remote login, and VPNs for secure communication over the internet.
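The whole exchange fits in a few lines of modular arithmetic. A toy version with a tiny prime (real deployments use groups of 2048 bits or more, and the private keys are chosen at random):

```python
# Public parameters: prime modulus p and generator g.
p, g = 23, 5
# Private keys (random in practice; fixed here for illustration).
a, b = 6, 15

A = pow(g, a, p)   # Alice publishes g^a mod p
B = pow(g, b, p)   # Bob publishes g^b mod p

# Each side combines the other's public value with its own private key.
alice_secret = pow(B, a, p)   # (g^b)^a mod p
bob_secret = pow(A, b, p)     # (g^a)^b mod p
print(alice_secret == bob_secret)  # → True — both derive the same key
```

An eavesdropper sees only p, g, A, and B; recovering the shared key from those requires solving the discrete logarithm problem, which is believed intractable for large groups.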
Hashing Insertion
A hash function is an algorithm that maps data of arbitrary size to a fixed-length hash value, converting complex input into a compact digest.
The hash function generates the hash value from blocks of input data, while a hashing algorithm specifies how the hash function is used: how the message is broken up and how the pieces are brought back together.
Three common hash algorithms are MD5, SHA-1, and SHA-256; of these, only the SHA-2 family (e.g., SHA-256) is still considered secure.
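Python's standard `hashlib` demonstrates the fixed-length property directly: any input, however large, yields a digest of the same size, and a one-character change produces a completely different value.

```python
import hashlib

h1 = hashlib.sha256(b"hello").hexdigest()
h2 = hashlib.sha256(b"hello!").hexdigest()

print(len(h1), len(h2))  # → 64 64 (SHA-256 is always 256 bits = 64 hex chars)
print(h1 == h2)          # → False — a tiny input change flips the digest
```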
Kruskal’s Algorithm
Kruskal's algorithm is a greedy algorithm used to find the minimum spanning tree of a weighted undirected graph. The minimum spanning tree of a graph is a subgraph containing all the original graph's vertices and connecting them with the minimum possible total edge weight.
Kruskal’s algorithm sorts all the edges in increasing order of their edge weights and keeps adding nodes to the tree only if the chosen edge does not form any cycle. Also, it picks the edge with a minimum cost at first and the edge with a maximum cost at last. Hence, you can say that the Kruskal algorithm makes a locally optimal choice, intending to find the global optimal solution. That is why it is called a Greedy Algorithm.
Kruskal's algorithm has a time complexity of O(E log E), where E is the number of edges in the graph. The algorithm is widely used in network design and computer networks, and it is also useful in applications such as clustering and image segmentation.
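A sketch using a simplified union-find structure to detect cycles. The example graph is made up, and a fuller implementation would also use union by rank:

```python
def kruskal(num_vertices, edges):
    """Minimum spanning tree of an undirected graph.

    edges: list of (weight, u, v); vertices are 0..num_vertices-1.
    Returns (total_weight, list_of_chosen_edges)."""
    parent = list(range(num_vertices))

    def find(x):                   # root of x's component, with compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, mst = 0, []
    for w, u, v in sorted(edges):  # greedy: cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:               # different components: no cycle formed
            parent[ru] = rv
            total += w
            mst.append((u, v, w))
    return total, mst

total, mst = kruskal(4, [(1, 0, 1), (4, 0, 2), (2, 1, 2), (3, 2, 3)])
print(total)  # → 6 (edges 0-1, 1-2, 2-3 chosen; 0-2 would close a cycle)
```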
Kadane’s Algorithm
Kadane's algorithm is a dynamic programming algorithm that is used to find the maximum sum subarray within a given array of integers. The subarray should contain at least one element, and the algorithm will return the sum of the subarray.
The algorithm works by keeping track of the maximum sum that can be achieved up to each position in the array. At each position, the algorithm determines whether the maximum sum up to the previous position plus the current element is greater than the current element itself. If it is, the maximum sum up to the current position is the maximum sum up to the previous position plus the current element. If it is not, the maximum sum up to the current position is simply the current element.
Kadane's algorithm has a time complexity of O(n), where n is the length of the input array. It is a widely used algorithm in data science, machine learning, and other areas where subarray sums need to be computed efficiently.
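The "extend or restart" decision described above comes down to one line of code:

```python
def max_subarray_sum(nums):
    """Kadane's algorithm: maximum sum over all contiguous subarrays, O(n)."""
    best = current = nums[0]
    for x in nums[1:]:
        current = max(x, current + x)  # extend the running subarray or restart
        best = max(best, current)
    return best

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # → 6, from [4, -1, 2, 1]
```

Initializing with the first element (rather than 0) keeps the answer correct even when every element is negative.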
Topological Sort Algorithm
Topological sorting is a technique used to order the vertices in a directed acyclic graph (DAG) such that every directed edge goes from an earlier-ordered vertex to a later-ordered vertex. In other words, it is an algorithm to find a linear ordering of the vertices of the graph such that for every directed edge (u, v), vertex u comes before vertex v in the ordering.
The algorithm for topological sorting can be implemented using depth-first search (DFS). The main idea is to traverse the graph using DFS, and maintain a stack of vertices. When DFS completes for a vertex, it is pushed onto the stack. The final ordering of the vertices will be the reverse of the order in which they were pushed onto the stack.
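The DFS-with-stack idea above, sketched in Python (the graph literal is an arbitrary example DAG):

```python
def topological_sort(graph):
    """Order the vertices of a DAG given as {vertex: [neighbors]}."""
    visited, stack = set(), []

    def dfs(v):
        visited.add(v)
        for nxt in graph.get(v, []):
            if nxt not in visited:
                dfs(nxt)
        stack.append(v)        # pushed only after all descendants finish

    for v in graph:
        if v not in visited:
            dfs(v)
    return stack[::-1]         # reverse finish order = topological order

order = topological_sort({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []})
print(order)  # every edge (u, v) has u before v, e.g. ['a', 'c', 'b', 'd']
```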
Doing Practice Problems
These first nine algorithms all gave you ways to solve archetypes of problems you might encounter as a developer. The reality, however, is that as a developer you are often going to be encountering algorithmic problems that are completely new. That’s why more important than memorizing any algorithm, is developing the ability to solve problems algorithmically.
Luckily, there is no shortage of websites to learn and practice on. Some of the best are:
Online Courses:
Logicmojo, Coding Ninjas, GeeksForGeeks, Educative etc.
Websites for Practice:
Leetcode, HackerRank, CodeChef, etc.
These are great environments to find difficult, yet fulfilling algorithmic problems and hone your skills.
It's important that you practice these algorithms before your next tech interview. They may seem easy and obvious, but sometimes they become tricky to solve in an actual interview. These algorithms are also used to test whether a software engineer understands how the underlying code works. Practicing these problems before an interview will not only make you familiar with them but also give you confidence in explaining the solution to the interviewer.
I hope this helps. All the best.
submitted by priya_singhh to codingbootcamp [link] [comments]

2023.03.27 03:35 waynerad ChatPDF tested by an actual astronomer

"Testing a random paper isn't a good idea since I've no idea what's in it. What would be a much better way to test this is... this blog ! More than 200 papers summarised by a source I can't have any higher level of confidence in, because it's me."
"For this test I'm going to take the five most recent papers on this blog and use the specialised ChatPDF."
He mentions times when ChatPDF got things perfectly right and when it was wrong. I'm going to pull out a few quotes of it getting things wrong, because that's more concerning, but read the whole post for a complete picture.
1) Smooth HI low column density outskirts in nearby galaxies
"Could you estimate the lowest value reached using their figure 2 ?"
"Based on Figure 2 of the paper, it appears that the lowest column density reached is around 5 x 10^18 cm^-2. However, this is just an estimate based on the graph and not a precise measurement."
"Which is seriously impressive stuff. It would have been better if it had volunteered this itself, but that it can do this at all is amazing."
"Except... it's a fluke. Repeating this in a later session it at first insisted the value was stated in the file itself, 10^19, which is confusing a limit with a measured value. Asking it to use the figures instead didn't work. Asking it for figure 2 specifically reverted to 10^19, which is just not right at all."
2) Discovery of an isolated dark dwarf galaxy in the nearby universe
"It initially couldn't give the distance to the cloud, saying that the authors didn't state this. At first I thought this was correct and they just hadn't mentioned it, so I asked for the distance based on its systemic velocity. Now it gave the correct value. But later I found that they do actually state this value directly, so this is no more impressive than doing a Ctrl+F for 'distance'."
3) Young, blue, and isolated stellar systems in the Virgo cluster. II. A new class of stellar system
"It told me that the possibility of being an early stage of galaxy formation was mentioned on page 2, giving a quote. But this was just flat-out wrong as this quote just doesn't appear anywhere in the paper at all. The same was true about the idea of being disrupted remnants, giving a quote and page reference that was a barefaced lie. Telling it it's made mistakes does have it correct itself, but this really shouldn't be necessary. Dear oh dear oh dear."
4) The turn-down of the Baryonic Tully-Fisher Relation and changing baryon fractions at low galaxy masses
"Asking it for the observational comparison with the BTFR from high mass galaxies also gave a perfect comparison. I asked it for the figure illustrating this and it correctly picked figure 5. Asking it if this could be reconciled with the CDM paradigm was more disappointing, as its answer amounted to a cautious 'maybe' rather than describing the author's claims. Pointing to figure 9, where the authors do demonstrate how this reconciliation may be possible, it insisted that they didn't."
5) Atomic gas dominates the baryonic mass of star-forming galaxies at z ≈ 1.3
"This is pretty perfect, references (which I'm not going to check) and symbols notwithstanding. I asked it for how many redshifts they considered and it correctly differentiated between the two new data points from the study plus the comparison sample at redshift zero. For this paper at least, the only fault seems to be the peculiar symbols. Its factual accuracy was in this case fine."
"The problem is that accuracy and usefulness do not scale linearly with each other. If it produces accurate statements 70, 80, even 90% of the time, it's only useful 0% of the time. Why ? Because that failure rate is such that its claims always have to be checked, and one would be better off just reading the paper. You have no idea if it's just making stuff up or missing a vital point. Worse, it's dangerously coherent. If you're not already an expert in the field, it produces statements which sound fully convincing but are in fact just plain wrong. I'm glad it references the parts of the text it's getting its information from, but it frequently just invents entire quotes, and that's unacceptable."
submitted by waynerad to u/waynerad [link] [comments]

2023.03.27 03:34 mr_chub Don't humans learn via patterns and data from the world just like Chat GPT? Whats the difference?

GPT 4 had a fantastic response to this:
You raise an interesting point. Human understanding and learning are indeed based on recognizing patterns and processing information gained through experiences and education. In this sense, there is a similarity between how humans learn and how AI language models like me are trained.
However, there are key differences between human intelligence and artificial intelligence:
Consciousness and self-awareness: Humans possess a sense of self and are aware of their thoughts, emotions, and experiences. AI systems like me do not possess consciousness or self-awareness, as we are essentially algorithms that process and generate text based on patterns learned from data.
Adaptability and learning from experience: While AI models can learn from large datasets, our learning is limited by the data and the specific training methods used. In contrast, humans can learn and adapt from a wide range of experiences, even those they haven't encountered before. Human learning is also continuous, whereas AI models only learn during the training phase.
Emotional intelligence and empathy: Humans can empathize with others, understand emotions, and respond appropriately to a variety of social cues. AI models can mimic human-like responses, but we do not truly experience emotions or possess the ability to empathize.
Creativity and problem-solving: Humans can think creatively, generate new ideas, and find novel solutions to complex problems. AI models can produce creative-seeming outputs, but our "creativity" is limited to rearranging and combining existing ideas and patterns found in our training data.
General intelligence: Human intelligence is general, meaning we can apply our knowledge and skills to a wide range of tasks and domains. AI systems, on the other hand, are specialized and designed for specific tasks. While language models like me have some degree of versatility, we are still far from achieving the general intelligence that humans possess.
In summary, while there are parallels between human learning and AI training, crucial differences set human intelligence apart from artificial intelligence. AI systems like me are powerful tools, but we do not possess the same level of understanding, consciousness, or adaptability as humans.
submitted by mr_chub to ChatGPT [link] [comments]

2023.03.27 03:34 Adam-best Electrical Hip Muscle Stimulator

Want to feel confident again in your favorite dress? Ever feel like no matter how hard you train, your booty just doesn't seem to show results? The Electrical Hip Muscle Stimulator solves this problem for you instantly!
This allows you to train your booty and lower body effectively from the comfort of your own home. It uses electrical muscle stimulation to promote muscle movement, scientifically proven to tighten your muscles to help achieve that perfect figure.



Package Includes:
submitted by Adam-best to McrOne [link] [comments]

2023.03.27 03:34 RetroIsBack [MA] How to Handle Warning

We have all had difficulty on a two-function team working on a specific project in a very dysfunctional company. I have been warned that I have had difficulty communicating with the other team, and that if this is not corrected, there will be disciplinary action. Moreover, this should be corrected "without further communications".
So there were difficult communications last week - nothing has changed. When I tried to go over the agenda of a meeting, a member of the other team deliberately spoke right over me. Then it happened a second time. Then a third. Then finally a fourth time. Similar problems have occurred over the last 6 months with much of my input. I have let them go unreported on the better-to-be-effective-than-right principle, but now I have a warning.
There were witnesses who approached me after the meeting who were a bit astounded by the other team member's behavior.
All of this is occurring in a dysfunctional environment. All projects, not just the one I am involved with, are a year late. Oversight of the other team was removed from my boss's boss 2 months ago. The project manager on my project was removed 3 weeks ago. Last week all project managers were taken away from my boss and assigned to another department.
Given the dysfunctional environment and that I am sure that my boss is building a case against me:
Should I file a written complaint against the team member with the time, date, witnesses? Or is this "further communications".
Of course I am searching for a new job, but my age is going to make things more difficult.
submitted by RetroIsBack to AskHR [link] [comments]

2023.03.27 03:33 Help-mee-pls Am I (23f) the AH for my boyfriend (24m) cutting off his female best friend?

My boyfriend (24m) and I (23f) have been together for almost five years. Since the beginning of our relationship his female best friend has always been around (24f). We can call her Abby.
She was initially always very nice to me and we got on well. I'm not the kind of person who does confrontation when things go wrong, and I was quite naive in the beginning. However, there would be little moments which would make me feel uneasy at times, but I always pushed them to the back of my mind since it was never overwhelmingly negative. There were moments where she would talk about how everyone at school thought she and my boyfriend were together since "they are so close". (They went to the same school, which is how they met.) She would also talk about my boyfriend's ex-girlfriend in front of me and make jokes about how "crazy" she is.
One day a fake account started messaging me on Instagram about things they had done with my boyfriend, and that my boyfriend is only with me because "I'm good for the family". And at the beginning of my relationship, Abby actually told me that my boyfriend told her I'm the kind of girl that is perfect for his mum. My boyfriend knew about the fake account but shrugged it off, and I didn't mention any suspicions; I too eventually shrugged it off.
After some time the unease started to grow on me. He attended a dinner with her and another friend of theirs without inviting me, which I know isn’t really a big issue, but I was also friends with her, so I thought it was strange that neither of them invited me. We had been dating a long time, perhaps over a year and a half, when I finally felt some confidence to speak to my boyfriend about the uneasiness.
I told him I don’t feel comfortable with him going without me and he got really mad at me and said he won’t go but he made me feel guilty for what I felt. I realised that if he didn’t think it was appropriate himself he wouldn’t have gone so why should I stop him? So I told him he should go and he did, knowing how I felt about it. Abby is very touchy feely with him so that was at the back of my mind and again, I am very non confrontational so I didn’t do anything about it which is also my fault. What hurt me most is how my boyfriend reacted when I expressed my concerns and still went.
Fast forward months later, we had another argument about it because I tried to express my concerns about the friendship again. This time he got so mad at me that he messaged Abby telling her he’s cutting her off because of me, but I never asked him to cut her off at all. I tried to fix it by talking to Abby, but she said, “listen, (bf) tells me everything about you and we are best friends and there’s nothing you can do about it.” I was confused by that statement, but I was still so nice and told her I would fix everything and I apologised. Since then she never spoke to me, and my boyfriend told me she was cut off, although I never asked for that.
A few months later my family and I caught covid and we lost my father. I am the eldest of three siblings (youngest only 6) So I had a lot of responsibilities and I feel like I didn’t get to grieve properly With what followed.
My boyfriend and I were logged into each other’s IG accounts. One day shortly after my father’s death I saw messages from a woman to my boyfriend which were quite affectionate. I met him to talk about it in person. He put his head low and didn’t say anything; he just had his head in his lap as if he didn’t know what to do, but I still comforted him and politely asked to see his phone for the first time in our relationship. I didn’t want to ever be that person, but I had recently lost my father and I had the mindset that nothing could make me feel worse than that feeling.
My heart sank when I saw the contents of his phone. Since the beginning of the relationship he had told me that sending red hearts ❤️ to the opposite gender means more than friendship and is very intimate. This came up because I once commented 😍 on a male’s artwork on Instagram (which was of a female render). Since then I never commented any hearts on any male page.
I saw lots of messages between him and girls he was friends with which were quite affectionate and in fact included these hearts he said I could not send to males. The most hurtful messages were between him and Abby. I went back to the night that everything blew up and he “cut her off”. He had in fact sent the message explaining how he was cutting her off because of me, but instantly said to her “she’s going on and on crying to me about us”, even though I was upset because of his reaction and how he got raging mad when I tried to communicate my feelings. They were saying very horrible things about me, including that I’m ungrateful, and Abby told him he should leave me. He replied “Yeah I’ve given her enough chances”, which hurt me to my core, because I never did anything to hurt him. I don’t have any male friends and I’m very conservative. He had access to my social media from the beginning, even before I saw any of his, because he had trust issues from his past relationship. They continued talking to each other long after that, sharing every detail of our relationship, including any arguments we had. And he had plenty of hearts ❤️ there too, which only bothered me because he said it means more than friends.
I was devastated and was ready to leave. I tried walking away and he physically blocked my path to walk and wouldn’t let go. I went back to the car and he physically begged me not to leave him. I told him I will give him one more chance but have still since then been let down by finding other messages to girls.
He said he did all this because he “had a moment of weakness” and was going through a lot when I lost my father. Since then lots of time has passed but I can’t seem to move forward and trust him. He still gets mad when I try to talk about it and I feel very alone. One time we had an argument and he kept repeating how he “cut people off for me”, which makes me feel terrible. I just want to move past it and stop feeling hurt about what he did to me. He says he was there for me when I lost my dad but I felt so lonely with everything he did to me at that time too.
Am I the AH and do you have any tips to get through this? Thank you and sorry if my English is bad.
submitted by Help-mee-pls to relationship_advice [link] [comments]

2023.03.27 03:32 hackinthebochs On Large Language Models and Understanding

Large language models (LLMs) have received an increasing amount of attention from all corners. We are on the cusp of a revolution in computing, one that promises to democratize technology in ways few would have predicted just a few years ago. Despite the transformative nature of this technology, we know almost nothing about how they work. They also bring to the fore obscure philosophical questions, such as: can computational systems understand? At what point do they become sentient and become moral patients? The ongoing discussion surrounding LLMs and their relationship to AGI has left much to be desired. Many dismissive comments downplay the relevance of LLMs to these thorny philosophical issues. But this technology deserves careful analysis and argument, not dismissive sneers. This is my attempt at moving the discussion forward.
To motivate an in-depth analysis of LLMs, I will briefly respond to some very common dismissive criticisms of autoregressive prediction models and show why they fail to demonstrate the irrelevance of this framework to the deep philosophical issues in the field of AI. I will then consider the issue of whether this class of models can be said to understand, and then discuss some of the implications of LLMs for human society.
"It's just matrix multiplication; it's just predicting the next token"
These reductive descriptions do not fully describe or characterize the space of behavior of these models, and so such descriptions cannot be used to dismiss the presence of high-level properties such as understanding or sentience.
It is a common fallacy to deduce the absence of high-level properties from a reductive view of a system's behavior. Being "inside" the system gives people far too much confidence that they know exactly what's going on. But low-level knowledge of a system without sufficient holistic knowledge leads to bad intuitions and bad conclusions. Searle's Chinese room and Leibniz's mill thought experiments are past examples of this. Citing the low-level computational structure of LLMs is just a modern iteration. That LLMs consist of various matrix multiplications can no more tell us they aren't conscious than our neurons tell us we're not conscious.
The key idea people miss is that the massive computation involved in training these systems begets new behavioral patterns that weren't enumerated by the initial program statements. The behavior is not just a product of the computational structure specified in the source code, but an emergent dynamic that is unpredictable from an analysis of the initial rules. It is a common mistake to dismiss this emergent part of a system as carrying no informative or meaningful content. Just bracketing the model parameters as transparent and explanatorily insignificant is to miss a large part of the substance of the system.
Another common argument against the significance of LLMs is that they are just "stochastic parrots", i.e. regurgitating the training data in some form, perhaps with some trivial transformations applied. But it is a mistake to think that an LLM's generating ability is constrained to simple transformations of the data it is trained on. Regurgitating data generally is not a good way to reduce the training loss, not when training doesn't involve multiple full passes over the training data. I don't know the current stats, but the initial GPT-3 training run got through less than half of a complete iteration of its massive training data.[1]
So with pure regurgitation not available, what it has to do is encode the data in such a way that makes predictions possible, i.e. predictive coding. This means modelling the data in a way that captures meaningful relationships among tokens so that prediction is a tractable computational problem. That is, the next word is sufficiently specified by features of the context and the accrued knowledge of how words, phrases, and concepts typically relate in the training corpus. LLMs discover deterministic computational dynamics such that the statistical properties of text seen during training are satisfied by the unfolding of the computation. This is essentially a synthesis, i.e. semantic compression, of the information contained in the training corpus. But it is this style of synthesis that gives LLMs all their emergent capabilities. Innovation to some extent is just novel combinations of existing units. LLMs are good at this as their model of language and structure allows it to essentially iterate over the space of meaningful combinations of words, selecting points in meaning-space as determined by the context or prompt.
Why think LLMs have understanding at all
Given that LLMs have a semantic compression of the training data, I claim that LLMs "understand" to a significant degree in some contexts. The term understanding is one of those polysemous words for which precise definitions tend to leave out important variants. But we can't set aside these important debates because of an inability to make certain terms precise. Instead, what we can do is be clear about how we are using the term and move forward with analysis.
To that end, we can define understanding as the capacity to engage appropriately with some structure in appropriate contexts. This definition captures the broadly instrumental flavor of descriptions involving understanding. I will argue that there are structures in LLMs that engage with concepts in a manner that demonstrates understanding.
As an example for the sake of argument, consider the ability of ChatGPT to construct poems that satisfy a wide range of criteria. There are no shortage of examples[2][3]. To begin with, first notice that the set of valid poems sit along a manifold in high dimensional space. A manifold is a generalization of the kind of everyday surfaces we are familiar with; surfaces with potentially very complex structure but that look "tame" or "flat" when you zoom in close enough. This tameness is important because it allows you to move from one point on the manifold to another without losing the property of the manifold in between.
Despite the tameness property, there generally is no simple function that can decide whether some point is on a manifold. Our poem-manifold is one such complex structure: there is no simple procedure to determine whether a given string of text is a valid poem. It follows that points on the poem-manifold are mostly not simple combinations of other points on the manifold (given two poems, interpolating between them will not generally produce a poem). Further, we can take it as a given that the number of points on the manifold far surpasses the examples of poems seen during training. Thus, when prompted to construct a poem following arbitrary criteria, we can expect the target region of the manifold to be largely unrepresented in the training data.
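The manifold intuition can be made concrete with a toy stand-in: the unit circle is a simple one-dimensional manifold in the plane, and linearly interpolating between two points on it leaves the manifold, just as naively blending two poems need not yield a poem. (The circle here is purely illustrative, not a claim about the actual geometry of LLM representations.)

```python
def on_unit_circle(p, tol=1e-9):
    """Check whether a 2-D point lies on the unit circle (a simple 1-D manifold)."""
    return abs(p[0] ** 2 + p[1] ** 2 - 1.0) < tol

# Two valid points on the manifold.
a = (1.0, 0.0)
b = (0.0, 1.0)

# Their linear interpolation (the midpoint) leaves the manifold,
# just as naively averaging two poems need not produce a poem.
mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

print(on_unit_circle(a))    # True
print(on_unit_circle(b))    # True
print(on_unit_circle(mid))  # False: the midpoint has norm of about 0.707
```

Moving along the circle itself (rather than through the interior) is the analogue of what a model with a good internal representation of the manifold can do.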
We want to characterize ChatGPT's impressive ability to construct poems. We can rule out simple combinations of poems previously seen. The fact that ChatGPT constructs passable poetry given arbitrary constraints implies that it can find unseen regions of the poem-manifold in accordance with the required constraints. This is straightforwardly an indication of generalizing from samples of poetry to a general concept of poetry. But still, some generalizations are better than others, and neural networks have a habit of finding degenerate solutions to optimization problems. However, the quality and breadth of its poetry given widely divergent criteria is an indication of whether the generalization captures our concept of poetry sufficiently well. From the many examples I have seen, I judge its general concept of poetry to model the human concept well.
So we can conclude that ChatGPT contains some structure that well models the human concept of poetry. Further, it engages meaningfully with this model in appropriate contexts as demonstrated by its ability to construct passable poems when prompted with widely divergent constraints. This satisfies the given definition of understanding.
The previous discussion is a single case of a more general issue studied in compositional semantics. There are an infinite number of valid sentences in a language that can be generated or understood by a finite substrate. It follows that there must be compositional semantics that determine the meaning of these sentences. That is, the meaning of the sentence must be a function of the meanings of the individual terms in the sentence. The grammar that captures valid sentences and the mapping from grammatical structure to semantics is somehow captured in the finite substrate. This grammar-semantics mechanism is the source of language competence and must exist in any system that displays competence with language. Yet, many resist the move from having a grammar-semantics mechanism to having the capacity to understand language. This is despite demonstrating linguistic competence in an expansive range of examples.
Why is it that people resist the claim that LLMs understand even when they respond competently to broad tests of knowledge and common sense? Why is the charge of mere simulation of intelligence so widespread? What is supposedly missing from the system that diminishes it to mere simulation? I believe the unstated premise of such arguments is that most people see understanding as a property of being, that is, autonomous existence. The computer system implementing the LLM, a collection of disparate units without a unified existence, is (the argument goes) not the proper target of the property of understanding. This is a short step from the claim that understanding is a property of sentient creatures. This latter claim finds much support in the historical debate surrounding artificial intelligence, most prominently expressed by Searle's Chinese room thought experiment.
The problem with the Chinese room at its core is the problem of attribution. We want to attribute properties like sentience or understanding to the "things" we are familiar with, and the only sufficient thing in the room is the man. But this intuition is misleading. The question to ask is what is responding when prompts are sent to the room. The responses are being generated by the algorithm reified into a causally efficacious process. Essentially, the reified algorithm implements a set of object-properties without objecthood. But a lack of objecthood has no consequences for the capacities or behaviors of the reified algorithm. Instead, the information dynamics entailed by the structure and function of the reified algorithm entails a conceptual unity (as opposed to a physical unity of properties affixed to an object). This conceptual unity is a virtual center-of-gravity onto which prompts are directed and from which responses are generated. This virtual objecthood then serves as the surrogate for attributions of understanding and such. It's so hard for people to see this as a live option because our cognitive makeup is such that we reason based on concrete, discrete entities. Considering extant properties without concrete entities to carry them is just an alien notion to most. But once we free ourselves of this unjustified constraint, we can see the possibilities that this notion of virtual objecthood grants. We can begin to make sense of such ideas as genuine understanding in purely computational artifacts.
Responding to some more objections to LLM understanding
A common argument against LLM understanding is that their failure modes are strange, so much so that we can't imagine an entity that genuinely models the world while having these kinds of failure modes. This argument rests on an unstated premise that the capacities that ground world modeling are different in kind from the capacities that ground token prediction. Thus when an LLM fails to accurately model and merely resorts to (badly) predicting the next token in a specific case, this demonstrates that they do not have the capacity for world modeling in any case. I will show the error in this argument by undermining the claim of a categorical difference between world modeling and token prediction. Specifically, I will argue that token prediction and world modeling are on a spectrum, and that token prediction converges toward modeling as the quality of prediction increases.
To start, let's get clear on what it means to be a model. A model is some structure in which features of that structure correspond to features of some target system. In other words, a model is a kind of analogy: operations or transformations on the model can act as a stand in for operations or transformations on the target system. Modeling is critical to understanding because having a model--having an analogous structure embedded in your causal or cognitive dynamic--allows your behavior to maximally utilize a target system in achieving your objectives. Without such a model one cannot accurately predict the state of the external system while evaluating alternate actions and so one's behavior must be sub-optimal.
LLMs are, in the most reductive sense, processes that leverage the current context to predict the next token. But there is much more to be said about LLMs and how they work. LLMs can be viewed as markov processes, assigning probabilities to each word given the set of words in the current context. But this perspective has many limitations. One limitation is that LLMs are not intrinsically probabilistic. LLMs discover deterministic computational circuits such that the statistical properties of text seen during training are satisfied by the unfolding of the computation. We use LLMs to model a probability distribution over words, but this is an interpretation.
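The point that LLMs are not intrinsically probabilistic can be sketched in a few lines: the forward pass is a deterministic function from context to scores, and the probability distribution appears only in how we choose to decode those scores. The toy `next_token_logits` function below is a hypothetical stand-in for a trained network, not a real model.

```python
import math
import random

def softmax(logits):
    """Turn deterministic scores into a probability distribution."""
    m = max(logits.values())
    exps = {w: math.exp(v - m) for w, v in logits.items()}
    z = sum(exps.values())
    return {w: e / z for w, e in exps.items()}

def next_token_logits(context):
    """Hypothetical stand-in for a network's forward pass: the same
    context always produces the same scores. Nothing stochastic here."""
    return {"cat": 2.0, "dog": 1.0, "car": -1.0}

logits = next_token_logits(("the", "quick"))
probs = softmax(logits)

# The probabilistic reading lives entirely in the decoding step:
greedy = max(probs, key=probs.get)  # deterministic decoding, no randomness
random.seed(0)
sampled = random.choices(list(probs), weights=list(probs.values()))[0]

print(greedy)  # "cat": the highest-scoring token
```

Greedy decoding never consults a random number generator at all, which is one way to see that the "distribution over words" is an interpretation layered on top of a deterministic computation.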
LLMs discover and record discrete associations between relevant features of the context. These features are then reused throughout the network as they are found to be relevant for prediction. These discrete associations are important because they factor into the generalizability of LLMs. The alternate extreme is simply treating the context as a single unit, an N-word tuple or a single string, and then counting occurrences of each subsequent word given this prefix. Such a simple algorithm lacks any insight into the internal structure of the context, and forgoes an ability to generalize to a different context that might share relevant internal features. LLMs learn the relevant internal structure and exploit it to generalize to novel contexts. This is the content of the self-attention matrix. Prediction, then, is constrained by these learned features; the more features learned, the more constraints are placed on the continuation, and the better the prediction.
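The "alternate extreme" described above can be sketched directly: a lookup table keyed on the whole prefix predicts well on exactly the contexts it has seen, and fails on any context that differs in even one word, because it has no view of internal structure. (A toy illustration of the contrast, not a description of how any real model is built.)

```python
from collections import Counter, defaultdict

corpus = "the cat sat . the dog sat . the cat sat .".split()

# Treat the whole two-word context as an opaque key and count continuations.
counts = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    counts[(w1, w2)][w3] += 1

def predict(prefix):
    """Most frequent continuation for an exact prefix; None if unseen."""
    if prefix not in counts:
        return None  # no internal structure, so no generalization
    return counts[prefix].most_common(1)[0][0]

print(predict(("the", "cat")))  # "sat": this exact prefix was seen
print(predict(("a", "cat")))    # None: one changed word and lookup fails
```

A model with learned internal features could notice that "a cat" and "the cat" share the relevant feature (a noun phrase about a cat) and generalize; the opaque-key counter cannot.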
The remaining question is whether this prediction framework can develop accurate models of the world given sufficient training data. We know that Transformers are universal approximators of sequence-to-sequence functions[4], and so any structure that can be encoded into a sequence-to-sequence map can be modeled by Transformer layers. As it turns out, any relational or quantitative data can be encoded in sequences of tokens. Natural language and digital representations are two powerful examples of such encodings. It follows that precise modeling is the consequence of a Transformer style prediction framework and large amounts of training data. The peculiar failure modes of LLMs, namely hallucinations and absurd mistakes, are due to the modeling framework degrading to underdetermined predictions because of insufficient data.
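The claim that any relational or quantitative data can be encoded in sequences of tokens is easy to illustrate: a relational table flattens into a token stream that a sequence model could, in principle, be trained to continue. The encoding scheme below (subject, relation, object, separator) is one arbitrary choice among many.

```python
# A small relational table as (subject, relation, object) triples.
rows = [
    ("paris", "capital_of", "france"),
    ("berlin", "capital_of", "germany"),
]

def encode(rows):
    """Flatten relational records into a single token sequence."""
    tokens = []
    for subj, rel, obj in rows:
        tokens += [subj, rel, obj, "<sep>"]
    return tokens

seq = encode(rows)
print(seq[:4])  # ['paris', 'capital_of', 'france', '<sep>']
```

Since Transformers can approximate sequence-to-sequence functions, anything expressible in such an encoding is, in principle, within their modeling reach given enough data.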
What this discussion demonstrates is that prediction and modeling are not categorically distinct capacities in LLMs, but exist on a continuum. So we cannot conclude that LLMs globally lack understanding given the many examples of unintuitive failures. These failures simply represent the model responding from different points along the prediction-modeling spectrum.
LLMs fail the most basic common sense tests. More disastrously, it fails to learn.
This is a common problem in how we evaluate these LLMs. We judge these models against the behavior and capacities of human agents and then dismiss them when they fail to replicate some trait that humans exhibit. But this is a mistake. The evolutionary history of humans is vastly different from the training regime of LLMs, so we should expect behaviors and capacities that diverge due to this divergent history. People often point to the fact that LLMs answer confidently despite being way off base. But this is due to the training regime that rewards guesses and punishes displays of incredulity. The training regime has serious implications for the behavior of the model that are orthogonal to questions of intelligence and understanding. We must evaluate them on their own terms.
Regarding learning specifically, this seems to be an orthogonal issue to intelligence or understanding. Besides, there's nothing about active learning that is in principle out of the reach of some descendant of these models. It's just that the current architectures do not support it.
LLMs take thousands of gigabytes of text and millions of hours of compute to talk like a mediocre college student
I'm not sure this argument really holds water when comparing apples to apples. Yes, LLMs take an absurd amount of data and compute to develop a passable competence in conversation. A big reason for this is that Transformers are general purpose circuit builders. The lack of strong inductive bias has the cost of requiring a huge amount of compute and data to discover useful information dynamics. But the human has a blueprint for a strong inductive bias that begets competence with only a few years of training. But when you include the billion years of "compute" that went into discovering the inductive biases encoded in our DNA, it's not clear at all which one is more sample efficient. Besides, this goes back to inappropriate expectations derived from our human experience. LLMs should be judged on their own merits.
Large language models are transformative to human society
It's becoming increasingly clear to me that the distinctive trait of humans that underpins our unique abilities over other species is our ability to wield information like a tool. Of course, information is infused all through biology. But what sets us apart is that we have a command over information that allows us to intentionally deploy it in service to our goals. Further, this command is cumulative and seemingly unbounded.
What does it mean to wield information? In other words, what is the relevant space of operations on information that underlies the capacities that distinguish humans from other animals? To start, let's define information as configuration with an associated context. This is an uncommon definition for information, but it is useful because it makes explicit the essential role of context in the concept of information. Information without its proper context is impotent; it loses its ability to pick out the intended content, undermining its role in communication or action initiation. Information without context lacks its essential function, thus context is essential to the concept.
The value of information is that it provides a record of events or state such that the events or state can have relevance far removed in space and time from their source. A record of the outcome of some process allows the limitless dissemination of the outcome and with it the initiation of appropriate downstream effects. Humans wield information by selectively capturing and deploying it in accordance with our needs. For example, we recognize the value of, say, sharp rocks, then copy and share the method for producing such rocks.
But a human's command of information isn't just a matter of learning and deploying it, we also have a unique ability to intentionally create it. At its most basic, information is created as the result of an iterative search process consisting of (1) variation of some substrate and (2) testing for suitability according to some criteria. Natural processes under the right context can engage in this sort of search process that begets new information. Evolution through natural selection being the definitive example.
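The two-step loop described above (vary a substrate, test for suitability against a criterion) is enough to create information from scratch. Here is a minimal hill-climbing sketch, with an arbitrary target string standing in for the suitability criterion:

```python
import random

TARGET = "information"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    """Suitability test: count positions that match the criterion."""
    return sum(x == y for x, y in zip(s, TARGET))

def vary(s):
    """Variation step: mutate one randomly chosen character."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

random.seed(42)
candidate = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(candidate) < len(TARGET):
    mutant = vary(candidate)
    if fitness(mutant) >= fitness(candidate):  # keep what passes the test
        candidate = mutant

print(candidate)  # the loop only exits once the target string is reached
```

Starting from a random string, repeated variation plus selection reliably produces the target: information that was nowhere in the initial state. Natural selection runs the same loop with survival as the test.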
Aside from natural processes, we can also understand computational processes as the other canonical example of information-creating processes. But computational processes are distinctive among natural processes: they can be defined by their ability to stand in an analogical relationship to some external process. The result of the computational process then picks out the same information as the target process related by way of analogy. Thus computations can also provide relevance far removed in space and time from their analogically related process. Furthermore, the analogical target doesn't even have to exist; the command of computation allows one to peer into future or counterfactual states.
Thus we see the full command of information and computation is a superpower to an organism: it affords a connection to distant places and times, the future, as well as what isn't actual but merely possible. The human mind is thus a very special kind of computer. Abstract thought renders access to these modes of processing almost as effortlessly as we observe what is right in front of us. The mind is a marvelous mechanism, allowing on-demand construction of computational contexts in service to higher-order goals. The power of the mind is in wielding these computational artifacts to shape the world in our image.
But we are no longer the only autonomous entities with command over information. The history of computing is one of offloading an increasing amount of essential computational artifacts to autonomous systems. Computations are analogical processes unconstrained by the limitations of real physical processes. Thus we prefer to deploy autonomous computational processes wherever available. Still, such systems were limited by program construction and context. Each process being replaced by a program required a full understanding of the system being replaced such that the dynamic could be completely specified in the program code.
LLMs mark the beginning of a new revolution in autonomous program deployment. No longer must the program code be specified in advance of deployment. The program circuit is dynamically constructed by the LLM as it integrates the prompt with its internal representation of the world. The need for expertise with a system to interface with it is obviated; competence with natural language is enough. This has the potential to democratize computational power like nothing else that came before. It also means that computational expertise becomes nearly worthless. Much like the human computer prior to the advent of the electronic variety, the concept of programmer as a profession is coming to an end.
Aside from the implications for the profession of programming, there are serious philosophical implications of this view of LLMs that warrant exploration. The question of cognition in LLMs being chief among them. I talked about the human superpower being our command of information and computation. But the previous discussion shows real parallels between human cognition (understood as dynamic computations implemented by minds) and the power of LLMs. LLMs show sparse activations in generating output from a prompt, which can be understood as dynamically activating sub-networks based on context. A further emergent property is in-context learning, recognizing unique patterns in the input context and actively deploying that pattern during generation. This is, at the very least, the beginnings of on-demand construction of computational contexts.
Limitations of LLMs
To be sure, there are many limitations of current LLM architectures that keep them from approaching higher-order cognitive abilities such as planning and self-monitoring. The main limitation has two aspects: a fixed computational window and purely feed-forward computation. The fixed computational window limits the amount of resources the model can deploy to solve a given generation task. Once the computational limit is reached, the next-word prediction is taken as-is. This is part of the reason we see odd failure modes with these models: there is no graceful degradation, so partially complete predictions may seem very alien.
The other limitation of only feed-forward computations means the model has limited ability to monitor its generation for quality and is incapable of any kind of search over the space of candidate generations. To be sure, LLMs do sometimes show limited "metacognitive" ability, particularly when explicitly prompted for it.[5] But it is certainly limited compared to what is possible if the architecture had proper feedback connections.
The terrifying thing is that LLMs are just about the dumbest thing you can do with Transformers and they perform far beyond anyone's expectations. When people imagine AGI, they probably imagine some super complex, intricately arranged collection of many heterogeneous subsystems backed by decades of computer science and mathematical theory. But LLMs have completely demolished the idea that complex architectures are required for complex intelligent-seeming behavior. If LLMs are just about the dumbest thing we can do with Transformers, it is plausible that slightly less dumb architectures will reach AGI.
[1] (.44 epochs elapsed for Common Crawl)
submitted by hackinthebochs to naturalism [link] [comments]

2023.03.27 03:31 Metapia OpenNMS Paper on Natural Monetary Systems for Humanistic Monetarism

OpenNMS Paper on Natural Monetary Systems for Humanistic Monetarism

Humanistic monetarism is a currency theory and policy centered on human needs and interests. It criticizes the excessive emphasis on and abuse of currency by neoliberalism and monetarism, and advocates the establishment of a natural currency system that conforms to the laws of human social development. The natural currency system refers to a currency system that is not manipulated by the government and the central bank, and does not depend on metals or other scarce resources, but is produced and circulated according to the actual needs of social and economic activities. This article will explain the theoretical basis, realization methods, and advantages of the natural currency system of humanistic monetarism from the following three aspects.

First, the theoretical basis of the natural currency system of humanistic monetarism. Humanistic monetarism believes that currency is a social convention, a symbol created by people to facilitate exchange. It has no fixed value, but changes with changes in social and economic activities. Therefore, currency should not be regarded as a scarce commodity, but as a service whose role is to facilitate exchange and distribution and meet people's diverse needs. Humanistic monetarism opposes linking currency to metals or other substances, arguing that this practice limits the supply and circulation of currency, causing economic depression and social injustice. Humanistic monetarism also opposes the monopoly and manipulation of currencies by the government and the central bank, believing that such practices lead to inflation or deflation, distort price signals and resource allocation, and damage the welfare of the people. Humanistic monetarism advocates the establishment of a natural currency system, that is, a currency system that is issued and used by market participants according to actual needs. This system is not controlled by any central authority, does not depend on any material basis, and is based entirely on credit and trust.

Second, how the natural currency system of humanistic monetarism can be realized. Humanistic monetarism holds that realizing a natural currency system requires a decentralized, open, transparent, and democratic mechanism for creating and circulating money, built with the help of modern information technology and network platforms. Specifically, this involves the following aspects:

(1) Issue diversified reciprocal currencies. A reciprocal currency is a non-legal-tender currency issued and accepted by individuals or organizations based on the goods or services they provide or need. It can circulate within a specific community or network, and can be exchanged for other reciprocal currencies or for fiat currency. Reciprocal currencies can help address social problems such as credit crunches, unemployment, and poverty, and can promote the diversification and sustainability of social and economic activity.
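The mutual-credit mechanism behind such reciprocal currencies (money created at the moment of exchange, with no central issuer and no scarce backing asset) can be illustrated with a toy ledger. This sketch is not from the article itself; the class name, credit limit, and amounts are purely illustrative:

```python
from collections import defaultdict

class MutualCreditLedger:
    """Toy mutual-credit system: units are created at the moment of
    exchange, so all balances always sum to zero."""

    def __init__(self, credit_limit=100):
        self.balances = defaultdict(int)
        self.credit_limit = credit_limit

    def pay(self, buyer, seller, amount):
        # The buyer's balance may go negative, up to the credit limit;
        # no central issuer mints the currency in advance.
        if self.balances[buyer] - amount < -self.credit_limit:
            raise ValueError("credit limit exceeded")
        self.balances[buyer] -= amount
        self.balances[seller] += amount

ledger = MutualCreditLedger()
ledger.pay("alice", "bob", 30)   # alice: -30, bob: +30, system total: 0
```

The zero-sum invariant is the key property: trust in counterparties (the credit limit) replaces a scarce commodity or central issuer as the basis of the currency.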

(2) Establish an encrypted digital currency based on blockchain technology. An encrypted digital currency is one that uses the principles of cryptography and blockchain
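The hash-linked block structure that such encrypted digital currencies rest on can be sketched minimally. This is an illustrative toy, not any production design; all field names are assumptions:

```python
import hashlib
import json
import time

def make_block(index, transactions, prev_hash):
    """Build a block whose hash commits to its contents and to the
    previous block's hash, forming a tamper-evident chain."""
    block = {
        "index": index,
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    # Canonical serialization (sorted keys) so the hash is well-defined.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block(0, [], "0" * 64)
block1 = make_block(1, [{"from": "alice", "to": "bob", "amount": 5}],
                    genesis["hash"])
```

Because each block embeds the previous block's hash, altering any past transaction changes every subsequent hash, which is what lets participants verify the ledger without trusting a central authority.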
submitted by Metapia to u/Metapia [link] [comments]

2023.03.27 03:31 thefollower How do I use Runesmiter's tunnel effectively?

I am a new Age of Sigmar player, and pretty new to wargaming in general. I've played about 5 games with Fyreslayers so far. Every time I use the Runesmiter's tunneling ability, I feel like I just burn 300-400 points and start the game at a detriment.
Given that most of the army has pretty limited movement, a deep strike seems like a key point. I see most lists use Runesmiters, I assume for that reason. I have played 2 games with them, and both times I lost a chunk of points on turn 1 for minimal return.
In one game, I took a Grimwrath Berzerker and deep struck behind their lines. I charged a Wurrgog Prophet and killed him. In total I lost a Grimwrath + Runesmiter for 240 points at the time, for a 170-point trade... maybe worth it? But I'm unsure. And I didn't get any casts off my Runesmiter.
In a different game, I deep struck 20 Vulkites with axes onto an objective against a Kragnos giants list. He could charge 3D6, so it felt like the whole map was jeopardized. He charged them on turn 1 with 2 units, roared so I couldn't counterstrike, and then cleared them off the board. I did get to fight on death, but dealt like 7 wounds or something for 440 points of models.
I am a complete novice and don't understand how to deploy or how I should deep strike, so the Runesmiter feels like a trap to me, but in concept it sounds good... Is the Runesmiter a noob trap, or how should I be using the tunneling to get the most value? Second question: can I hold the Runesmiter's deep strike until turn 2 or 3, or does it have to come out on turn 1? I'm not sure if that's a key part of understanding the strategy.
submitted by thefollower to Fyreslayers [link] [comments]

2023.03.27 03:30 SadAd3095 Welcome To The In-between Friend, Here's How To Survive: Part 2, The House

Now that was just in the case of the factory, on the rarer chance that you are sent to "The House" I cannot reach you, and something much darker is waiting for you. But I can help you out with the few notes I have of the place.

  1. When you are selected by the house, you will know because you have just woken up. You will not have any of your items on you except your phone, and a small totem will be in your pocket; remember these, as they will be needed to save your life in multiple scenarios
  2. THE GOAL OF THE HOUSE IS NOT TO ESCAPE, except under one circumstance; refer to rule 25 for that scenario.
  3. There will be someone on a bunk bed who is sleeping above you, her name is "Sylvia" she is a shadow entity who seems to have wolf ears, she has a scythe and tactical gear of sorts, she is very powerful and also your best friend, feel free to wake her up at any time, she can protect you, but the entities will be more vicious when she is around.
  6. When you leave your bedroom, with or without Sylvia, head to the kitchen. There you will find some food and some knives; you can take either the food or the knives
  7. Watch for Crawlers, they lurk above the ceiling and are trained to kill in any and every scenario, use your knife to kill them or ask Sylvia to, if she does not comply, it is a shapeshifter. THAT IS NOT SYLVIA. She will help the Crawler kill you.
  8. If confused about the place, think about the house as HARD MODE for the factory, some of the same rules apply, mostly the ones that don't require you to use your guns
  9. At Midnight, Sylvia will no longer be an ally for you, she will blankly roam around the house, and act very friendly if she sees you, but it is no longer Sylvia, her mind has been taken over by her emotionless counterpart "Kei"
  10. If you are CAUGHT by "Kei" do not fear for it is not the end, she still has to act like your friend somewhat, so she will ask to play a game. Agreeing to the game will mean you have to give her the crystal orb in your pocket, this will make her much more powerful, declining the game will make her much more vicious, you will respawn in the same bedroom you started in where Sylvia was, It will take Kei 5 seconds to realize what happened and 15 more to get to where you are, use the vents to hop from place to place much quicker.
  11. Keep in mind that this place is as big as a mansion, you have many places to run and even more to hide, but if you are cornered, pray that she only wants to play "The Game"
  12. When she mentions the game do not ask for the rules, because the only one is to survive however you can. If you do ask... Please, refer to rule 15.
  13. The game is hide and seek, you can easily move places but the vents will have amplified sound, so they will know the general area you are going to
  14. Kei is emotionless so she will not care about your cries of pain or your pleading, any emotion you show is grounds for her to become more vicious in her mind, you are annoying to her.
  15. Kei will explain the rules and it becomes 10 truths and 5 lies, the game will also change from hide-and-seek to a game only she can tell you about, you then become out of my reach and I am sorry.
  16. At 3AM Sylvia's body will have completely changed into Kei's, she will become less and less friendly slowly as she is obviously faking these emotions up until 3AM at that point in time, she's fed up with the game of cat and mouse and will do anything to kill you
  17. At 3:30AM the house will start flashing random colors. Red means she's getting hotter, or closer to you... Blue means she's getting colder, green means she is too far away for them to tell, and yellow means she has given up.
  18. If the lights ever stop flashing, Kei is right behind you. Pray, but not to your lord and savior, pray to Kei herself, she may find this amusing, as at least you are not using your annoyingly high-pitched voice.
  19. The lights only ever flash green because the house has expanded from a big home into an ACTUAL Mansion, the cameras can only see so much...
  20. Congratulations it is now 2:50AM and you now only have to worry about certain scenarios at certain times. Pray to Jesus at this time, Pray to Kei if you feel like it, she may change her behavior, positively or negatively is up to her, but praying to her will lean more towards positively. Failing to at least pray to Jesus will make this place Un-holy ground, any demon will be able to help Kei hunt you down.
  21. You are free to leave the mansion at any time from after 3-4 AM, but depending on what time you leave after 3AM... you will need to face an opponent
    1. If you leave at 3:01-3:15 you will face the one named "Sensei." He is the one who taught "Team Shadow" (the group that Sylvia and Kei belong to) how to fight, including ancient fighting techniques, although Kei likes to stick to distortion.
    2. If you leave at 3:16-3:29 you will fight Sylvia's brother... Mo. He is powerful, but not as powerful as Sylvia, and powerful in a different way than her; he is also much more disciplined, and paid the most attention in Sensei's class.
    3. YOU MAY NOT LEAVE AT 3:30-3:35 Doing so will alert Kei to your position, but it won't matter because you'll already be falling through an endless void.
    4. If you leave at 3:36-3:59 it will be the leader of Team Shadow... His name is Cole, though something will be off about him: his body will be shredded and scratched, badly bruised and in a lot of pain, because everyone on Team Shadow except Sensei hates him, including Sylvia and Kei. Despite this advantage, make no mistake, he will still be the toughest opponent you've faced in your life.
    5. In the event you do not leave before 4AM the doors will close and you will need to wait for another opportunity to escape, Kei will take note that you passed up the chance to leave and one of two things will happen: Possibility 1: She will turn back into Sylvia and try to trick you, despite this it is technically a mercy. Possibility 2: She will become even MORE vicious and faster, also the walls will move inward by two rooms from each direction, shortening the area needed to find you.
  23. Make sure to run until your legs give out, she will catch you in a few hours, when she catches up one of two things will happen, she will jump-scare and then kill you or you will be teleported into an arena where you must fight her, you are given the choice of a sword, a scythe, or magma boxing gloves (That don't burn your hands) Whichever one you pick will also teach you a different fighting style
    1. The boxing gloves are... Never mind you already know. And if you somehow don't then it isn't my frickin' problem
    2. If you pick the sword, you will know how to fence, among other sword fighting styles.
    3. If you pick the scythe, you will know how to fight like Kei does; be better at her art than she is.
  24. If you win you are free to leave. She will bow and let you go, make sure you haven't killed Kei. If you somehow do, the entities will rush in and kill you, and Sylvia will emerge from Kei's ashes to help finish you off as well.
  25. This will explain the rare scenario that Cole is there in the bed above you instead of Sylvia, the same rules will apply but the goal is now To Escape not survive.
    1. The house will turn into a labyrinth, and you will never have seen Sylvia as Sylvia; she is just Kei. She attacks you because she thinks you are on Cole's side. You can try to side with Kei to kill Cole, but then you betray Cole, and she might not even believe you; if she doesn't, then you have NO FRIENDS and you will most likely die.
    2. Death is only guaranteed in this scenario where the house turns into a labyrinth.
    3. The good thing is that it will only transform once Cole has woken up so you still have to follow rules as if it was a Sylvia scenario until you wake Cole up.
    4. The Ratio of Sylvia Scenario To Cole Scenario Is 98%-2% in the Sylvia Scenario's Favor
  26. If you escape with Cole or Defeat Kei, then you will be here, just like when you escape the factory, I will be allowed to let you go back into the real world.

I really really do apologize for this inconvenience but IT is making me do these things, now run along, you have a factory to escape, or a house to survive in. Good-luck. -Fellow Inmate
submitted by SadAd3095 to Ruleshorror [link] [comments]