Bleach Art

2018.01.18 18:34 Onlyhereforthelaughs Bleach Art

A place for bleach art of all kinds. Shirts, shoes, hoodies, jeans, bedsheets, whatever.

2023.03.27 03:35 danmanic Websites for finding spots for specific activities in your area?

I would like to find spots in my area designed for when or if I feel like taking part in something specific, like roller skating or paintball, just as examples. I tried the obvious approach of Google searching something like "blank near me," but I found the results to be quite vague and not very helpful. So I want to know if there happens to be a website or websites made to narrow my search down and give me definite results that feel like they were curated for me and what I am looking for specifically.
submitted by danmanic to website [link] [comments]

2023.03.27 03:34 throwawaynofap270323 Just deleted my collection

Well, I did it. Deleted it and emptied the trash can. Nearly 2000 pictures.
Praise be to God for giving me the strength. I attended mass today and I just knew as I sat there listening to the letter from the bishops on sexuality that I had to do this. My life has been almost at a standstill in a place I don't want to be since the end of 2020 (long story), and I have begged God for help while constantly straying from his path, keeping sinful materials a click away on my hard drive.
How do I feel now? Relieved. Yes, anxious because I felt so tempted while doing it, and uncertain. I think I'm not the only one to have developed a weird, unnatural attachment to a collection like that. But... I feel so much lighter. I listened to hymns to get me through it.
I don't know what the future holds. I truly don't. But now I've cut out one thing that led me astray, one thing that led me into temptation. I'm sure temptations will follow anyway, but at least tonight I did the right thing. Thanks be to God for helping and strengthening me.
submitted by throwawaynofap270323 to NoFapChristians [link] [comments]

2023.03.27 03:33 nugget-bae Support Agent Scam.

Thank you to this sub for making me aware of this scam. I am nearing 1k deliveries and have finally had a scammer try it on me. I read about it on here, but didn't really know what it was all about.
For new people or those who aren't in the know: the scam is to social-engineer your Dasher login information through the ruse of being a Support Agent who is helping you cancel an order. First the scammer places a small order (a drink or an inexpensive item). This is how they get your name, a way to call you, along with the name and address of the order. The guy I spoke with sounded very professional. He asked if he was speaking to (my name) and said that the order for (the restaurant) I was en route to was canceled. I didn't answer him. I just said, "What is this in regards to?" and each time I said that he gave a little more information to get me comfortable. The name of the customer. Then the address. I didn't fall for it, but I can see how someone else could. It's actually pretty slick. If you fall for it, they will collect your Dasher login information, lock you out, change your banking information, and steal your earnings.
I did the order and went to a fake address and then had support cancel it. I could have probably saved time and just unassigned, but it was slow and I wanted my pay.
Beware. Just know that if an order is canceled before you arrive at the restaurant, it will just disappear from your app. You will not be contacted or notified. If it's canceled after you have arrived, then you will receive a text about it.
submitted by nugget-bae to doordash [link] [comments]

2023.03.27 03:33 vocaliser Drain Flood and Resolution

I wanted to post about this in case drain newbies (like I was) weren't aware this could happen. On the eighth day out from my SMX there was very little fluid in the drain bulb all day. I optimistically thought that the drain was therefore ready to come out. Nope. That night I woke up at 2:30 a.m. in a pool of liquid inside my pajama top. Freak out time! I checked the situation in the bathroom mirror. The drain had not come out of me, so what the hell happened?
Called in to the surgeon's office, got the answering service, and the surgeon on call got back to me. The drain had gotten blocked with solids, probably little globs of fat, right at the top near where it exited my body. Thus the fluid that was supposed to be drained built up inside instead. It wasn't an external seroma. He said that it sometimes happens, not to worry, just clean the area, strip the drain,* and apply fresh gauze. I didn't have to go in to the hospital to have the drain insertion point checked. Whew.
*Drain stripping: If you see solids like tissue or fat in the drain tube, you should (with sanitized hands or alcohol wipes) stretch the tube gently to loosen it, being careful not to pull on the insertion point itself. I also gently roll it between my thumb and index finger. Bubbles are okay but don't let solids clog the tube.
Edit to clarify: The flood did not come from the surgical incision, which was fine. It came from the small incision made for the drain, which was sutured in.
submitted by vocaliser to breastcancer [link] [comments]

2023.03.27 03:32 hackinthebochs On Large Language Models and Understanding

Large language models (LLMs) have received an increasing amount of attention from all corners. We are on the cusp of a revolution in computing, one that promises to democratize technology in ways few would have predicted just a few years ago. Despite the transformative nature of this technology, we know almost nothing about how these models work. They also bring to the fore obscure philosophical questions: can computational systems understand? At what point do they become sentient and qualify as moral patients? The ongoing discussion surrounding LLMs and their relationship to AGI has left much to be desired. Many dismissive comments downplay the relevance of LLMs to these thorny philosophical issues. But this technology deserves careful analysis and argument, not dismissive sneers. This is my attempt at moving the discussion forward.
To motivate an in-depth analysis of LLMs, I will briefly respond to some very common dismissive criticisms of autoregressive prediction models and show why they fail to demonstrate the irrelevance of this framework to the deep philosophical issues in the field of AI. I will then consider the issue of whether this class of models can be said to understand, and then discuss some of the implications of LLMs for human society.
"It's just matrix multiplication; it's just predicting the next token"
These reductive descriptions do not fully describe or characterize the space of behavior of these models, and so such descriptions cannot be used to dismiss the presence of high-level properties such as understanding or sentience.
It is a common fallacy to deduce the absence of high-level properties from a reductive view of a system's behavior. Being "inside" the system gives people far too much confidence that they know exactly what's going on. But low-level knowledge of a system without sufficient holistic knowledge leads to bad intuitions and bad conclusions. Searle's Chinese room and Leibniz's mill thought experiments are past examples of this; citing the low-level computational structure of LLMs is just a modern iteration. That LLMs consist of various matrix multiplications can no more tell us they aren't conscious than the behavior of our neurons tells us we aren't.
The key idea people miss is that the massive computation involved in training these systems begets new behavioral patterns that weren't enumerated by the initial program statements. The behavior is not just a product of the computational structure specified in the source code, but an emergent dynamic that is unpredictable from an analysis of the initial rules. It is a common mistake to dismiss this emergent part of a system as carrying no informative or meaningful content. Simply bracketing the model parameters as transparent and explanatorily insignificant misses a large part of the substance of the system.
Another common argument against the significance of LLMs is that they are just "stochastic parrots", i.e. regurgitating the training data in some form, perhaps with some trivial transformations applied. But it is a mistake to think that an LLM's generative ability is constrained to simple transformations of the data it was trained on. Regurgitating the data is generally not a good way to reduce the training loss, not when training involves less than even one full pass over the training data. I don't know the current stats, but the initial GPT-3 training run got through less than half of a complete iteration of its massive training data.[1]
So with pure regurgitation unavailable, what the model has to do is encode the data in a way that makes prediction possible, i.e. predictive coding. This means modelling the data so as to capture meaningful relationships among tokens, making prediction a tractable computational problem. That is, the next word is sufficiently specified by features of the context plus the accrued knowledge of how words, phrases, and concepts typically relate in the training corpus. LLMs discover deterministic computational dynamics such that the statistical properties of text seen during training are satisfied by the unfolding of the computation. This is essentially a synthesis, i.e. a semantic compression, of the information contained in the training corpus. And it is this style of synthesis that gives LLMs their emergent capabilities. Innovation is, to some extent, just novel combinations of existing units. LLMs are good at this because their model of language and structure allows them to essentially iterate over the space of meaningful combinations of words, selecting points in meaning-space as determined by the context or prompt.
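The link between prediction and compression is Shannon's: a model that predicts the next symbol with probability p can code it in -log2(p) bits, so better prediction is better compression. A minimal sketch with an invented corpus and a crude unigram predictor (both purely illustrative):

```python
import math
from collections import Counter

# A predictor that assigns probability p to a symbol can code it in
# -log2(p) bits, so total code length measures prediction quality.
text = "the cat sat on the mat the cat sat"
symbols = text.split()

counts = Counter(symbols)
total = sum(counts.values())

def code_length_bits(seq):
    # Total bits needed under the unigram predictive model.
    return sum(-math.log2(counts[s] / total) for s in seq)

print(round(code_length_bits(symbols), 1))  # 19.8 bits for 9 words
```

A model that captures relationships between words, rather than raw frequencies, assigns higher probabilities to each continuation and thus shorter codes; that gap is the semantic compression described above.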
Why think LLMs have understanding at all
Given that LLMs have a semantic compression of the training data, I claim that LLMs "understand" to a significant degree in some contexts. The term understanding is one of those polysemous words for which precise definitions tend to leave out important variants. But we can't set aside these important debates because of an inability to make certain terms precise. Instead, what we can do is be clear about how we are using the term and move forward with analysis.
To that end, we can define understanding as the capacity to engage appropriately with some structure in appropriate contexts. This definition captures the broadly instrumental flavor of descriptions involving understanding. I will argue that there are structures in LLMs that engage with concepts in a manner that demonstrates understanding.
As an example for the sake of argument, consider the ability of ChatGPT to construct poems that satisfy a wide range of criteria. There is no shortage of examples[2][3]. To begin with, notice that the set of valid poems sits along a manifold in a high-dimensional space. A manifold is a generalization of the kind of everyday surfaces we are familiar with; surfaces with potentially very complex structure but that look "tame" or "flat" when you zoom in close enough. This tameness is important because it allows you to move from one point on the manifold to another without leaving the manifold in between.
Despite the tameness property, there generally is no simple function that can decide whether some point lies on a manifold. Our poem-manifold is one such complex structure: there is no simple procedure to determine whether a given string of text is a valid poem. It follows that points on the poem-manifold are mostly not simple combinations of other points on the manifold (given two poems, interpolating between them will not generally produce a poem). Further, we can take it as given that the number of points on the manifold far surpasses the number of example poems seen during training. Thus, when prompted to construct a poem following arbitrary criteria, we can expect the target region of the manifold to be largely unrepresented in the training data.
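The interpolation point can be made concrete with the simplest curved manifold, the unit circle; the poem-manifold is vastly more complex, but the failure mode is the same:

```python
import numpy as np

# Two points on the unit circle, a simple one-dimensional manifold in R^2.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# Linear interpolation between them leaves the manifold:
mid = 0.5 * (a + b)
print(np.linalg.norm(mid))  # ~0.707, not 1.0, so mid is off the circle
```

Staying on the manifold requires moving along it (here, along the arc), which is exactly the kind of structure a simple combination of known points does not provide.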
We want to characterize ChatGPT's impressive ability to construct poems. We can rule out simple combinations of poems previously seen. The fact that ChatGPT constructs passable poetry given arbitrary constraints implies that it can find unseen regions of the poem-manifold in accordance with the required constraints. This is straightforwardly an indication of generalizing from samples of poetry to a general concept of poetry. Still, some generalizations are better than others, and neural networks have a habit of finding degenerate solutions to optimization problems. However, the quality and breadth of its poetry given widely divergent criteria indicates whether the generalization captures our concept of poetry sufficiently well. From the many examples I have seen, I judge that its general concept of poetry models the human concept well.
So we can conclude that ChatGPT contains some structure that well models the human concept of poetry. Further, it engages meaningfully with this model in appropriate contexts as demonstrated by its ability to construct passable poems when prompted with widely divergent constraints. This satisfies the given definition of understanding.
The previous discussion is a single case of a more general issue studied in compositional semantics. There are an infinite number of valid sentences in a language that can be generated or understood by a finite substrate. It follows that there must be a compositional semantics that determines the meaning of these sentences. That is, the meaning of a sentence must be a function of the meanings of the individual terms in the sentence. The grammar that captures valid sentences and the mapping from grammatical structure to semantics is somehow captured in the finite substrate. This grammar-semantics mechanism is the source of language competence and must exist in any system that displays competence with language. Yet many resist the move from having a grammar-semantics mechanism to having the capacity to understand language, despite LLMs demonstrating linguistic competence across an expansive range of examples.
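The point that a finite mechanism can assign meanings to unboundedly many sentences can be sketched with a toy compositional semantics; the lexicon and single composition rule below are invented purely for illustration:

```python
# Toy compositional semantics: a finite lexicon plus one composition
# rule assigns a meaning (here, an integer) to infinitely many
# well-formed expressions of the form "X plus Y plus ...".
LEXICON = {"one": 1, "two": 2, "three": 3}

def meaning(expr):
    # The meaning of the whole is a function of the meanings of the parts.
    tokens = expr.split()
    if len(tokens) == 1:
        return LEXICON[tokens[0]]
    head, op, *rest = tokens
    assert op == "plus", "only 'X plus Y' composition in this toy grammar"
    return LEXICON[head] + meaning(" ".join(rest))

print(meaning("one plus two plus three"))  # 6
```

The finite substrate here is a three-word lexicon and one recursive rule, yet it determines the meaning of arbitrarily long sentences; that is the shape of the grammar-semantics mechanism in miniature.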
Why is it that people resist the claim that LLMs understand even when they respond competently to broad tests of knowledge and common sense? Why is the charge of mere simulation of intelligence so widespread? What is supposedly missing from the system that diminishes it to mere simulation? I believe the unstated premise of such arguments is that most people see understanding as a property of being, that is, autonomous existence. The computer system implementing the LLM, a collection of disparate units without a unified existence, is (the argument goes) not the proper target of the property of understanding. This is a short step from the claim that understanding is a property of sentient creatures. This latter claim finds much support in the historical debate surrounding artificial intelligence, most prominently expressed by Searle's Chinese room thought experiment.
The problem with the Chinese room at its core is the problem of attribution. We want to attribute properties like sentience or understanding to the "things" we are familiar with, and the only sufficient thing in the room is the man. But this intuition is misleading. The question to ask is what is responding when prompts are sent to the room. The responses are being generated by the algorithm reified into a causally efficacious process. Essentially, the reified algorithm implements a set of object-properties without objecthood. But a lack of objecthood has no consequences for the capacities or behaviors of the reified algorithm. Instead, the information dynamics entailed by the structure and function of the reified algorithm entails a conceptual unity (as opposed to a physical unity of properties affixed to an object). This conceptual unity is a virtual center-of-gravity onto which prompts are directed and from which responses are generated. This virtual objecthood then serves as the surrogate for attributions of understanding and such. It's so hard for people to see this as a live option because our cognitive makeup is such that we reason based on concrete, discrete entities. Considering extant properties without concrete entities to carry them is just an alien notion to most. But once we free ourselves of this unjustified constraint, we can see the possibilities that this notion of virtual objecthood grants. We can begin to make sense of such ideas as genuine understanding in purely computational artifacts.
Responding to some more objections to LLM understanding
A common argument against LLM understanding is that their failure modes are strange, so much so that we can't imagine an entity that genuinely models the world while having these kinds of failure modes. This argument rests on an unstated premise that the capacities that ground world modeling are different in kind from the capacities that ground token prediction. Thus when an LLM fails to accurately model and merely resorts to (badly) predicting the next token in a specific case, this supposedly demonstrates that it lacks the capacity for world modeling in any case. I will show the error in this argument by undermining the claim of a categorical difference between world modeling and token prediction. Specifically, I will argue that token prediction and world modeling are on a spectrum, and that token prediction converges towards modeling as the quality of prediction increases.
To start, let's get clear on what it means to be a model. A model is some structure in which features of that structure correspond to features of some target system. In other words, a model is a kind of analogy: operations or transformations on the model can act as a stand-in for operations or transformations on the target system. Modeling is critical to understanding because having a model--having an analogous structure embedded in your causal or cognitive dynamic--allows your behavior to maximally utilize a target system in achieving your objectives. Without such a model, one cannot accurately predict the state of the external system while evaluating alternate actions, and so one's behavior must be sub-optimal.
LLMs are, in the most reductive sense, processes that leverage the current context to predict the next token. But there is much more to be said about LLMs and how they work. LLMs can be viewed as Markov processes, assigning probabilities to each word given the set of words in the current context. But this perspective has many limitations. One limitation is that LLMs are not intrinsically probabilistic. LLMs discover deterministic computational circuits such that the statistical properties of text seen during training are satisfied by the unfolding of the computation. We use LLMs to model a probability distribution over words, but this is an interpretation.
LLMs discover and record discrete associations between relevant features of the context. These features are then reused throughout the network as they are found to be relevant for prediction. These discrete associations are important because they factor into the generalizability of LLMs. The alternate extreme is simply treating the context as a single unit, an N-word tuple or a single string, and then counting occurrences of each subsequent word given this prefix. Such a simple algorithm lacks any insight into the internal structure of the context, and forgoes any ability to generalize to a different context that might share relevant internal features. LLMs learn the relevant internal structure and exploit it to generalize to novel contexts. This is the content of the self-attention matrix. Prediction, then, is constrained by these learned features; the more features learned, the more constraints are placed on the continuation, and the better the prediction.
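A single head of self-attention can be sketched in plain NumPy; the dimensions and random weights below are arbitrary illustrations, not any real model's parameters:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the context into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Association weights between every pair of positions: this matrix
    # records which features of the context are relevant to which others.
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    # Each output row mixes values from all positions by relevance.
    return A @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                    # 4 tokens, 8-dim features
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                               # (4, 8)
```

The learned weight matrices, not the raw token identities, decide which positions attend to which; that is what lets the same association generalize to a new context that shares the relevant internal features.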
The remaining question is whether this prediction framework can develop accurate models of the world given sufficient training data. We know that Transformers are universal approximators of sequence-to-sequence functions[4], and so any structure that can be encoded into a sequence-to-sequence map can be modeled by Transformer layers. As it turns out, any relational or quantitative data can be encoded in sequences of tokens. Natural language and digital representations are two powerful examples of such encodings. It follows that precise modeling is the consequence of a Transformer-style prediction framework and large amounts of training data. The peculiar failure modes of LLMs, namely hallucinations and absurd mistakes, are due to the modeling framework degrading to underdetermined predictions because of insufficient data.
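The claim that relational data can be encoded as token sequences is easy to make concrete; the record and serialization scheme below are invented for illustration:

```python
# A relational record flattened into a token sequence. Any such record
# can be serialized this way, which is what lets a sequence model ingest
# relational or quantitative data alongside natural language.
record = {"city": "Paris", "country": "France", "population_m": 2.1}

tokens = []
for key, value in record.items():
    tokens += [key, "=", str(value), ";"]

print(tokens)
```

Once the structure is in sequence form, predicting the next token requires modeling the relations the sequence encodes.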
What this discussion demonstrates is that prediction and modeling are not categorically distinct capacities in LLMs, but exist on a continuum. So we cannot conclude that LLMs globally lack understanding given the many examples of unintuitive failures. These failures simply represent the model responding from different points along the prediction-modeling spectrum.
LLMs fail the most basic common sense tests. More disastrously, they fail to learn.
This is a common problem in how we evaluate LLMs. We judge these models against the behavior and capacities of human agents and then dismiss them when they fail to replicate some trait that humans exhibit. But this is a mistake. The evolutionary history of humans is vastly different from the training regime of LLMs, so we should expect behaviors and capacities that diverge due to this divergent history. People often point to the fact that LLMs answer confidently despite being way off base. But this is due to a training regime that rewards guesses and punishes displays of incredulity. The training regime has serious implications for the behavior of the model that are orthogonal to questions of intelligence and understanding. We must evaluate them on their own terms.
Regarding learning specifically, this seems to be orthogonal to intelligence or understanding. Besides, there's nothing about active learning that is in principle out of reach for some descendant of these models. It's just that the current architectures do not support it.
LLMs take thousands of gigabytes of text and millions of hours of compute to talk like a mediocre college student
I'm not sure this argument really holds water when comparing apples to apples. Yes, LLMs take an absurd amount of data and compute to develop a passable competence in conversation. A big reason for this is that Transformers are general-purpose circuit builders. The lack of a strong inductive bias comes at the cost of requiring a huge amount of compute and data to discover useful information dynamics. The human, by contrast, has a blueprint for a strong inductive bias that begets competence with only a few years of training. But when you include the billion years of "compute" that went into discovering the inductive biases encoded in our DNA, it's not at all clear which one is more sample-efficient. Besides, this goes back to inappropriate expectations derived from our human experience. LLMs should be judged on their own merits.
Large language models are transformative to human society
It's becoming increasingly clear to me that the distinctive trait of humans that underpins our unique abilities over other species is our ability to wield information like a tool. Of course information is infused all through biology. But what sets us apart is that we have a command over information that allows us to intentionally deploy it in service to our goals. Further, this command is cumulative and seemingly unbounded.
What does it mean to wield information? In other words, what is the relevant space of operations on information that underlies the capacities distinguishing humans from other animals? To start, let's define information as configuration with an associated context. This is an uncommon definition for information, but it is useful because it makes explicit the essential role of context in the concept of information. Information without its proper context is impotent: it loses its ability to pick out the intended content, undermining its role in communication or action initiation; thus context is essential to the concept.
The value of information is that it provides a record of events or states such that those events or states can have relevance far removed in space and time from their source. A record of the outcome of some process allows the limitless dissemination of that outcome, and with it the initiation of appropriate downstream effects. Humans wield information by selectively capturing and deploying it in accord with our needs. For example, we recognize the value of, say, sharp rocks, then copy and share the method for producing such rocks.
But a human's command of information isn't just a matter of learning and deploying it, we also have a unique ability to intentionally create it. At its most basic, information is created as the result of an iterative search process consisting of (1) variation of some substrate and (2) testing for suitability according to some criteria. Natural processes under the right context can engage in this sort of search process that begets new information. Evolution through natural selection being the definitive example.
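The two-step loop of (1) variation of a substrate and (2) testing for suitability can be sketched as a minimal hill-climbing search; the target string, alphabet, and acceptance rule are illustrative choices:

```python
import random
import string

random.seed(0)  # deterministic for reproducibility
TARGET = "sharp rock"
CHARS = string.ascii_lowercase + " "

def fitness(candidate):
    # (2) the test for suitability: count positions matching the criterion.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # (1) variation of the substrate: change one random character.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(CHARS) + candidate[i + 1:]

current = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
for _ in range(200_000):
    if current == TARGET:
        break
    variant = mutate(current)
    if fitness(variant) >= fitness(current):
        current = variant

print(current)  # converges to "sharp rock"
```

The loop starts from a random string that contains none of the target's information and ends holding all of it; nothing supplied the answer except repeated variation filtered by the test.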
Aside from natural processes, we can also understand computational processes as the other canonical example of information-creating processes. But computational processes are distinctive among natural processes: they can be defined by their ability to stand in an analogical relationship to some external process. The result of the computational process then picks out the same information as the target process to which it is related by analogy. Thus computations can also provide relevance far removed in space and time from their analogically related process. Furthermore, the analogical target doesn't even have to exist; the command of computation allows one to peer into future or counterfactual states.
Thus we see the full command of information and computation is a superpower to an organism: it affords a connection to distant places and times, the future, as well as what isn't actual but merely possible. The human mind is thus a very special kind of computer. Abstract thought renders access to these modes of processing almost as effortlessly as we observe what is right in front of us. The mind is a marvelous mechanism, allowing on-demand construction of computational contexts in service to higher-order goals. The power of the mind is in wielding these computational artifacts to shape the world in our image.
But we are no longer the only autonomous entities with command over information. The history of computing is one of offloading an increasing amount of essential computational artifacts to autonomous systems. Computations are analogical processes unconstrained by the limitations of real physical processes. Thus we prefer to deploy autonomous computational processes wherever available. Still, such systems were limited by program construction and context. Each process being replaced by a program required a full understanding of the system being replaced such that the dynamic could be completely specified in the program code.
LLMs mark the beginning of a new revolution in autonomous program deployment. No longer must the program code be specified in advance of deployment. The program circuit is dynamically constructed by the LLM as it integrates the prompt with its internal representation of the world. The need for expertise with a system in order to interface with it is obviated; competence with natural language is enough. This has the potential to democratize computational power like nothing that came before. It also means that computational expertise becomes nearly worthless. Much like that of the human computer before the advent of the electronic variety, the profession of programmer is coming to an end.
Aside from the implications for the profession of programming, there are serious philosophical implications of this view of LLMs that warrant exploration. The question of cognition in LLMs being chief among them. I talked about the human superpower being our command of information and computation. But the previous discussion shows real parallels between human cognition (understood as dynamic computations implemented by minds) and the power of LLMs. LLMs show sparse activations in generating output from a prompt, which can be understood as dynamically activating sub-networks based on context. A further emergent property is in-context learning, recognizing unique patterns in the input context and actively deploying that pattern during generation. This is, at the very least, the beginnings of on-demand construction of computational contexts.
Limitations of LLMs
To be sure, there are many limitations of current LLM architectures that keep them from approaching higher-order cognitive abilities such as planning and self-monitoring. The main limitation has two aspects: the fixed computational window and purely feed-forward computation. The fixed computational window limits the amount of resources the model can deploy to solve a given generation task. Once the computational limit is reached, the next-word prediction is taken as-is. This is part of the reason we see odd failure modes with these models: there is no graceful degradation, so partially complete predictions may seem very alien.
The other limitation, purely feed-forward computation, means the model has a limited ability to monitor its generation for quality and is incapable of any kind of search over the space of candidate generations. To be sure, LLMs do sometimes show limited "metacognitive" ability, particularly when explicitly prompted for it.[5] But it is certainly limited compared to what would be possible if the architecture had proper feedback connections.
The terrifying thing is that LLMs are just about the dumbest thing you can do with Transformers and they perform far beyond anyone's expectations. When people imagine AGI, they probably imagine some super complex, intricately arranged collection of many heterogeneous subsystems backed by decades of computer science and mathematical theory. But LLMs have completely demolished the idea that complex architectures are required for complex intelligent-seeming behavior. If LLMs are just about the dumbest thing we can do with Transformers, it is plausible that slightly less dumb architectures will reach AGI.
[1] https://arxiv.org/pdf/2005.14165.pdf (.44 epochs elapsed for Common Crawl)
[2] https://news.ycombinator.com/item?id=35195810
[3] https://twitter.com/tegmark/status/1636036714509615114
[4] https://arxiv.org/abs/1912.10077
[5] https://www.lesswrong.com/posts/ADwayvunaJqBLzawa/contra-hofstadter-on-gpt-3-nonsense
submitted by hackinthebochs to naturalism [link] [comments]

2023.03.27 03:32 aaaaaaaaaaaaaaaaa85 MY EX ISN’T COMING BACK TO SCHOOL AFTER THIS YEAR

I (17m) had never experienced love, until she came along. She was like obsessed with me. I loved her and would have done anything. She got tired of me after nearly 2 months. I’ve been deeply depressed for MONTHS. I think we’re at like month 4. She’s been obsessed with this other guy now after telling me that she had been through a lot in the past and that she just didn’t feel ready for a relationship. Also she wanted to be friends with me, then proceeded to be the least responsive human being I have ever met. I’m so sick of her ass, hopefully one day she’ll stop going boy to boy, causing some of them to hate themselves. I have deemed her “The Depression Fairy”.
I rolled her love note into a blunt and smoked it. It tasted fucking terrible, and made my mouth dry. Don’t smoke, fellow kids.
submitted by aaaaaaaaaaaaaaaaa85 to self [link] [comments]

2023.03.27 03:31 VGProfessor Game #3: Super Mario Bros.: The Lost Levels

Been playing this one off and on throughout the year, purely from how frustrating it can get. The convenience of having both the NES/Famicom and SNES (All-Stars) versions of the game on the Switch was a massive plus though.
Let’s start with the NES version; wow, it was difficult. Well, I say difficult, but what I really mean is kinda cheap. There were plenty of times where trial and error was the only thing being tested, not my platforming skills. Hidden blocks with poisonous mushrooms are one thing; Piranha Plants/Bullet Bills/Bowser fire etc. spawning right on top of me was a whole different challenge entirely. Nevertheless, I got through it. One improvement over the OG Super Mario Bros. was that you could continue from the beginning of the world, rather than having to start from the very beginning of the game. Each world offered more of a challenge than the last, and getting through 8-4 was a genuinely hype moment for me. Then of course there’s World 9, which offers hardly any challenge at all.
The SNES version from All-Stars is significantly easier for one particular reason: continues start you on the same level, rather than bringing you back to the beginning of the world. The game is still frustrating, and at times I felt like I played worse because there wasn’t as much consequence looming over me. When I got to worlds A-D in the post-game I was expecting a significant ramp-up in challenge, but it was honestly pretty similar. In fact, worlds C-3 and C-4 are the exact same as 7-3 and 7-4 with very slight additions. Completing D-4 was satisfying, but not nearly as satisfying as getting through 8-4.
All in all, very challenging but in the wrong ways. I enjoyed beating this game, I’m not sure I enjoyed playing this game. 4/10. But at least I have the footage for my channel.
Speaking of frustrating, time to move on to my next game: Metal Gear Rising Revengeance.
submitted by VGProfessor to 12in12 [link] [comments]

2023.03.27 03:30 Aggravating-Aioli-69 New business

I live in Illinois. I paid to have a small website made for a new business. I hired someone known to my community, and he convinced me to add a few other services and advice. Nearly a year later he introduced me to a woman who he said would finish everything, then ghosted me. She took over for months and eventually ghosted me as well.
I want to file a small claims suit. Are there any pitfalls or helpful tips for doing so?
submitted by Aggravating-Aioli-69 to legaladvice [link] [comments]

2023.03.27 03:29 Potential-Youth6878 Family Response to Weight Loss

I recently saw some family members that I haven't seen in months. One person remarked, "Oh you're so skinny, your face is so thin, do you like it that thin?!" Seriously, I love this person more than words can express, but why would this be the way to congratulate me on getting healthy? I still have weight to lose, so I would definitely not say that I am anywhere near being skinny or that there is any cause for concern.
submitted by Potential-Youth6878 to Mounjaro [link] [comments]

2023.03.27 03:29 itstommygun My son’s (11yo) classmates keep asking him if he is gay, insisting that he is.

Let’s get a few things out of the way first: 1. Nothing would change about how I feel about him if he was gay. 2. I really don’t think he is. Things he has said off-the-cuff lead me to be 99% sure he isn’t. 3. He doesn’t ever come across as gay in any interaction I’ve ever seen him have with his friends. He actually comes across as a fairly masculine, but sweet and tender to other people, 11-year-old boy.
My wife and I have had some conversations with him about it. We’ve talked about some of the standard lines: “Some kids are just mean,” “some kids have troubles of their own so they pick on others,” “some kids say things like ‘don’t be gay’ or ‘you’re so gay dude’ just to be mean or to be playful,” and lots of other convos around this. We definitely talk about trying not to let what others say hurt us and about having some control over that.
The way these kids have been talking to him has obviously been getting to him, because he’s come home pretty upset about it, nearly in tears.
Any suggestions anyone has would be appreciated.
submitted by itstommygun to Parenting [link] [comments]

2023.03.27 03:26 Mammothunter A bunch of weird skin symptoms/throat/mouth symptoms. PLEASE HELP!!!

I am a 35-year-old female smoker. I have a chronic injury for which I have to take Lyrica and Targin (slow-release oxycodone) for pain. For about 7 to 8 months now, I have had a persistently red throat, but it is not sore and I don’t have swollen lymph nodes. I also have red splotches on the inside of my mouth, like on the inside of my cheeks, and some changed blood vessels looking very dark and purple. About two weeks ago, I suddenly noticed a little pimple on my hand, which I squeezed. Suddenly the backs of my hands went bright red and itchy, like I was having an allergic reaction. I took an antihistamine and it calmed down. Then I noticed that the pimples started spreading: first on the hand where they appeared, then up my arm, and eventually over my entire body. I have been treated for folliculitis with doxycycline and prednisone orally. When that did nothing I was treated with steroid cream for eczema. In amongst this, I went to the ER three times. I have seen five doctors. I have seen 4 dermatologists. Nobody has been able to tell me what this is. I thought to myself that it might be staphylococcus and demanded to be treated with antibiotics, specifically for staph. So I have just completed a course of flucloxacillin in combination with the steroid cream. This seems to have calmed it down a bit, but it has not cleared it up completely or stopped new ones from appearing. I feel like I have a “flare” once or twice a day, either in the afternoon or evening, where my skin will become tingly/itchy/burning in random places and new ones will pop up. My skin will also have a mottled appearance on and off, but present most of the time, mostly on the hands and feet but also in other places. Around the same time as the skin rash, my left pinky toe swelled up; it looks like it has a bit of a blister near the nail (but under the skin, kind of) and there are pinprick-like red/purple dots on it. And the nail on the toe next to the pinky fell off and hasn’t grown back.
I have no idea if all of these symptoms are connected, but I thought I would be as thorough as I can be. To this day (approx 2 weeks later) I’m still getting the new little blister/pimples all over my body, the knuckles on the backs of my hands (like the base of the fingers before the knuckle) are still swollen and red, my pinky toe is still swollen, and my throat is still red and my mouth has red splotches (this is a much longer symptom at 7-8 months). Also, it looks like there are little white patches/spots on my body as well as red ones. Does anybody have any idea what the symptoms may indicate? Has anybody else experienced anything like this?
submitted by Mammothunter to DiagnoseMe [link] [comments]

2023.03.27 03:25 RaCoonsie Rant on insurance

I own a 2014 Holden Commodore. I'm a silver member with RACV insurance and paid $581 yesterday to renew third party fire and theft insurance. I thought, hmmmm, that seems like a lot... what did I pay last year? It was $397. I called them wanting an explanation and was advised that everything had gone up due to inflation and that it may also have been because more people who own the same car made more claims. I said that I didn't make a claim, so why would I have to pay so much more, but it was due to risk, blah blah blah. I expressed that I expected some increase like with everything else, but a nearly 50% increase seemed a bit unfair. I've really noticed inflation hitting me hard in everything the last month but can't believe how bad it's getting!
submitted by RaCoonsie to AusFinance [link] [comments]

2023.03.27 03:25 icypremium Thoughts on CCO (Midwestern) at Downers Grove?

Hi! I recently got an offer of acceptance here and would like to hear others' opinions on it before I accept. Please let me know what you guys think!
I'm leaning towards it because it has the cheapest tuition and is near Canada (I'm a Canadian student).
Thank you in advance :)
submitted by icypremium to PreOptometry [link] [comments]

2023.03.27 03:24 whereowerewolf I absolutely hate my inner child?

Ok to start off, my therapist is great and has never actually said the phrase “inner child”. But we do often talk about different “parts”, in kind of an IFS-type framework (I think?), where it’s normal for people to have different parts internally that sometimes feel and react and interact in different ways. The idea I guess is that understanding different parts more deeply and treating them with love and compassion is like, maybe a way to stop hating myself and move forward in life as a less tormented person. Again, I’m paraphrasing and my T would probably wince at my word choices here (similarly with “inner child”), but that’s the basic background on our work for this question.
Anyway - we’ve recently been talking about a “younger part”, who holds a lot of my anxiety, fear, whatever. Mostly this happens in contrast with an older sort of hyper-competent part, which is how I present externally and who is more or less responsible for me appearing relatively “successful” and “stable” even though I know I’m actually a ridiculously f*cked up mess internally and behind the scenes. This makes sense to me as a framework because it’s literally how I lived as a kid in some scary and chaotic circumstances - I’d just tell myself to pretend I was a grown up who knew what to do and as long as I could project that, even if it was fake, as long as other people bought it, everything would be fine. That kind of worked and kind of screwed me over, hence all the therapy.
Anyway again - my T recently has been asking me things about this child part like, “what do you think she’s feeling?” “What do you think she needs?” Etc. With other parts I can kind of feel into that and give answers that seem true to at least that part of myself, but any time I try to do that with this child part I just get so angry. Angry AT the child. Or like, the fact of its existence? Like I don’t care how it feels, it shouldn’t even be here. If it’s so goddamn needy, why hasn’t it just gone and died yet? I feel like every bad thing in my life is this kid’s fault and I don’t want it around at all. I hate it, I feel rage towards it and can’t feel anything from its perspective. I purely and completely hate this part. Blind rage is all I feel.
This emotional experience is really intense and scary for me. I am not an angry person - I don’t typically experience that emotion, ever, to like a weird degree. I am great with actual children in the real world, genuinely very patient and caring; this has always come naturally. I never have and never would speak or act towards another person of any age the way I feel like doing to this part.
I haven’t explained this to my T because of how intense it is. My reaction scares me. I know I need to talk about this, and I want to bring it up again soon I think. I guess I’m wondering if anyone else has had similar experiences and/or has more insight on what this says about me. I read other people posting about reparenting or finding validation / healing and having really reparative experiences with younger parts or child selves in therapy but I am only ever disgusted and enraged when we go anywhere near that. I don’t want to heal it, I want to obliterate it. How terrifying and messed up will I sound?
submitted by whereowerewolf to TalkTherapy [link] [comments]

2023.03.27 03:24 Fac3puncher [WTS] Truglo Glock sights, ZEV connector, Tandemkross hivegrip, rare Magpul MIAD, PWS comp, Lancer mags, SRO cover

TRUGLO TFO Tritium and Fiber-Optic Handgun Sights for Glock Pistols, a year and about 150 rounds old, tritium is bright, plastic punch included $45 shipped

ZEV Technologies PRO Starter Spring Kit: ZEV Pro Connector, 2lb Striker Spring, FPS Spring, Trigger Spring, and instruction sheet. Opened but never installed. $20 shipped

Tandemkross HiveGrip for Ruger MKIV 22/45, some wear to rubber near safety, $20 shipped

PWS FSC556 Mod 1 compensator/flash hider for AR-15, 200 rounds through it, was on a safe queen for years, $40 shipped

Strike JellyFish Cover for Trijicon SRO, minor signs of use but basically perfect, better than just leaving that expensive optic naked, $10 add-on or $12 shipped

Magpul MIAD in OD green, very early version of MIAD that includes a front strap with an integrated trigger guard, these were discontinued and unavailable by like 2007 afaik. Original Magpul packaging included. A collectible? Who knows. Try to find another one though. $35 shipped.

4 Lancer 20 round clear mags. Each was loaded and fired exactly once. Nothing wrong with these mags, just not my style. $50 shipped.

Make me an offer for multiple items!
Paypal G&S is fine. Everything will ship out from Florida tomorrow morning.

submitted by Fac3puncher to GunAccessoriesForSale [link] [comments]

2023.03.27 03:24 ErikBRak1m Problems Running the Game Post Sim Update 12?

Hi all.
I'm on Xbox, and the game seems to have encountered some kind of problem with Sim Update 12 and the new "add-on support" that came with it. The game ended up not being able to run normally: checking for updates would hang until I switched to another app on the Xbox and back, which would then bring up the loading screen that comes before the main menu (currently showing the New Zealand World Update), which would then hang at 100%. Switching back and forth between apps would then bring up the main menu, but with no connection to the Marketplace, and the usual trick of toggling online services on and off has no effect on that.
I was able to load a couple of flights in this semi-working state, but then the game hangs at 92% when trying to return to the main menu afterwards. I have since attempted two complete uninstall/reinstalls, with the first one resulting in another uninstall/reinstall having to be done when the game crashed while in the middle of reinstalling content (at probably almost 90% completion, too). And this has just happened AGAIN tonight while I was about 2/3 of the way through redownloading my content for the SECOND time. There was no crash the second time, but I would periodically quit the game properly and restart it after loading a lot of content (trying to give it a break, if you will, as it was just starting to behave a little sluggishly). Sadly, after wasting my entire weekend attempting two reinstalls, the game is once again in a more or less useless state.
Has anyone else experienced anything like this after Sim Update 12? I have a lot of content, but I don't see why that should prevent the game from running. This is now beyond just being disappointing and frustrating for me. Microsoft's near-non-existent technical support, as completely expected, has been of absolutely ZERO help.
CTD has always come with the territory when it comes to MSFS, but whatever has happened in this latest update is a lot worse than that. With CTD, you could go back and try again. In this case, the game is pretty much screwed, because I can't even get the game to load properly.
Has anyone else had any similar experiences on Xbox or with MSFS on PC since the new update? Please share if you can.
submitted by ErikBRak1m to MicrosoftFlightSim [link] [comments]

2023.03.27 03:23 MechanicalBot1234 Tryst with Tradition: An intense pilgrimage experience.

About our Tirumala walk
Now that our kid's exams are over, we decided to go on a short pilgrimage to Tirumala (a temple town situated on seven hills).
This was planned a month ago, and for those who don't know, a Darshan ticket (for a 5-10 second appointment with God almighty) must be booked in advance through a biometric-based advance reservation system that requires fast fingers when they open the slots every month. Our Darshan slot was 5 PM yesterday.
We had reached Tirupathi (base of the hills) the previous night.
On the day of the Darshan, early in the morning we got ready, had breakfast and were planning to reach the temple on the hills. We were contemplating if we should take public transportation or taxi or just walk the way up the seven hills.
We hailed a cab, and the driver told us about the options we had: two paths to walk, a short one and a long one. He suggested we take the short walk, but on a whim my spouse decided we should take the long one. Taking that to be a divine decision, we went with it.
The cab driver dropped us at the entrance and pointed in the direction we needed to walk.
As is customary, traditional Hindus don't wear shoes on the seven hills. We were advised to leave our shoes in the hotel room, and we did. We had not walked barefoot in the last 25 years.
We were excited about the adventure that was ahead of us! At that time, we did not know what an intense experience this was going to be.
The long route was about 9 kms, with about 3550 steps in between that traverses the seven hills.
Hindus believe, God's own mother walked this path several thousand years ago.
The entrance was filled with crowds of people and stalls selling everything from food to spiritual offerings. We navigated them all and came to the first step.
There we saw a crowd of people seeking blessings and worshipping the first step with sandal paste, turmeric and saffron, lighting camphor and incense sticks, and paying their respects through prostrations.
Orange, saffron, yellow and red were everywhere. The first few steps were exciting, but already we felt humbled by the spirituality around us.
Hinduism doesn't prescribe a single book or code of conduct. It is a library of ever-changing ideas around a few basic beliefs, all promoting a 1:1 spiritual connection with God almighty.
People all around us who looked economically poor were clearly richer than us in spirituality and passion, and their 1:1 connection with the almighty humbled us.
We saw young kids teaching us the seriousness of their prayers, elderly people crying in emotion on the journey they were about to make.
We saw people decorating every step (yes, all 3550 steps) with turmeric, sandal and kumkum. Yellow, gold and saffron were strewn everywhere. Our feet were yellow and red.
We saw people lighting lamps at every step because they believe God herself walked this path.
We saw many families where husband and wife were jointly decorating each step or mother and kids lighting each step.
We saw many parents carrying kids, even days-old newborn babies, for their rendezvous with the lord almighty up the hills.
All through the climb, people were chanting, "Govinda Govinda" loudly, inspiring us and electrifying the entire atmosphere.
The first hour of climb was exciting, we stopped once and replenished ourselves with some fluids and carried on.
Every step was a spiritually stimulating experience.
During the second hour we stopped more. We were about 1500 steps done and we thought we were doing great.
At 1750 steps, we were reasonably tired, we stopped and took break. I remember seeing a sign 1800 steps done and 1750 steps to go and told my family we are 50 percent done.
Then we saw a sign we have covered 2 kms and we have 6.4 kms to go.
It was daunting!
We continued on, we stopped at temples along the way, took more breaks and continued ahead.
After 4 hours of uphill walking, we were getting tired, and our initial doubt returned: had we chosen the wrong route for our physical ability? The easier one would have been done by now.
But the spiritual experience was already overwhelming and reassured us that we were doing the right thing.
After about 5 hours of climbing the hills, we were still seeing many newborn babies being carried up, young toddlers walking up, one out-of-shape lady heaving and crying while walking up, and an elderly man (possibly in his seventies) carrying a bag walking up.
I even saw a 110-year-old woman, partially blind in one eye and fully blind in the other.
People were all visibly exhausted but were carrying on.
We still saw people painting and decorating each step, each pillar, lamps, colors, lights everywhere.
We saw so many elderly people pulling themselves up, loudly chanting, spiritually energizing and uplifting those around themselves.
Then we saw the unbelievable! We saw several people walking up, on their knees!
Young men, one little girl and many middle aged people were walking on their knees climbing 7 hills.
They were crying in pain! I could see that their spiritual grit was far deeper than mine. Their faces told me stories.
It broke me! They broke my ego right there!! I am nobody in front of these people!
We decided to walk slower and give these pilgrims some support! A crowd was with us screaming the war cry, "Govinda, Govinda". After about 15-20 minutes, all of us lost more energy and the scream faded.
That's when three kids decided it was time for their moment and loudly yelled the movie war cry, "Jai Bahubali!"
Their mother chided them not to do that, but it was a much needed comic relief for the crowd around us. We all burst into laughter and gained some energy and kept going.
Near the end, every one of us was in deep physical pain, but it was a deeply emotional and spiritual experience. We ended the climb with more ceremonial prayers at many points.
We walked uphill for 6 hours. The only climbing I had done in the last 10 years was pulling the gas lever on my office chair. This was an experience beyond comparison.
We ate lunch at 3:30 pm, checked in to the long 2.5 hour wait line for our Darshan and got a full 20 second darshan of God almighty.
And as almost everyone would say, for those few seconds you forget everything and just watch silently! That's what we did for 20 seconds, until the usher said time's up.
I could not capture everything in words, so I am adding some pictures.

submitted by MechanicalBot1234 to IndiaSpeaks [link] [comments]

2023.03.27 03:23 ThrowRA9026 My (18f) boyfriend (19m) of four years was texting another girl and I don’t know what to do...

I’m sorry this is long and probably not well written; I tried my best. The bolded section is the incident for which I seek guidance; the rest is context.
I (18f) and my boyfriend N (19m) have been dating for what will be 4 years at the end of April. About 6 months ago, after we graduated high school, he asked me to move into his dad's house with him, and I agreed with some apprehension. We were good friends before we started dating freshman year; we both had friends but didn’t hang out with them much, and primarily hung out with each other while we were in school. When I moved in with him I moved an hour away from my family, friends, and everything else, and I’ve become really isolated socially. I haven’t really made any friends where we live, although I go to college and work.

My social isolation led him to feel suffocated and as though I don’t exist without him, which to an extent has become true; I’ve been leaning on him a lot. He asked for space and expressed concern that he doesn’t know if he loves me or just loves the idea of having me around. He is worried that, because neither of us has been with anyone else sexually or emotionally, he may be missing out on something better. I told him I wanted to take time apart and revisit things if we both felt we wanted to reconnect (I would move out). He insisted he wanted to be with me, so we agreed to work on it, and I have been trying to find hobbies I enjoy as well as make friends.

N has gone out with his dad and his dad's friend (let’s call him T) a few times recently. T has encouraged a girl to ask N out (well aware of our relationship), and N says he declined and told her he was with me. Well, last night they went out again; I stayed in and was planning to go to bed early as I had an early shift. N tells me that T was super drunk and made friends with a family, including a girl (22) who was making eyes at N, sitting near them across the bar. T was going to pay the family’s tab (apparently about $200) but forgot his wallet and asked N’s dad to pay, saying he would Venmo him for it.
N’s dad agreed, only for T to tell N that if he didn't get girl 22's number, he wouldn't pay him back. N agreed to “help his dad out” and got her number, and the lot of them hung out the rest of the night. N told me this story today. Half an hour later he got a text saying “I just took a fat nap” from an unknown number with our area code. I asked him who it was and he said “no idea.” I forgot about it until he got a call from T and got up from next to me; I thought that was strange, but whatever. He went upstairs and left his phone (not really a clear boundary set against or for this, but we know each other's passwords), so I looked at his messages. Nothing was there, but I recovered the deleted text thread (we have iPhones) and saw that he and said number had sent 200-ish messages. I read through them; he clearly gave her the wrong impression (no mention that he’s not single) and was kind of flirty, but nothing inherently making a move. He blocked her number and deleted the thread after the text I saw; as far as I know, he doesn’t know how to recover messages. Do I bring it up, and if so, how? Is this cheating? If I hadn’t seen the text, would he have deleted the thread? I don’t know what to do. I know I invaded his privacy, but with everything else we’ve been going through, I had to know. Please help. Thank you for bearing with me and getting to the end ❤️
submitted by ThrowRA9026 to relationship_advice [link] [comments]

2023.03.27 03:22 nickofthenairup What are your thoughts on a used 2014 model in my area?

My wife and I have a 2021 RAV4 for longer trips. I WFH and wife commutes. Our daily driving is probably 20-25 miles.
The car would be garaged. Temperatures here: in the winter, low 35 / high 55-65; in the summer, low 80 / high 100+. Speed limits are all 55.
Granted my wife is a teacher so less driving in the brutal summer heat
There are a couple of 2014 models with 6 bars left near me listed for $4500-$5500. We’d also be able to apply for a $1000 rebate from PG&E. Seems like a good deal…
Does this sound feasible? How long do you think a used one would last us?
submitted by nickofthenairup to leaf [link] [comments]

2023.03.27 03:22 ttbruno11 [WTS] Custom, handmade slipjoint trapper.

Hello swappers! Anyone on this fine site wanna give an unknown maker a shot? I finished this trapper up yesterday. Knife specs are as follows:
Lock type: slipjoint
Blade length: 3 1/4" (82.55mm)
Closed length: 4 1/8" (104.85mm)
OAL: 7 3/8" (187.32mm)
Handle thickness: 1/2" (12.86mm)
Weight: 3.9 ounces
Blade steel is O1 tool steel. Liners, bolsters and pins are 416 stainless. Handle material is my own custom blend of black denim and red burlap micarta with red G10 spacers and red, white and black stacked G10 dividing line. Inlaid diamond is 416 stainless. The entire knife started life as all raw materials. Everything is done completely by hand with basic tools. The inlay is cut and filed by hand. I hand milled the pivot and bearing surfaces as well. Vine file work on the inside of the back spring. Pull is an 8 (intentionally) with a great walk and talk. Knife is solid. I wouldn't intentionally list it on this sub if I wasn't totally confident in my work for you savages to roast me haha. Take a look at the pics and video and see for yourself.
SV: $350, with the understanding that I'm an unknown maker (for now) and that if you aren't completely satisfied with the knife (within 24hrs of delivery) you can send it back and I will refund your money (minus PayPal fees, and you pay to ship it back).
Each knife I make is stamped with a number on the inside of one liner and signed on the other (inconspicuously) and will come in a custom eva case wrapped in wax paper, with a signed COA with all the knife details.
submitted by ttbruno11 to Knife_Swap [link] [comments]

2023.03.27 03:21 throw-famadvice My family are bigots, my sisters are queer.

before i start i want to say hello and thank you to this community, morgan and all her amazing co-hosts. i hope it is okay to share this here - it is a lot but i am so lost right now…
context: i (27f) have two sisters (29 and 19). we were raised by both our parents in a christian community, and were constantly bombarded with sexist, racist, and homophobic propaganda. i left the church in 2016 and moved out with two roommates a year later, one of whom was my best friend, a gay man who was one of the motivating factors in questioning my religious beliefs. obviously, my parents didn’t approve. i then met and started living with my now husband (28), which again was an issue. at this time (2018), my sisters and i were barely speaking. i had treated my younger sister like garbage for a good 10 years of her life (typical middle child crap). in 2020, my younger sister and i reconciled and she came out to me as gay. initially i was terrified, as i was about to move 8 hours away with my now husband and didn’t want to leave her with our homophobic parents (she was 1.5 years away from graduating high school). ultimately i told her i would be there for her no matter what and would help navigate issues with the family as best as i could. she was already expressing to our parents a lack in belief in god and going to church less and less before our reconciliation. the night before i moved, my father came and told me the parable of the prodigal son, alluding that i was lost and needed to be found. he then proceeded to tell me he was disappointed in me that i would “risk my younger sister’s soul” and influence her to leave the church. i was devastated just by the way he was looking at me, and after i moved we didn’t talk for nearly 2 years, only speaking again when my now husband and i got engaged and married in late 2021.
last summer (2022) my older sister and i also reconciled and she came out to me as a demisexual lesbian. i had had my suspicions, as she and her roommate just seemed like they were more but i never wanted to push the issue. after she came out, she then said that she still holds her faith higher than her desires, and believes acting on her sexuality would be immoral. she is still in that “better safe than sorry” mentality.
i don’t know what to do or say, i have no idea how to be encouraging and supportive. i am in agony over what both my sisters must go through daily being surrounded by our family and the community’s bitter lack of understanding. my parents never cease to bring up their religion to me over our rare phone calls and visits, and this past christmas i had a full-on meltdown during church service as it was virtual and i was roped into going. i know i need to have a conversation about boundaries with my family, but my husband and i rely on them financially (rent, auto insurance) and i’m almost certain that conversation will lead to me needing to go no contact. factor in that i need to be an ally to my sisters right now, and i am at a loss for what to do.
i know this was a long post and i am grateful to anyone who stayed until the end. i am mostly just seeking community and any encouragement someone has to offer, either for me or my sisters. thank you again to the two hot takes community, and much love to you all.
submitted by throw-famadvice to TwoHotTakes [link] [comments]

2023.03.27 03:21 N64wasDOPE Modded CA-53W

I loved the negative display but without a backlight it was damn near unreadable for me. So, I swapped the polarizer and now I love it!
submitted by N64wasDOPE to casio [link] [comments]