It's interesting that, having been unable to find a legal route to dig up dirt on archive.is, they're going the route of CSAM allegations.
I first heard of this technique in a discussion on Lowendtalk, where a hoster described how pressure campaigns were orchestrated.
The host used to host VMs for a customer that was not well liked but otherwise within the bounds of free speech in the US (I guess something on the order of KF/SaSu/SF). A given user would upload CSAM to the forum, then report that same CSAM to the hoster. They used the same IP address for their entire operation, so when the host and the customer compared notes, these details came out.
Honestly, at the time I thought the story was bunk: in the age of residential proxies and VPNs and whatnot, surely whoever did this wouldn't just upload said CSAM from their own IP. But one possible explanation is that the forum simply blocked datacenter IPs wholesale, and the person orchestrating the campaign wasn't willing to risk the legal fallout of uploading CSAM from some regular citizen's infected device.
In this case, I assume law enforcement just sets up a website with said CSAM, gets archive.is to crawl it, and then pressures DNS providers about it.
Exactly. There was a dude in the 18th century who came up with distributing power into three monoliths (writing code, executing code and catching exceptions[0]), and around the same time the bones of the modern education and corporate system were created[1]. Since then we've been refining these structures without really changing the underlying assumptions.
Blabla..
The way forward, as I see it, is to build a social network that is local-first (along the lines of the "revolution of the personal server" ideas a la urbit). We need to realize that "voting" and "money" and "shares" or "securities" etc. are all the same thing. Compositional game theory and interesting models of MPC like statebox are steps on the road toward solving "fake news" and other trust-topology-based problems, but I think even if we don't solve these things theoretically, we can still get closer to the ideal by relying on some heuristics (like maidsafe and parity are doing).
Anyway, the key idea is: if we can create a "democratic" form of governance for corporations (or startups) that outcompetes traditional forms of organizing such organizations, then that form of organization will take over everything. For that to happen, the organizational scheme has to be compositional, so that organizations can be assembled ad hoc to compete with a google or an amazon; then we'll have something more flexible, and that thing will win in the long run.
I mean routing traffic to Tor entry guards through VPN services. The Tor Project indeed does not recommend that. They argue that using a VPN service is risky, because it can log everything. Where access to entry guards is blocked, they recommend using bridges (of one sort or another) run by Tor volunteers.
I don't agree with that argument. Because ISPs can already do that. And for most people, their ISP is far more likely to be cooperating with their local adversaries than some random VPN service is.
And for what it's worth, one of Tor's inventors (Paul Syverson) has agreed publicly that there are reasons to access Tor through VPNs. Basically, when you don't want your ISP to know that you're using Tor. Indeed, if I were a CIA agent using Tor in Iran, I probably wouldn't want the ISP to know that I was using Tor.
But I don't trust VPN services either. So I use nested VPN chains. That's basically the same approach that Tor itself uses, routing traffic through multiple (three) relays. So no one relay (or for me, VPN service) knows both who I am, and what I'm doing online.
There's also the issue of trusting the Tor network. Some argue that it's compromised by US TLAs. So with a nested VPN chain between me and entry guards, I'm less concerned that some TLA is running them. But even if that's just paranoia, there have been bugs that deanonymized users.
For example, some years ago, CMU researchers exploited the "relay-early" bug to allow malicious entry guards and exit relays to exchange information, and so learn that they were routing the same circuit. That allowed said CMU researchers to deanonymize Tor users. The FBI learned of this, and subpoenaed the data. And lots of people went to jail over it. Mostly drug dealers and child pornographers, but whatever.
However, routing VPN services through Tor is a totally different matter. If you do that, your anonymity depends entirely on how anonymously you've obtained, paid for, and used the VPN service. If you used an email address that's linked to you, you're screwed. If there's a money trail in paying for the VPN service, you're screwed. If you ever use the VPN account without Tor, you're screwed.
And even if you manage all that anonymously, the very fact of using a VPN through Tor decreases your anonymity. That's because Tor by default switches circuits at ten-minute intervals. But when a VPN is connected through a Tor circuit, that circuit is pinned. So by using a VPN through Tor, you've disabled one of the ways Tor increases anonymity.
The keyword is "PD 诱骗头" (roughly "PD decoy/trigger plug"), a small converter that tricks the power supply into outputting at its full capacity. Not sure if it's safe, but it's tempting to replace the big power adapter that comes with the NUC with a small GaN USB-C PD adapter.
There is a setting you can change to disable it and make the provider treat all traffic as if it were non-tethered.
adb shell settings put global tether_dun_required 0
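If you want to check the current value first, or revert the change later, the same settings tool should work (a sketch; I haven't verified it on every Android version or carrier build):

adb shell settings get global tether_dun_required

adb shell settings delete global tether_dun_required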
Considering how knowledgeable the HN crowd is on all things networking, it surprises me to see so much uncertainty on something so easy to check in the code!
> So, since I'm away from my own home and have some time to spare, what does that mean in terms of KirinDave's idea to keep entry-level jobs for Americans? I'll be blunt: that's stupid. Americans, who have to earn a certain wage to maintain even the lowest standard of living, can never be as cost-effective as people to whom that same wage looks like a king's ransom.
That's uh.. not my idea? You mention me a lot here, but you've comically misrepresented my position, which you can't really know, because the only thing I've articulated on this subject in this thread is that Tesla tried and failed spectacularly to automate, and that the American (really Victorian) idea of trying to replace all human labor with machines (or at least make it LOOK like it's all machines) hasn't fared very well overall, despite being a near-religious conviction of many western industrialists.
Meanwhile, China's labor policies (which in my personal belief system lead to reprehensible limitations on human freedom and rights) have produced a country that is industrially crushing the rest of the world to a degree we haven't seen since the post-WW2 era, when American interests could push their products on others because everyone else's industrial bases were smoking craters due to war.
> That's where "merit based" immigration comes in. It's about making the best and the brightest from all over the world part of our ecosystem instead of a competing one. It's a deliberate brain drain.
You misspelled "richest", because that's actually the goal. We're getting tepid, moderately talented but quite wealthy Elon Musks. Once, we were significantly better at the "brain drain" and it showed during WW2. And perhaps that's for the best, because the circumstances that led to the genuine brain drains of WW2 were quite dire.
> It leaves the low-skill workers where they are, unable to compete job by job but also bereft of any leadership that would enable them to compete industry by industry.
But at the end of the day, the notion that the folks you are leaving behind are not essential to society as a whole is a capitalist dogma not supported by reality. The idea that society isn't built on essential but socially dismissed and denigrated labor like construction, cleaning, and even sex work is a fantasy. It's the Victorian desire to have a "dumb waiter" all over again, only on a much larger scale. And this myth largely persists because western industrialists imagine a fantasy world in which Taiwan and China are full of magic robots and not millions of skilled (and socially oppressed) humans.
> It's unfair - one might even say cruel - to them, but global fairness is not a goal.
Indeed. The goal is to realize Burke's dream and roll back every social contract change post-French Revolution. I'm well read on the capitalist ethos.
> It's an effective way to promote Trump's and KirinDave's goal of propping up uneducated Americans' wages.
Trump's goal is to prop up the wages of under-educated Americans? What? The man and his party's policies have rendered over half of the farmers in America desperate, directly threatened the livelihood of many American manufacturing workers with his pointless tariffs, and outright declared war on American unions. He is as anti-worker as they come. The fact that you think otherwise is a reflection of the phenomenal branding engine.
> I just happen to think that's a lousy goal, not least because it's not sustainable. The longer we put it off, the worse the eventual result will be.
I'm not sure what you're advocating at all. The complete abandonment and disenfranchisement of every skilled laborer who doesn't meet your definition of "educated"? The radical purging or forced labor of anyone who doesn't meet specific economic or physical standards? Many US states already do that via their justice system. Does that not already satisfy you?
If you want to talk about unsustainable systems, we need only look at the current political environment in America, where voters are so disenfranchised, misinformed and manipulated that a toxic hyperindustrialist grifter can hold power just because folks are so pissed at the status quo that hope of massive disruption is all they can hold on to.
Eventually, folks are gonna stop believing in all the delays. Already, fascist interests in the US are gathering people who've hit that breaking point and organizing them to point at perceived enemies. Already, we see people who question truths known and trivially observable for thousands of years because they're so convinced the entire system is out to get them. Already, folks know that punishments for challenging the rule of law are only truly enforced against the poor and the forgotten.
We don’t block archive.is or any other domain via 1.1.1.1. Doing so, we believe, would violate the integrity of DNS and the privacy and security promises we made to our users when we launched the service.
Archive.is’s authoritative DNS servers return bad results to 1.1.1.1 when we query them. I’ve proposed we just fix it on our end but our team, quite rightly, said that too would violate the integrity of DNS and the privacy and security promises we made to our users when we launched the service.
The archive.is owner has explained that he returns bad results to us because we don't pass along the EDNS subnet information. That information leaks details about a requester's IP and, in turn, sacrifices the privacy of users. This is especially problematic as we work to encrypt more DNS traffic, since the request from resolver to authoritative DNS is typically unencrypted. We're aware of real-world examples where nation-state actors have monitored EDNS subnet information to track individuals, which was part of the motivation for the privacy and security policies of 1.1.1.1.
EDNS IP subnets can be used to better geolocate responses for services that use DNS-based load balancing. However, 1.1.1.1 is delivered across Cloudflare's entire network, which today spans 180 cities. We publish the geolocation information of the IPs that we query from. That allows any network with less density than we have to properly return DNS-targeted results. For a relatively small operator like archive.is, there would be no loss in geo load balancing fidelity relying on the location of the Cloudflare PoP in lieu of EDNS IP subnets.
We are working with the small number of networks with a higher network/ISP density than Cloudflare (e.g., Netflix, Facebook, Google/YouTube) to come up with an EDNS IP Subnet alternative that gets them the information they need for geolocation targeting without risking user privacy and security. Those conversations have been productive and are ongoing. If archive.is has suggestions along these lines, we’d be happy to consider them.
PM is a super heavily overloaded term that you'll run across in pretty much any company larger than small, and it's definitely not specific to Google (disclaimer: I work at Google and don't speak for the company, but I'm brand new, so my conception of the categories comes from what I've seen elsewhere). Engineers tend to raise an eyebrow at the entire job category because they're thinking of bad experiences with one particular flavor of PM (and often a particularly poor specimen at that), but that can be unfair; a lot of what they do is absolutely critical to the business.
The acronym simply means either product manager, project manager, or program manager, but the responsibilities can be any/all of the following, and probably more, depending on the company:
- Full product owner, "the buck stops here" person. Many different possible titles here other than PM (usually "project manager" if that's the title), like general manager (GM), different flavors of producer, product owner (PO), VP of X, etc.
- Feature designer/owner/manager (product manager): writing specs (junior), pitching specs (mid-level), setting and selling product vision (senior), driving strategy (staff/exec, though at that level all roles start to get blurry). Decent business school grad representation here; you have to be good at strategy, negotiation, communication, and PowerPoint. Having good product ideas helps a lot, too, but not if you can't sell them. A persuasive PM of this type is a force to be reckoned with and will get a lot of executive attention, which can be very good or very bad depending on whether the strategy they're into pans out or not. This is probably the type of PM the parent was talking about "armies of".
- Development director (project manager or program manager): a whip-cracker at worst or an impenetrable shitshield at best, they manage the practicalities of running a project, like project scoping, meeting and managing hard/soft deadlines, handling support, cross-team communications, processes, compliance, etc., often stepping in as an all-purpose guard against randomization so that others can focus on producing the product. Many PMs of this type ended up being the go-to folks for GDPR compliance over the past couple years, so it's not just internal process stuff that they deal with.
- Task checker: can blend into the development director role mentioned above, but at some companies this sort of PM will mainly focus on tracking tasks that are in progress, getting estimates, watching velocity, and sending reports up the chain. Some devs find this role pointless and annoying, but it really depends on how good they are - if they're solid, they'll find a lot of ways to improve things instead of just tracking them.
- Scrum Master (project manager): Big-S Scrum is falling out of favor so it's not as fashionable to have this title anymore, but within some processes a similar role still exists as a type of project manager. In a nutshell, Scrum is a simple yet effective "People Over Process" process that consists of a bi-weekly no-laptops-allowed retrospective meeting where you get the team together to have a free-ranging discussion where everyone feels heard, so that you can all decide together whether the team should estimate engineering tasks in terms of hours or hats. You need a trained and certified Master because otherwise people new to Scrum might not know to pick hats. A more seasoned Scrum Master will also schedule a quarterly retrospective-retrospective where the team discusses a strategy for what guidelines to put in place for the next retrospective so that the team can decide on hats faster and leave more time for the less settled question about whether or not the Fibonacci sequence is the right way to count hats or if a size-based approach will make people feel better supported in their work.
- Monetization designer (product manager): mainly at game companies, PM is often a super different role, probably best described as the profitability-focused counterpart to a game designer. They focus on setting prices, managing game economies, speccing and evaluating A/B tests, inventing loot boxes, etc. Ideally PM and designer would be one and the same, and the game design would be holistic with the monetization, with a designer that has serious Excel/analytics chops and deep inspiration and sensitivity about gameplay, but that's a more rare combination than you'd think, so a lot of companies split them out.
I'm probably missing some other ways this acronym is overloaded, but I think this covers most of it.
I'm the CEO of a company (LBRY) that deals directly with these issues. I'm approaching six figures in legal bills and have spent hundreds of hours on this issue.
Here's the truth: no one knows. Not a single person. No one knows how even existing laws like the DMCA and CDA in the US apply to decentralized platforms. For example, the DMCA and CDA both use the term "service provider" - who is a service provider in a decentralized network? Under existing law, a service provider is supposed to at least be running a server.
I've mentioned this in Hacker News threads before, but here are a few things to bear in mind.
Firstly, the SME exemption (as well as the specific exceptions for Wikipedia and Github) was put in at the last moment in an attempt to win enough votes to get the Directive through the plenary following widespread opposition. It only exists in the European Parliament text, which is currently being negotiated with the original Council text. The news from the trilogues is that there's lots of lobbying to get it removed.
Secondly, the exemptions sound good for Github and Wikipedia on paper: but Github and Wikipedia are the services that currently exist, and have been effectively grandparented in. They don't speak for all the other potential services that we can't describe, because they don't exist yet -- and won't exist if there's a liability regime that would have stopped Github and Wikipedia in their tracks had it been written earlier.
Thirdly, we know from experience that these exemptions don't work. In the European Parliament text, we have a blanket liability regime, which means that rightsholders can sue you, or your provider, by default. You can argue back, "oh, it's okay, we're covered by this exemption", but you have to prove that you're really covered by the exemption -- and by the expression of that exemption in each of the 28 implementations of the Directive in the member states. It creates a default of liability, and then fences off a small section of the Internet where you may be protected.
In the meantime, you -- as a person who hasn't paid up for a licensing arrangement with the major rightsholders -- will be on the receiving end of repeated orders to take down content, with the understanding that if you don't successfully argue that the exemption applies, or the moment you cross that SME line, you'll be liable to an unbounded extent.
Why would you take that risk? How could you protect yourself against that risk?
Also note that many of these exemptions are expressed in the Recitals, which merely indicate the spirit of the law, while the specific rules for transposition sit in the Articles themselves. Generally speaking, if you are threat-modelling new law, you might as well ignore the Recitals, because there is a substantial lobbying and legal community with a big financial incentive to guide lawmakers, and to deploy lawsuits, in such a way as to sideline those non-binding commitments.
The exemption language was an attempt by lawmakers, taken aback by the force of the opposition to Articles 13 and 11, to both win over votes in the Parliament and stop GitHub, Wikimedia and their users from complaining. The reason why GitHub, Wikimedia and their users continue to complain is that they don't believe they can act in the future as though these exemptions will really work for them — and they have an interest in maintaining the rest of the Internet ecosystem, which will still be left out in the cold.
Blu Tack was invented in Leicester, and Bostik still make it there. It's reusable and doesn't set or cure, though it deteriorates a bit after a decade in a drawer. Can repair pretty much nothing, but handy for sticking posters to walls. :)
Sugru is/was a silicone putty that cures after a few minutes in air, and expires in a few months in the packet. Seemed terrible value for money based on the tiny packet I bought so I'd never buy again.
At least with two-part putties you can keep plenty in a drawer for years; they don't cure until you've mixed them, and they cost a fraction of what Sugru does.
Edit: Well, this provoked quite the discussion. If you're looking for repair putty, the two main UK brands I know are Araldite (now a US brand, mainly DIY and automotive, also epoxy adhesives) and Milliput.com (epoxy modelling clay, perfect for repairs to plastics, steel etc.; comes in a few grades and colours), with shelf lives measured in decades. Milliput are Welsh. :)
If you have T-Mobile, I recommend logging into your account and going to Profile > Privacy and Notifications > Advertising & Insights and disabling everything. Obviously, as a consumer I don't know exactly how this data is being collected, but if the carriers are sharing individual-level data, this is hopefully the opt-out.
EDIT: This is the text from one of the settings:
With your consent, T-Mobile, affiliates, and ad providers use your web browsing and app usage data along with advertising identifiers to deliver relevant mobile advertising and to learn more about your preferences. Advertising identifiers used can include Android and iOS Advertising IDs, browser mobile cookies, and device identifiers.
> A system typically used by marketers and other companies to get location data from major cellphone carriers...
Wait, what? Carriers are selling personally identifiable location information? I knew they were selling aggregate data, but how are they legally selling location data tied to actual phone numbers?
I dug into my carrier's privacy policy, and it looks like this is true. They say you'll be asked for consent before it happens, but what mechanism does the carrier even have to request that consent? I've certainly never seen an opt-in prompt for anything like that before, but according to my carrier's site, there are at least two companies that are accessing or have accessed my location data through my carrier. That is not okay.
If an app or service I use wants access to my location, they can go through my phone's location services API, which requires my affirmative consent. It is completely unacceptable that they can bypass me and get it directly from my carrier.
I think content creators are owed not being gaslit by YouTube. Whatever you may think of these rules, the truth is that they will only be applied selectively, and then YouTube will deny that it is applying them selectively, and lie to us by saying that this is a platform for all.
The truth is, there is a whitelist that big corporations and celebrities are on, where they always get monetized no matter what.
For example, when Logan Paul infamously uploaded video of a corpse in the Japanese "suicide forest"[1], he continued to monetize the video of a corpse, in blatant violation of YouTube policy, until he chose to take the video down on his own. His video was never flagged. His video was never demonetized. Despite a torrent of complaints and blatant violation of policy. However, people who criticized Logan Paul's actions not only had their videos demonetized, some of them had their entire channels deleted. The reason for this is that suicide is not advertiser-friendly, so if auto-generated captions talked about it, they were demonetized. And if the title of your video is too similar to a much more popular video, then it is considered misleading metadata, and that can get the channel you've had for years nuked. So a reality TV show buffoon gets to make thousands of dollars off his video where he gawks at a dead body, but people making sincere criticism of this behavior are severely punished, and so have to tiptoe around criticizing him.
But, I think the most egregious hypocrisy on YouTube's part is their handling of mass shootings. The YouTube channels for CNN and Fox News were heavily monetized during the Las Vegas shooting. I mean, these corporate YouTube channels increased the number of ads because they knew lots of people would be watching them. But at the same time, any smaller channels that even mentioned guns or shootings had large swaths of their videos demonetized.
Dr Pepper and Coca Cola are thrilled to appear in ads every five minutes during a mass shooting, care of CNN[2]. But they act as if their sacred brand is too good to appear on a smaller, more authentic channel that wants to have serious discussion about issues. You can see a more complete account of this double standard here. [3]
This narrative about caring where ads appear is nothing more than a dishonest tactic to increase their leverage, as well as punish independent media that might rock the boat and provide an alternative viewpoint.
I know people who have spent years making a living from YouTube ad revenue, making frankly innocuous videos, like alternative history animations, who have had their YouTube careers virtually ruined, and have had to put in a huge amount of effort to please the YouTube algorithm, while big corporations and celebrities can revel in clickbait and salacious fetishization of violence for money. [4]
The most interesting and unique content on YouTube is what is being demonetized. The content that is being most monetized is what is already monetized on cable television. YouTube piggybacked on small independent creators to build its platform, and then drove them away by gaslighting them with inconsistent demonetization policies to make room for reality TV channels and Fox News. Egregious.
If money is involved, and YouTube is being dishonest about what its real policies are, then I think this easily falls under antitrust laws, if not outright criminal fraud. They are lying when they say that they are an open platform. They are actually a platform that misleads people into working to create content and then unfairly picks the winners through back-channel agreements.
And, on top of YouTube's borderline fraudulent practices with demonetization, they are also the largest video platform in the world, and are increasingly a de facto "channel" on nearly all smart TVs, through the ubiquity of their app. So I really, really hope that the federal government comes down on them hard. I hope that they are forced to have transparent and equally applied policies, and I hope that there is a pathway for small YouTube creators to get some justice.
Having witnessed many cases of individuals who help the poor, donate to cancer/heart patients, and then turn on a dime and advocate the carpet bombing of rebel-held civilian areas, I'm starting to feel empathy is not the issue. The empathy circuits are intact in most human beings. The problem is dehumanization.
Empathy is proportional to how "human" we perceive someone/something to be. We experience the most empathy for close family, friends and individuals we have met in person. As the distance grows the empathy response reduces. It further reduces if the victims are portrayed as uncivilized or savage. It further reduces when the victims are numerous (hundreds of thousands).
At a really high, generalized level (with contract law being a subset), I think the legal system (and in many ways society in general) has yet to really get a grasp on the concept of resource exhaustion attacks. We all know about DDoS as it applies to digital systems, but individual humans and the social systems we make also have actual, hard limits on how much information we can possibly process, store, and communicate/move around. A lot of the foundations of law and debate date back to long before we entered the sharp part of our current information-production J-curve, when it was much more possible for a single human to be a generalist, and when it was simultaneously more difficult and expensive to pump out hundreds or thousands of pages of contract and law.

But those days are past, and at some point, to the extent we want (and we should want) law to work for humans, there need to be actual principles around the fact that if something exceeds the time/memory/intelligence a human could reasonably apply to it, then it shouldn't matter whether "they could in theory" - let alone if they really couldn't, even in theory (not enough seconds left in their lives). Organizations may be able to act as superhumans here, able to subdivide work and specialize toward a single goal, but for regular actual humans, resource cost should be a fundamental consideration of legal validity.
I'd like to see this improved for both sides of the equation, though, FWIW. I agree that as a user/consumer, the EULA/TOS situation is intolerable and that there's a lot of sketchy or outright bullshit stuff in many (maybe most) of them. But I do think some of the contents are also genuinely guarding against liability that frankly just shouldn't exist by default either. If someone writes up some software or makes a service and just puts it out there on the open market with no promises, they shouldn't need any sort of contract at all saying they won't be liable if it's used for life-safety-critical applications, for example (EULAs/TOS are full of this, right up to "this is not for nuclear reactors, you fucking idiot" clauses).

The law should provide for minimal basic standards and simple money-back guarantees for lack of performance. But just as users shouldn't have to read a 50-page EULA that somewhere buried within tries to take away lots of their rights and lay claim to as much of their information as possible, so should developers not have to worry about being sued over a bug unless they've actually affirmatively promised there wouldn't be one. Major liability should tie into things like application promises and SLAs, which would generally be negotiated between entities who can reasonably handle the increased information and legal complexity and think it all through.
I do really hope we see some standardization all the way around. It seems like there could be some real win/wins in this area, and that it's not necessarily that politicized either.
"[Facebook] systems were so laxly designed as to actively encourage vast amounts of data to be sucked out, via API, without the check and balance of those third parties having to gain individual level consent."
That is a gross oversimplification of the issue. There were controls in place to stop excessive data collection.
In fact, the only app in this situation that was allowed to "suck out" "vast amounts of data" was the Obama For America app. According to Carol Davidsen, Obama's former campaign director, "We ingested the entire U.S. social graph" [1], despite the fact that less than 1 million people actively authorized the app to access their data. Approximately 99.5% of the hundreds of millions of people whose data Obama took, with Facebook's blessing (actively allowing the app to bypass its data collection limits), never knew about or authorized Obama to have or use their data.
So only one app was "actively encouraged" to suck out vast amounts of data in the history of the existence of the API. All the rest of them were subject to relatively strict controls, requiring months or years to collect even a small fraction of the data that the Obama app was allowed to collect. The API was not a data free-for-all, except in one unique case with the explicit authorization of Facebook.
> I have funds to invest but every time I look into getting into I cannot bring myself to do it because I cannot stop my brain thinking it's gambling. Without insider knowledge I don't understand how I could beat the market short term.
If you have programming skill and a good understanding of statistics, do the following:
1. Identify a subset of equities in the total market which a) have fairly one-dimensional revenue streams, b) have a market capitalization of at least ~$1-2B, and c) are not prone to extraordinary hype or tech-centric accounting, such that e.g. a "win" or a "loss" in an earnings announcement is fairly straightforward to understand (and therefore you can more easily, if not perfectly, predict how the market will react).
2. Identify a strong, legal source of alternative data that maps directly to the revenue stream of one of these companies. The more difficult to find and collect, the better. Use your programming skills to automate the collection and curation of this dataset.
3. Incubate your dataset for a period of several months, then build it into a timeseries. Using the timeseries, build a model that forecasts the expected revenue of each particular company using historical 10-K and 10-Q documents.
4. For the companies whose data imply a jump in either direction that is very unexpected (according to e.g. the aggregate analyst consensus), take a contrarian position in the equity. If you're feeling very confident and have a higher risk tolerance, study options and take the corresponding derivative position.
5. In particular, establish a target win rate overall, a target tolerable drawdown period overall, and a target exit price (sufficient win or bearable loss) for each position, then follow it.
If you do this correctly and consistently, you will profit significantly and consistently enough that your system will be fully distinguishable from uninformed gambling. To equip you with a bit of meta-analysis here, this outline works because a) all trading strategies profit from finding opportunities to exploit pricing inefficiencies in various securities (or groups thereof), and b) the only way to deliberately identify those opportunities is by having information, access, or techniques that the broader market does not have yet (or else the price would reflect that information).
The great difficulty in this process is finding and analyzing the alternative data in the first place. As a fallback, if you're not confident you can build a trading strategy with this data you can also sell it to hedge funds, who will be very happy to buy it if it actually maps to revenue and is otherwise unknown.
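For what it's worth, here is a heavily simplified Python sketch of what steps 3-5 could look like. Every file name, column name, model choice, threshold, and the consensus figure below is invented for illustration; this is a sketch of the workflow, not a tested or recommended strategy.

    # Hypothetical sketch of steps 3-5 above; all file/column names and numbers are invented.
    import numpy as np
    import pandas as pd

    # Step 3: your self-collected alternative-data series, plus the revenue history
    # you transcribed from historical 10-K/10-Q filings (both files are hypothetical).
    alt = pd.read_csv("alt_signal_weekly.csv", parse_dates=["date"])
    rev = pd.read_csv("quarterly_revenue.csv", parse_dates=["quarter_end"])

    # Roll each weekly observation forward to the fiscal quarter it falls in,
    # then aggregate the signal into one number per quarter.
    alt["quarter_end"] = alt["date"] + pd.offsets.QuarterEnd(0)
    signal = alt.groupby("quarter_end")["signal"].sum()
    df = rev.set_index("quarter_end").join(signal.rename("alt_signal")).dropna()

    # Fit the simplest possible model: revenue as a linear function of the signal.
    slope, intercept = np.polyfit(df["alt_signal"], df["revenue"], 1)

    # Step 4: forecast the not-yet-reported quarter and compare it to the
    # (invented) aggregate analyst consensus.
    forecast = slope * signal.iloc[-1] + intercept
    consensus = 1.25e9
    surprise = (forecast - consensus) / consensus

    # Step 5: only act when the implied surprise clears a threshold you fixed in
    # advance, and manage the position per your pre-set win/drawdown/exit rules.
    THRESHOLD = 0.05
    if abs(surprise) > THRESHOLD:
        print(f"Implied surprise {surprise:+.1%}: candidate for a position per your rules")
    else:
        print(f"Implied surprise {surprise:+.1%}: no edge, stay out")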
Heh, it is amusing that the quote ends with "far-off Palestine". In the mid-90s I returned from living in the middle east for 6 months as a student (Jordan, Israel and Egypt). I personally witnessed an event and upon returning I was reading about said event in a US newspaper and was shocked at how wrong it was. This was the first time I experienced this effect. However, instead of turning to other parts of the paper and trusting what they said, I lost all faith in newspapers on that day and have not read one since. If I know they lied or, more generously, misunderstood what happened about something I know personally about, how can I ever trust anything they write about things I don't know about personally? I can't, I won't, and I haven't.
The bane of society, I think, is mentally healthy people.
They run politics but have absolutely no appreciation for what works for non-healthy people. Which is all the people who actually rely on politics/society.
I think conservatism as a personality is a symptom of good mental health: if you aren't able to do something, it's laziness. So you must just need some motivation: either a beating or some money, etc.
That's true if you're mentally healthy. If not, then the ways you are failing aren't even imaginable from the POV of mental health. And they are as communicable as cancer, or any physical illness.
The last century is full of works by people who have recently taken acid telling everyone how "everything is now different", etc. - how they couldn't even have imagined that things they took for granted about how their minds work were actually variable.
Every mental illness is its own unique form of acid. And unless you've been on an acid trip, or are very well informed medically, you've no idea what it is like.
While there is obviously some risk, in my experience it is very well compensated. If anything, I think an irrational level of risk aversion means there are basically not enough good freelancers to go around, and the demand/pay is high.
The standard formula I recommend to a new freelancer is (what-you-would-make-as-salary / 50 / 5 / 8 * 2) = hourly rate. Anything less is undercharging. So if you would make 100k in salary, you can’t charge less than $100/hour. This is an absolute floor.
If you follow that formula, you’ll be able to cover the healthcare, tax accountant, gaps between work, and other expenses and still make your base salary.
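As a quick sanity check of that formula, here's the arithmetic in a few lines of Python (the salary figure is just the example from above):

    # Floor hourly rate: 50 working weeks x 5 days x 8 hours, doubled to cover
    # healthcare, taxes, accounting, gaps between contracts, and other overhead.
    def floor_hourly_rate(target_salary):
        return target_salary / 50 / 5 / 8 * 2

    print(floor_hourly_rate(100_000))  # 100.0 -> a $100/hour floor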
But the ceiling is much higher. For a company, hiring a full time employee is just too much risk. You have a few hours of interviews to determine if they will be a good match, and if you are wrong, it is an extremely expensive mistake. Likewise if things slow down, you’ll have no flexible capacity. Freelancers are a dream come true.
Also, in my experience, companies do not think of freelancer rates in the same way as salaries. Freelancers are not on the organizational chart as it were. Whereas standard HR hires and the attached salaries come with a load of political and ego driven baggage, freelancers are thought about more like buying a new office printer. If there is a need and the budget, the company will hire you and you might be making 3x what the project lead makes.
It sounds like Mozilla just wants to keep users confined to a pre-approved list of search engines. Are these also business partners with Mozilla, like Google?
Here are some other HTTPS-enabled search engines to test. If I am not mistaken all have been mentioned on HN in the past.
Many Android devices of that age, and even newer ones, had flaws that caused them to fail to properly validate HTTPS connections: they would accept invalid certificates. As a result, every time I fire up an off-the-shelf WiFi Pineapple in public and run SSLSplit (not to be confused with Moxie's SSLStrip), I get credential after credential, typically starting with e-mail accounts. This is obviously bad, because if someone is using an e-mail account on their phone for banking, an attacker could gain access to account recovery.
These are the sorts of transparent attacks you don't notice and which cannot be mitigated with anti-virus or by avoiding sketchy apps. The sketchy stuff is already running on the device in the form of the OS and the apps you use within it. Note that a large number of these vectors were never publicly disclosed, including a vulnerability with Samsung Knox that I reported: when it was in use, the device would accept any cert.
I think you're the one confusing reality with matter. Nobody ever cared about anybody's diary as a physical artifact. What matters (pun intended) is the information.
The way to fight for strong privacy isn't to run around screaming how "these people just don't GET it!". Because they will look at the framed Math PhD certificate on the wall and rightfully conclude that you're starting from wrong assumptions.
Instead, start by imagining the most perfect FBI agents you can. Then, debate them.
That debate must start with agreeing that your idealized agent does indeed have a harder job when all evidence moves from binders full of incriminating paper to an encrypted, impenetrable blob. Not accepting that truth makes you useless for your cause, because nobody who isn't already a convert will listen to you if you deny reality with ill-fitting analogies.
Only then can you make your case, the two main arguments of which should be:
- It is impossible to weaken encryption without running the risk of those weaknesses being exploited, or the keys to the backdoor falling into the hands of bad actors.
- The ability to automate electronic surveillance potentially increases the quantity of surveillance to a point where it also takes on a different quality. Even if judicial oversight remained (which is questionable, considering FISA et al), privacy invasion was previously limited by two informal, yet important, caveats: the costs and resources required to have agents physically search, and the public visibility of such searches.
This is interesting, but means very little to me as a heavy user of Docker.
I don't care about boot time. I care about 1) build time, and 2) ship time. In Docker, they are both fast for the same reason.
At the top of your Dockerfile you install all the heavy dependencies which take forever to download and build. As you get further down the file, you run the steps which tend to change more often. Typically the last step is installing the code itself, which changes with every release.
Because Docker is smart enough to cache each step as a layer, you don't pay the cost for rebuilding those dependencies each time. And yet, you get a bit-for-bit exact copy of your filesystem each time, as if you had installed everything from scratch. Then you can ship that image out to staging and production after you're done locally--and it only has to ship the layers that changed!
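As a rough sketch of that ordering (the base image, package manager, and paths here are made up for illustration; adapt to your own stack):

    # Hypothetical Dockerfile illustrating the layering described above.
    FROM python:3.11-slim

    # Heavy, rarely-changing dependencies first: this layer stays cached until
    # requirements.txt itself changes.
    COPY requirements.txt /app/requirements.txt
    RUN pip install --no-cache-dir -r /app/requirements.txt

    # Application code last: it changes every release, so only this thin layer
    # (and anything after it) gets rebuilt and re-shipped.
    WORKDIR /app
    COPY . /app
    CMD ["python", "main.py"]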
So this article somewhat misses the point. Docker (and its ilk) is still the best tool for fast-moving engineering teams--even if the boot time were much worse than what they measured.
On a different note, I recommend you also google "Let Nothing Go." [Monsanto is being accused in court of hiring third parties to shill on public forums, including Facebook, and it's alleged the internal name of that program is "Let Nothing Go".]
> Google has lost its cute startup image long time ago and people have realized companies are never out there for the social / greater good, just for profits.
It's more than that. Their behavior has changed.
Some years ago everyone was on their side because they were fighting for network neutrality and building Google Fiber and supporting Linux and providing Chilling Effects links to notify users of DMCA censorship and generally sticking it to the MPAA/Comcast/Microsoft.
These days they're not only not really doing that anymore, now they're misidentifying videos with YouTube Content ID and infecting web standards with EME and tracking everyone much more comprehensively than they used to.
People only think you're on their side as long as you continue to do things that show you're on their side.
About 10 years ago I was in the market for a used Toyota Camry, so I wrote a script that scraped the used-car classifieds and extracted models within a 4-year range and within price and mileage caps. This got plotted as 4 overlaid point series, producing graphs that looked roughly like 1/x, with around 200-300 data points total.
With that in hand I went to the dealer offering the best match, told them which car I wanted and how much I was going to pay for it. I put the graph in front of the salesperson, who was floored, went back to his manager, and gave me the car for that price.
That was before I even knew what the term "market price" meant.