Here are some web trends for 2020:
Responsive web design in 2020 should be a given because every serious project that you create should look good and be completely usable on all devices. But there’s no need to over-complicate things.
The article Web Development in 2020: What Coding Tools You Should Learn gives an overview of what you should learn to become a web developer in 2020.
You might have seen Web 3.0 on some slides. What is the definition of Web 3.0 we are talking about here?
There seem to be many different ones to choose from… Some claim that you need to blockchain the cloud IoT or you’ll just get a stack overflow in the mainframe, but I don’t agree with that.
The amount of information shown in the address bar will be reduced in some web browsers. With the release of Chrome 79, Google completes its goal of erasing www from the browser by no longer allowing Chrome users to automatically show the trivial www subdomain in the address bar.
You should still aim to build a quality web site and avoid the signs of a low-quality one. Get good inspiration for your web site design.
Still, a clear and logical structure is the first thing to think through before work on the website gears up. For search robots, the website’s structure is its internal links: the more links point to a page, the higher its priority within the website, and the more often the search engine crawls it.
You should upgrade your web site, but you need to do it sensibly and well. Remember that a site upgrade can ruin your search engine visibility if you do it badly. The biggest risk to your site’s free search engine visibility is a site redesign. A bad technology choice can ruin the visibility of a new site months before launch. Many new sites built on JavaScript application frameworks do not benefit in any way from the new technologies. Before you jump on this bandwagon, think critically about whether your site will benefit from the dynamic capabilities of these technologies more than they can damage your search engine visibility. Well-built redirects can help you preserve most of your inbound link value after site changes.
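To make that last point concrete, here is a minimal sketch of permanent redirects on an Express-based Node site; the old and new paths are made-up examples, not anything from a real project:

```typescript
// redirects.ts - a minimal sketch of 301 redirects after a site redesign.
// The old/new paths below are hypothetical; map them from your real URL changes.
import express from "express";

const app = express();

// Map of retired URLs to their replacements on the redesigned site.
const redirects: Record<string, string> = {
  "/old-blog/web-trends-2019": "/blog/web-trends",
  "/products.html": "/products",
};

// Issue permanent (301) redirects so search engines transfer link value
// from the old URLs to the new ones.
app.use((req, res, next) => {
  const target = redirects[req.path];
  if (target) {
    res.redirect(301, target);
  } else {
    next();
  }
});

app.listen(3000, () => console.log("Listening on http://localhost:3000"));
```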
If you go the JavaScript framework route on your web site, keep in mind that there are many frameworks to choose from, and you need to choose carefully to find one that fits your needs and will still be actively developed in the future.
JavaScript survey: Devs love a bit of React, but Angular and Cordova declining. And you’re not alone… a chunk of pros also feel JS is ‘overly complex’
Keep in mind the recent changes to video players and Google Analytics. And for animated content, keep in mind that GIF animations still exist as a potential tool to use.
Keep security in mind. There is a security skill gap for many people. I’m not going to say anything that anyone who runs a public-facing web server doesn’t already know: the majority of these automated blind requests are for WordPress directories and files. PHP exploits are a distant second. And there are many other things that are automatically attacked. Test your site with security scanners.
APIs now account for 40% of the attack surface for all web-enabled apps. OWASP has identified 10 areas where enterprises can lower that risk. There are many vulnerability scanning tools available. Check also How to prepare and use Docker for web pentest. Mozilla has a nice online tool for web site security scanning.
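As a quick first pass before running a full scanner, something like the following sketch can report which common security headers a site is missing. It assumes Node 18+ with the built-in fetch, and the header list is just a typical baseline, not an exhaustive one:

```typescript
// check-headers.ts - a rough first-pass check for common security headers.
// Run with: npx ts-node check-headers.ts https://example.com
const url = process.argv[2] ?? "https://example.com";

// Headers that scanners such as Mozilla's online tool typically look for.
const expectedHeaders = [
  "strict-transport-security",
  "content-security-policy",
  "x-content-type-options",
  "x-frame-options",
  "referrer-policy",
];

async function checkHeaders(): Promise<void> {
  // A HEAD request is enough, since we only care about response headers.
  const response = await fetch(url, { method: "HEAD" });
  for (const header of expectedHeaders) {
    const value = response.headers.get(header);
    console.log(value ? `OK      ${header}: ${value}` : `MISSING ${header}`);
  }
}

checkHeaders().catch((err) => console.error("Request failed:", err));
```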
The slow death of Flash continues. If you still use Flash, say goodbye to it. Google says goodbye to Flash, will stop indexing Flash content in search.
Use HTTPS on your site, because without it your site’s search engine visibility will drop. It is nowadays easy to get HTTPS certificates.
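Once you have a certificate (for example from Let’s Encrypt), serving your site over HTTPS from Node takes only a few lines. This is a minimal sketch; the certificate file paths are assumptions you need to replace with your own:

```typescript
// https-server.ts - a minimal sketch of serving content over HTTPS in Node.
// The certificate paths below are assumptions; point them at your own files
// (for example the ones issued by Let's Encrypt for your domain).
import https from "node:https";
import fs from "node:fs";

const options = {
  key: fs.readFileSync("/etc/letsencrypt/live/example.com/privkey.pem"),
  cert: fs.readFileSync("/etc/letsencrypt/live/example.com/fullchain.pem"),
};

https
  .createServer(options, (req, res) => {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello over HTTPS\n");
  })
  .listen(443, () => console.log("HTTPS server listening on port 443"));
```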
Write good content and avoid publishing fake news on your site. Finland is winning the war on fake news; what it’s learned may be crucial to Western democracy.
Think about who you are aiming your business web site at. Analyze who your “true visitor” or “power user” is. A true visitor is a visitor to a website who shows a genuine interest in the content of the site. True visitors are the people who should get more out of your site and who have the potential to increase the sales and impact of your business. The content that your business offers is intended to attract visitors who are interested in it. When they show their interest, they are also very likely to be in the target group of the company.
Should you think of your content management system (CMS) choice? Flexibility, efficiency, better content creation: these are just some of the promised benefits of a new CMS. Here is How to convince your developers to change CMS.
Here is some fun for the end:
Did you know that when a spider creates a web at a place, the place is called a website?
Confession: How JavaScript was made.
2,361 Comments
Tomi Engdahl says:
European Cybersecurity Month (ECSM) is the European Union’s annual campaign dedicated to promoting cybersecurity.
Here Niamh Martin shares the story of how her social media account was hacked and her business almost destroyed.
https://cybersecuritymonth.eu/social-media-hacked
Tomi Engdahl says:
Google lowers Play Store fees to 15% on subscription apps, as low as 10% for media apps
https://techcrunch.com/2021/10/21/google-lowers-play-store-fees-to-15-on-subscriptions-apps-as-low-as-10-for-media-apps/?tpcc=tcplusfacebook
Google is lowering commissions on all subscription-based businesses on the Google Play Store, the company announced today. Previously, the company had followed Apple’s move by reducing commissions from 30% to 15% on the first $1 million of developer earnings. Now, it will lower the fees specifically for app makers who generate revenue through recurring subscriptions. Instead of charging them 30% in the first year, which lowers to 15% in year two and beyond, Google says developers will only be charged 15% from day one.
The company says 99% of developers will qualify for a service fee of 15% or less, as Google is also further reducing fees for specific vertical apps in the Play Media Experience Program. These will be adjusted to as low as 10%, it says.
Tomi Engdahl says:
New York Times:
Internal docs detail Facebook’s struggles with violence-inciting content in India, including failure to designate some politically-connected groups as dangerous — Internal documents show a struggle with misinformation, hate speech and celebrations of violence in the country, the company’s biggest market.
https://www.nytimes.com/2021/10/23/technology/facebook-india-misinformation.html
Tomi Engdahl says:
The Information:
Sources: over 12 news outlets, including AP and Fox Business, formed a consortium to sift leaked Facebook docs from Frances Haugen, reporting stories separately
New Facebook Storm Nears as CNN, Fox Business and Other Outlets Team Up on Whistleblower Docs
https://www.theinformation.com/articles/new-facebook-storm-nears-as-cnn-fox-business-and-other-outlets-team-up-on-whistleblower-docs
It’s not often that major news organizations coordinate to sift through a large trove of leaked company documents and agree not to publish stories about them until a certain date. But in the world of news related to Facebook, these are extraordinary times.
Something similar happened in 2016 when the International Consortium of Investigative Journalists published financial leaks in an investigation called the Panama Papers, uncovering details of the global elite’s tax havens, and in 2013 after ex–National Security Agency contractor Edward Snowden released top-secret documents, kicking off a storm of coverage in global newspapers about how the U.S. and other governments spy on citizens and organizations.
Now it’s Facebook’s turn.
Upcoming news stories based on thousands of Facebook documents—which whistleblower Frances Haugen worked to release to more than a dozen news organizations as diverse as the Associated Press, CNN, Le Monde, Reuters and the Fox Business network—aren’t likely to be as revelatory as those epic leaks of times past. For one, the Facebook documents were the basis for the series of impactful stories from the Wall Street Journal, which received them from Haugen months ago. Those pieces revealed how the company’s research showed Facebook’s products could be “toxic” for some teens.
Michael Riley / Bloomberg:
Internal docs show Facebook staff faulted the company for failing to thwart the proliferation of Groups, like Stop The Steal, that fomented January 6 violence
Facebook Faulted by Staff Over Jan. 6 Insurrection: ‘Abdication’
Employees and the company’s own research highlight ways Facebook failed to police its platforms ahead of the siege on the Capitol
https://www.bloomberg.com/news/articles/2021-10-22/facebook-faulted-by-staff-over-jan-6-insurrection-abdication
Tomi Engdahl says:
Brandy Zadrozny / NBC News:
Internal docs: Facebook employees created a test account in 2019 and within days it was recommended extreme and conspiratorial content, including QAnon Groups
‘Carol’s Journey’: What Facebook knew about how it radicalized users
https://www.nbcnews.com/tech/tech-news/facebook-knew-radicalized-users-rcna3581
Internal documents suggest Facebook has long known its algorithms and recommendation systems push some users to extremes.
In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.
Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.
Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.
Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.
That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”
The body of research consistently found Facebook pushed some users into “rabbit holes,” increasingly narrow echo chambers where violent conspiracy theories thrived. People radicalized through these rabbit holes make up a small slice of total users, but at Facebook’s scale, that can mean millions of individuals.
Craig Timberg / Washington Post:
A new whistleblower, a former member of Facebook’s Integrity team, files an SEC complaint alleging Facebook prized profits over fighting hate speech and misinfo
New whistleblower claims Facebook allowed hate, illegal activity to go unchecked
https://www.washingtonpost.com/technology/2021/10/22/facebook-new-whistleblower-complaint/
Latest complaint to the SEC blames top leadership for failing to warn investors about serious problems at the company
A new whistleblower affidavit submitted by a former Facebook employee Friday alleges that the company prizes growth and profits over combating hate speech, misinformation and other threats to the public, according to a copy of the document obtained by The Washington Post.
The whistleblower’s allegations, which were declared under penalty of perjury and shared with The Post on the condition of anonymity, echoed many of those made by Frances Haugen, another former Facebook employee whose scathing testimony before Congress this month intensified bipartisan calls for federal action against the company. Haugen, like the new whistleblower, also made allegations to the Securities and Exchange Commission, which oversees publicly traded companies.
The new whistleblower is a former member of Facebook’s Integrity team whose identity is known to The Post and who agreed to be interviewed about the issues raised in the legal filing.
Tomi Engdahl says:
Lizzy Lawrence / Protocol:
A look at productivity influencers who center their content around apps like Notion, Excel, and Asana, offering advice on how to stay organized — This is the creators’ internet. The rest of us are just living in it. We’re accustomed to the scores of comedy TikTokers …
Meet the productivity app influencers
https://www.protocol.com/workplace/productivity-app-influencers
Within the realm of productivity influencing, there is a somewhat surprising sect: Creators who center their content around a specific productivity app.
This is the creators’ internet. The rest of us are just living in it. We’re accustomed to the scores of comedy TikTokers, beauty YouTubers and lifestyle Instagram influencers gracing our feeds. A significant portion of these creators are productivity gurus, advising their followers on how they organize their lives.
Within the realm of productivity influencing, there’s a surprising sect: Creators who center their content around a specific productivity app. They’re a powerful part of these apps’ ecosystems, drawing users to the platform and offering helpful tips and tricks. Notion in particular has a huge influencer family, with #notion gaining millions of views on TikTok.
The productivity app influencers Protocol spoke to are not getting paid on an ongoing basis by the apps they promote. They’re independently building careers and followings off of them.
Tomi Engdahl says:
Alex Heath / The Verge:
Internal documents show Facebook is struggling to attract American users under the age of 30, with US teenage users declining by 13% since 2019 — The world’s largest social network is internally grappling with an existential crisis: an aging user base — Earlier this year …
Facebook’s lost generation
https://www.theverge.com/22743744/facebook-teen-usage-decline-frances-haugen-leaks?scrolla=5eb6d68b7fedc32c19ef33b4
The world’s largest social network is internally grappling with an existential crisis: an aging user base
Earlier this year, a researcher at Facebook shared some alarming statistics with colleagues.
Teenage users of the Facebook app in the US had declined by 13 percent since 2019 and were projected to drop 45 percent over the next two years, driving an overall decline in daily users in the company’s most lucrative ad market. Young adults between the ages of 20 and 30 were expected to decline by 4 percent during the same timeframe. Making matters worse, the younger a user was, the less on average they regularly engaged with the app. The message was clear: Facebook was losing traction with younger generations fast.
The “aging up issue is real,” the researcher wrote in an internal memo. They predicted that, if “increasingly fewer teens are choosing Facebook as they grow older,” the company would face a more “severe” decline in young users than it already projected.
The findings, echoed by other internal documents and my conversations with current and former employees, show that Facebook sees its aging user base as an existential threat to the long-term health of its business and that it’s trying desperately to correct the problem with little indication that its strategy will work. If it doesn’t correct course, the 17-year-old social network could, for the first time, lose out on an entire generation. And while Instagram remains incredibly popular with teens, Facebook’s own data shows that they are starting to engage with the app less.
Facebook’s struggle to attract users under the age of 30 has been ongoing for years, dating back to as early as 2012. But according to the documents, the problem has grown more severe recently. And the stakes are high. While it famously started as a networking site for college students, employees have predicted that the aging up of the app’s audience — now nearly 2 billion daily users — has the potential to further alienate young people, cutting off future generations and putting a ceiling on future growth.
Tomi Engdahl says:
Alexandra S. Levine / Politico:
Internal documents show Facebook had no clear playbook for handling the dangerous content delegitimizing the US elections ahead of the January 6 riots — Facebook’s rules left giant holes for U.S. election falsehoods to metastasize. On the day of the Capitol riot, employees began pulling levers to try to stave off the peril.
Inside Facebook’s struggle to contain insurrectionists’ posts
https://www.politico.com/news/2021/10/25/facebook-jan-6-election-claims-516997
Facebook’s rules left giant holes for U.S. election falsehoods to metastasize. On the day of the Capitol riot, employees began pulling levers to try to stave off the peril.
In the days and hours leading up to the Jan. 6 Capitol insurrection, engineers and other experts in Facebook’s Elections Operations Center were throwing tool after tool at dangerous claims spreading across the platform — trying to detect false narratives of election fraud and squelch other content fueling the rioters.
But much of what was ricocheting across the social network that day fell into a bucket of problematic material that Facebook itself has said it doesn’t yet know how to tackle.
Internal company documents show Facebook had no clear playbook for handling some of the most dangerous material on its platform: content delegitimizing the U.S. elections. Such claims fell into a category of “harmful non-violating narratives” that stopped just short of breaking any rules. Without set policies for how to deal with those posts during the 2020 cycle, Facebook’s engineers and other colleagues were left scrambling to respond to the fast-escalating riot at the Capitol — a breakdown that triggered outrage across the company’s ranks, the documents show.
“How are we expected to ignore when leadership overrides research based policy decisions to better serve people like the groups inciting violence today,” one employee asked on a Jan. 6 message board, responding to memos from CEO Mark Zuckerberg and CTO Mike Schroepfer. “Rank and file workers have done their part to identify changes to improve our platform but have been actively held back.”
Facebook for years has been collecting data and refining its strategy to protect the platform and its billions of users, particularly during post-election periods when violence is not uncommon. The company has taken added precautions in parts of the world such as Myanmar and India, which have seen deadly unrest during political transitions, including using “break the glass” measures — steps reserved for critical crises — to try to thwart real-world harm.
The Elections Operations Center — effectively a war room of moderators, data scientists, product managers and engineers that monitors evolving situations — quickly started turning on what they call “break the glass” safeguards from the 2020 election that dealt more generally with hate speech and graphic violence but which Facebook had rolled back after Election Day.
Sometime late on Jan. 5 or early Jan. 6, engineers and others on the team also readied “misinfo pipelines,” tools that would help them see what was being said across the platform and get ahead of the spread of misleading narratives — like one that Antifa was responsible for the riot, or another that then-President Donald Trump had invoked the Insurrection Act to stay in power. Shortly after, on Jan. 6, they built another pipeline to sweep the site for praise and support of “storm the Capitol” events, a post-mortem published in February shows.
But they faced delays in getting needed approvals to carry out their work. They struggled with “major” technical issues. And above all, without set guidance on how to address the surging delegitimization material they were seeing, there were misses and inconsistencies in the content moderation, according to the post-mortem document — an issue that members of Congress, and Facebook’s independent oversight board, have long complained about.
The technologists were forced to make quick and difficult calls to address nuances in the misinformation, such as whether future-tense statements should be treated differently than those in the past, and how pronouns (“he” versus “Trump,” for example) might affect results.
Data captured the following morning, Jan. 7, found that Facebook’s artificial intelligence tools had struggled to address a large portion of the content related to the storming of the Capitol.
“I don’t think that Facebook’s technical processes failed on January 6,” said Emerson Brooking, resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab, emphasizing how the post-mortem shows engineers and others working hard to reduce harm on that day. “Instead, I think that Facebook’s senior leadership failed to deal aggressively enough with the election delegitimization that made Jan. 6 possible in the first place.”
“We’re FB, not some naive startup,” one employee wrote on a Jan. 6 message board. “With the unprecedented resources we have, we should do better.”
Facebook’s known unknowns
Facebook itself has identified one major, problematic category that it says slips through the cracks of its existing policies.
It’s a gray area of “harmful non-violating narratives” — material that could prove troublesome but nonetheless remains on Facebook because it does not explicitly break the platform’s rules, according to a March report from a group of Facebook data scientists with machine learning expertise.
Narratives questioning the 2020 U.S. election results fell into that bucket. That meant influential users were able to spread claims about a stolen election without actually crossing any lines that would warrant enforcement, the document said.
When weighing content in this gray zone and others, like vaccine hesitancy, Facebook errs on the side of free speech and maintains a high bar for taking action on anything ambiguous that does not expressly violate its policies, according to the report. Making these calls is further complicated by the fact that context, like how meaning may vary between cultures, is hard for AI and human reviewers to parse. But the social network has struggled to ward off harm by limiting its own ability to act without absolute certainty that a post is dangerous, per the report — a burden of proof that the data scientists said is “extremely challenging” to meet.
“We recently saw non-violating content delegitimizing the U.S. election results go viral on our platforms,” they wrote. “The majority of individual instances of such could be construed as reasonable doubts about election processes, and so we did not feel comfortable intervening on such content.”
“Facebook really doesn’t have its arms around the larger content moderation challenge,” he said. “It’s got an often ambiguous, often contradictory, and problematic set of standards.”
One Facebook staffer, responding on an internal message board to the March report, said that “minimizing harm beyond the sharp-lines or worst-of-the-worst content” should be a top focus because these topics are “actually more harmful than the stuff we’re allowed to remove.”
“There could have been a lot more done, especially by leadership, [as far as] looking at some of these edge cases and trying to think through some of this stuff,”
Harmful non-violating content was far from the only obstacle preventing Facebook from reining in dangerous material after the vote. It also had a hard time curtailing groups from the far-right Stop the Steal movement — which alleged the election had been stolen from Trump — because they were part of grassroots activity fueled by real people with legitimate accounts. Facebook has rules deterring what it calls “coordinated inauthentic behavior,” like deceptive bots or fake accounts, but “little policy around coordinated authentic harm,” per a report on the growth of harmful networks on Facebook.
Facebook took down the original Stop the Steal group in November, but it did not ban content using that phrase until after the Jan. 6 riot.
A culture of wait-and-see
Both the inability to firm up policies for borderline content and the lack of plans around coordinated but authentic misinformation campaigns reflect Facebook’s reluctance to work through issues until they are already major problems, according to employees and internal documents.
That’s in contrast to the proactive approach to threats that Facebook frequently touts — like proactively removing misinformation that violates its safety and security standards and going after foreign interference campaigns before they can manipulate public debate.
“We are actively incentivized against mitigating problems until they are already causing substantial harm,” said the document.
“Continuing to take a primarily reactive approach to unknown harms undermines our overall legitimacy efforts,” the report continued.
“There will always be evolving adversarial tactics and emerging high severity topics (covid vaccine misinformation, conspiracy theories, the next time extremist activities are front and center in a major democracy, etc.),” the report said. “We need consistent investment in proactively addressing the intersection of these threats in a way that standard integrity frameworks prioritizing by prevalence do not support.”
“Employees are tired of ‘thoughts and prayers’ from leadership,” another wrote. “We want action.”
Tomi Engdahl says:
Washington Post:
Internal Facebook docs and interviews detail Mark Zuckerberg’s decisions to prioritize growth over safety, including censoring “anti-state” posts in Vietnam — Late last year, Mark Zuckerberg faced a choice: Comply with demands from Vietnam’s ruling Communist Party …
https://www.washingtonpost.com/technology/2021/10/25/mark-zuckerberg-facebook-whistleblower/
Tomi Engdahl says:
Mike Isaac / New York Times:
Internal docs show how Facebook discussed hiding the Like button to alleviate stress and anxiety, but users interacted and shared fewer posts — Likes and shares made the social media site what it is. Now, company documents show, it’s struggling to deal with their effects.
https://www.nytimes.com/2021/10/25/technology/facebook-like-share-buttons.html
Tomi Engdahl says:
BBC:
Frances Haugen told UK MPs that Facebook is “unquestionably making hate worse”, and warns that Instagram is “more dangerous than other forms of social media”
Frances Haugen says Facebook is ‘making hate worse’
https://www.bbc.com/news/technology-59038506
Whistleblower Frances Haugen has told MPs Facebook is “unquestionably making hate worse”, as they consider what new rules to impose on big social networks.
Ms Haugen was talking to the Online Safety Bill committee in London.
She said Facebook safety teams were under-resourced, and “Facebook has been unwilling to accept even little slivers of profit being sacrificed for safety”.
And she warned that Instagram was “more dangerous than other forms of social media”.
While other social networks were about performance, play, or an exchange of ideas, “Instagram is about social comparison and about bodies… about people’s lifestyles, and that’s what ends up being worse for kids”, she told a joint committee of MPs and Lords.
She said Facebook’s own research described one problem as “an addict’s narrative” – where children are unhappy, can’t control their use of the app, but feel like they cannot stop using it.
Tomi Engdahl says:
Jacob Kastrenakes / The Verge:
Zuckerberg says he’s redirected Facebook teams to serve young adults over older users, and that significant changes to Instagram will lean into video and Reels
Facebook says it’s refocusing company on ‘serving young adults’
Expect changes to Instagram to highlight Reels
https://www.theverge.com/2021/10/25/22745622/facebook-young-adults-refocusing-teams?scrolla=5eb6d68b7fedc32c19ef33b4
Tomi Engdahl says:
Gilad Edelman / Wired:
Leaked docs detail staff’s suggestions to solve Facebook’s problems: deprioritize engagement, reduce AI reliance, and focus on safety in developing countries
How to Fix Facebook, According to Facebook Employees
Internal research documents provide a blueprint for solving the company’s biggest problems.
https://www.wired.com/story/how-to-fix-facebook-according-to-facebook-employees/
Tomi Engdahl says:
Steven Levy / Wired:
Internal docs reveal farewell posts of some Facebook employees disillusioned with a company unwilling or unable to change
https://www.wired.com/story/facebook-papers-badge-posts-former-employees/
Tomi Engdahl says:
Kyle Wiggers / VentureBeat:
AWS launches new EC2 instances powered by AI accelerators from Intel’s Habana, claims 40% better price-performance to train ML models over latest GPU instances
The Facebook Papers’ missing piece
A former integrity worker on Facebook’s internal “posting culture” and why it’s easy to misread the Haugen leaks
https://www.platformer.news/p/the-facebook-papers-missing-piece
Tomi Engdahl says:
Casey Newton / Platformer:
Frances Haugen’s documents have been useful to the press and groups opposing Facebook, but reporting should now expand to examine Haugen and her backers’ goals — At the end of 2019, the group of Facebook employees charged with preventing harms on the network gathered to discuss the year ahead.
The tier list: How Facebook decides which countries need protection
PLUS: some thoughts on the Facebook Papers
https://www.platformer.news/p/-the-tier-list-how-facebook-decides
At the end of 2019, the group of Facebook employees charged with preventing harms on the network gathered to discuss the year ahead. At the Civic Summit, as it was called, leaders announced where they would invest resources to provide enhanced protections around upcoming global elections — and also where they would not. In a move that has become standard at the company, Facebook had sorted the world’s countries into tiers.
Brazil, India, and the United States were placed in “tier zero,” the highest priority. Facebook set up “war rooms” to monitor the network continuously. They created dashboards to analyze network activity and alerted local election officials to any problems.
Germany, Indonesia, Iran, Israel and Italy were placed in tier one. They would be given similar resources, minus some resources for enforcement of Facebook’s rules and for alerts outside the period directly around the election.
In tier two, 22 countries were added. They would have to go without the war rooms, which Facebook also calls “enhanced operations centers.”
The rest of the world was placed into tier three. Facebook would review election-related material if it was escalated to them by content moderators. Otherwise, it would not intervene.
Tomi Engdahl says:
Issie Lapowsky / Protocol:
A look at the Integrity Institute, a think tank formed by two former Facebook employees that wants to bring together tech integrity workers studying tech misuse
They left Facebook’s integrity team. Now they want the world to know how it works.
Without breaking their NDAs.
https://www.protocol.com/policy/integrity-institute
Shortly before he left Facebook in October 2019, Jeff Allen published his last report as a data scientist for the company’s integrity team — the team Facebook Papers whistleblower Frances Haugen has recently made famous.
The report revealed, as Allen put it at the time, some “genuinely horrifying” findings. Namely, three years after the 2016 election, troll farms in Kosovo and Macedonia were continuing to operate vast networks of Facebook pages filled with mostly plagiarized content targeting Black Americans and Christian Americans on Facebook. Combined, the troll farms’ pages reached 140 million Facebook users a month, dwarfing the reach of even Walmart’s Facebook presence.
Not only had Facebook failed to stop its spread, Allen wrote in a 20-page report that was recently leaked to and published by MIT Tech Review, but the vast majority of the network’s reach came from Facebook’s ranking algorithms.
“I have no problem with Macedonians reaching US audiences,” Allen wrote. “But if you just want to write python scripts that scrape social media and anonymously regurgitate content into communities while siphoning off some monetary or influence reward for yourself… well you can fuck right off.”
Tomi Engdahl says:
David McCabe / New York Times:
A look at efforts by countries, including the UK, EU, Japan, and China, and companies, such as Facebook and YouTube, to implement age verification checks
Anonymity No More? Age Checks Come to the Web.
https://www.nytimes.com/2021/10/27/technology/internet-age-check-proof.html
Tomi Engdahl says:
Loveday Morris / Washington Post:
Docs: EU politicians said Facebook algorithm changes in 2019 negatively impacted politics, particularly in Poland, where many blamed Facebook for polarization
https://www.washingtonpost.com/world/2021/10/27/poland-facebook-algorithm/
Tomi Engdahl says:
New York Times:
Facebook tells employees to preserve internal docs and communications related to its business since 2016, as governments and legislative bodies begin inquiries — Facebook has told employees to “preserve internal documents and communications since 2016” that pertain to its businesses …
https://www.nytimes.com/2021/10/27/technology/facebook-legal-communications.html
Wall Street Journal:
Sources: FTC staff are investigating whether Frances Haugen’s documents show Facebook violated a 2019 privacy settlement that included a record $5B fine
Federal Trade Commission Scrutinizing Facebook Disclosures
Lawmakers want agency to determine if Facebook engaged in deceptive conduct; company says internal research is mischaracterized
https://www.wsj.com/articles/facebook-ftc-privacy-kids-11635289993?mod=djemalertNEWS
Tomi Engdahl says:
USA Today:
USA Today launches an SMS text chat service, letting digital subscribers connect with its fact checkers — We’re living in an age of misinformation. From coronavirus to climate change, via TikTok trends and Facebook memes, being certain about the truth is more important than ever – and more difficult to distinguish.
https://eu.usatoday.com/story/news/2021/10/26/join-text-chat-usa-todays-expert-fact-checkers/8551649002/
Tomi Engdahl says:
Washington Post:
Internal docs show Facebook staffers agonized over whether the company was nudging news organizations to produce “darker, more divisive content” — In November 2018, the staff of Facebook’s fledgling Civic Integrity department got a look at some eye-opening internal research …
https://www.washingtonpost.com/business/2021/10/26/conservative-media-misinformation-facebook/
Tomi Engdahl says:
Make your own mock API (super simple)
https://www.youtube.com/watch?v=FLnxgSZ0DG4
Hello everyone! In this week’s video I show you how to make your own RESTful API and deploy it onto the internet in just a few easy steps. This is thanks to the https://github.com/typicode/json-server package, which gives you a mock REST API with zero coding in less than 30 seconds.
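A rough sketch of the idea (the db.json contents and port below are example assumptions, not taken from the video):

```typescript
// mock-api-demo.ts - a rough sketch of consuming a json-server mock API.
//
// 1. Create a db.json file (example contents, adjust to your own data):
//    { "posts": [ { "id": 1, "title": "Hello world" } ] }
// 2. Start the mock server: npx json-server --watch db.json --port 3000
// 3. json-server now exposes REST routes such as GET and POST /posts.

async function main(): Promise<void> {
  // Read the whole collection.
  const posts = await fetch("http://localhost:3000/posts").then((r) => r.json());
  console.log("Existing posts:", posts);

  // Create a new record; json-server assigns the id and writes it to db.json.
  const created = await fetch("http://localhost:3000/posts", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ title: "Created from the mock API demo" }),
  }).then((r) => r.json());
  console.log("Created:", created);
}

main().catch(console.error);
```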
Tomi Engdahl says:
APIs for Beginners – How to use an API (Full Course / Tutorial)
https://www.youtube.com/watch?v=GZvSYJDk-us
What is an API? Learn all about APIs (Application Programming Interfaces) in this full tutorial for beginners. You will learn what APIs do, why APIs exist, and the many benefits of APIs. APIs are used all the time in programming and web development so it is important to understand how to use them.
You will also get hands-on experience with a few popular web APIs. As long as you know the absolute basics of coding and the web, you’ll have no problem following along.
REST API concepts and examples
https://www.youtube.com/watch?v=7YcW25PHnAA
This video introduces the viewer to some API concepts by making example calls to Facebook’s Graph API, Google Maps’ API, Instagram’s Media Search API, and Twitter’s Status Update API.
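As a minimal sketch of the same idea, here is a plain REST call from TypeScript; it uses JSONPlaceholder, a free fake demo API, instead of the authenticated APIs named above:

```typescript
// rest-example.ts - a minimal sketch of calling a REST API over HTTP.
// Uses JSONPlaceholder, a free fake REST API, so no API key is needed.

interface Post {
  userId: number;
  id: number;
  title: string;
  body: string;
}

async function getPost(id: number): Promise<Post> {
  const response = await fetch(`https://jsonplaceholder.typicode.com/posts/${id}`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return (await response.json()) as Post;
}

getPost(1)
  .then((post) => console.log(`Post #${post.id}: ${post.title}`))
  .catch(console.error);
```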
Tomi Engdahl says:
https://techcrunch.com/2021/10/29/mastodon-issues-30-day-ultimatum-to-trumps-social-network-over-misuse-of-its-code/?tpcc=tcplusfacebook
Tomi Engdahl says:
https://techcrunch.com/2021/10/21/facebook-agrees-terms-to-pay-french-publishers-for-news-reuse/
Tomi Engdahl says:
Inside the Trump SPAC deal taking on Twitter, Disney, CNN and every major tech company
https://techcrunch.com/2021/10/21/inside-the-trump-spac-deal-taking-on-twitter-disney-cnn-and-every-major-tech-company/
Tomi Engdahl says:
Why does a top politician post food recipes on social media? Here is why – and 6 tips for building a good personal brand
https://www.dna.fi/blogi/-/blogs/miksi-huippupoliitikko-somettaa-ruokaresepteja-tassa-syy-ja-6-vinkkia-hyvan-henkilobrandin-rakentamiseksi?fbclid=IwAR0739I9c_wAo7nehb1VOuxSvKhlUzosH4Z7YcDyE3sjFCReNmBZduTrTJU
What do top politicians and a short story about morning porridge have in common? DNA’s communications manager Julia Arko shares her tips for building a good personal brand in her blog post.
Tomi Engdahl says:
https://www.freecodecamp.org/news/what-is-open-graph-and-how-can-i-use-it-for-my-website/
Tomi Engdahl says:
https://wpmudev.com/blog/running-wordpress-business-with-wpmudev/
Tomi Engdahl says:
Microsoft partners with Shopify to bring merchant listings to Bing, Edge and Microsoft Start
https://techcrunch.com/2021/10/25/microsoft-partners-with-shopify-to-bring-merchant-listings-to-bing-edge-and-microsoft-star/
Earlier this year, Google unveiled a partnership with Shopify that gave the e-commerce platform’s more than 1.7 million merchants the ability to reach consumers through Google Search and other services. Now, Microsoft is announcing a similar deal. The company recently said it’s teaming up with Shopify to expand product selection across its own search engine, Microsoft Bing, as well as within the Shopping tab on its Microsoft Edge browser and on its newly launched news service, Microsoft Start.
Tomi Engdahl says:
https://blog.attractive.ai/2021/09/can-ai-monkey-understand-your-website.html
Tomi Engdahl says:
Design’s dirty secrets and how to address experience bias
https://techcrunch.com/2021/10/03/designs-dirty-secrets-and-how-to-address-experience-bias/
I had a conversation recently with a huge technology company, and they wanted to know if their work in human-centered design guards against experience bias. The short answer? Probably not.
When we say experience bias, we’re not talking about our own cognitive biases; we’re talking about it at the digital interface layer (design, content, etc.). The truth is that pretty much every app and site you interact with is designed either based on the perceptions and ability of the team that created it, or for one or two high-value users. If users don’t have experience with design conventions, lack digital understanding, don’t have technical access, etc., we’d say the experience is biased against them.
The solution is to shift to a mindset where organizations create multiple versions of a design or experience customized to the needs of diverse users.
Tomi Engdahl says:
https://www.cyberciti.biz/linux-news/google-chrome-extension-to-removes-password-paste-blocking-on-website/
Tomi Engdahl says:
Meg Jones Wall / Wired:
How payment processors hold the power to decide which products and services can be purchased online, as Stripe bans businesses like tarot card reading
Stripe Discriminates Against Witches
https://www.wired.com/story/stripe-occult-witches-payment-processing-sacred-arts/
Payment processing companies decide who is empowered to buy and sell online—and their policies show a gross misunderstanding of metaphysical practitioners.
When I decided to start offering tarot readings, selling them through my website seemed like the easiest method. I’m a writer, and prefer to give my readings in a written format—and after building out my site on Squarespace, the integration with Stripe took only a few minutes to set up. I eventually added more products—digital workbooks and study guides that had gained traction through my growing Instagram following—and built up a steady business selling these goods online.
After a few months, I received a notice from Stripe that my sales violated their terms of service, as my tarot work seemed to fit into their broad category of “psychic services” and was therefore considered a restricted, “high risk” business. After emailing them back to defend my business, to no avail, I restructured my payments to work with PayPal and continued to offer services through my website in this more limited capacity.
Tomi Engdahl says:
Siddharth Venkataramakrishnan / Financial Times:
A look at “creepypasta”, ghost stories such as Slender Man that spread online, and how decentralized myth-making can explain the rise of meme stocks and QAnon — Or: how the tale of Slender Man can explain the rise of QAnon. … These words, posted on a forum in 2009 …
https://t.co/cgJeuxswAF?amp=1
Tomi Engdahl says:
Adam Satariano / New York Times:
YouTube’s mistaken deletion of a channel belonging to UK news outlet Novara Media draws criticism over the company’s power as a content regulator — Novara, a London news group, fell victim to YouTube’s opaque and sometimes arbitrary enforcement of its rules.
https://www.nytimes.com/2021/10/28/business/youtube-novara.html
Tomi Engdahl says:
Brandolini’s law, also known as the bullshit asymmetry principle, is an internet adage that emphasizes the difficulty of debunking false, facetious, or otherwise misleading information: “The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it.”
Source: https://artsandculture.google.com/entity/brandolini-s-law/g11ddz0b12h?hl=en
Tomi Engdahl says:
https://en.wikipedia.org/wiki/Brandolini%27s_law
Brandolini’s law, also known as the bullshit asymmetry principle, is an internet adage that emphasizes the difficulty of debunking false, facetious, or otherwise misleading information: “The amount of energy needed to refute bullshit is an order of magnitude larger than is needed to produce it.”
It was first publicly formulated in January 2013 by Alberto Brandolini, an Italian programmer.
Other notable thinkers and philosophers have noted similar truths throughout history.
Mark Twain is sometimes erroneously quoted as saying that:
It’s easier to fool people than to convince them that they have been fooled.
His actual quote, dictated for his 1906 autobiography, is:
The glory which is built upon a lie soon becomes a most unpleasant incumbrance… How easy it is to make people believe a lie, and how hard it is to undo that work again!
In 2005, Russian physicist Sergey Lopatnikov anonymously published an essay in which he introduced the following definition:
If the text of each phrase requires a paragraph (to disprove), each paragraph – a section, each section – a chapter, and each chapter – a book, the whole text becomes effectively irrefutable and, therefore, acquires features of truthfulness. I define such truthfulness as transcendental.
Tomi Engdahl says:
The BS asymmetry principle
https://sketchplanations.com/the-bs-asymmetry-principle
Also known as Brandolini’s Law, this is the simple observation that it’s far easier to produce and spread BS, misinformation and nonsense than it is to refute it.
Phil Williamson in Nature also wrote a nice article emphasizing that we should take the time and effort to correct misinformation where we can. In it, he proposed the idea that “The global scientific community could…set up its own, moderated, rating system for websites that claim to report on science. We could call it the Scientific Honesty and Integrity Tracker, and give online nonsense the SHAIT rating it deserves.”
Take the time and effort to correct misinformation
https://www.nature.com/articles/540171a
Scientists should challenge online falsehoods and inaccuracies — and harness the collective power of the Internet to fight back, argues Phil Williamson.
With the election of Donald Trump, his appointment of advisers who are on record as dismissing scientific evidence, and the emboldening of deniers on everything from climate change to vaccinations, the amount of nonsense written about science on the Internet (and elsewhere) seems set to rise. So what are we, as scientists, to do?
Tomi Engdahl says:
https://handwiki.org/wiki/Brandolini%27s_law
Tomi Engdahl says:
Facebook is blocking access to data about how much misinformation it spreads and who is affected
https://www.niemanlab.org/2021/11/facebook-is-blocking-access-to-data-about-how-much-misinformation-it-spreads-and-who-is-affected/
Simply counting instances of misinformation found on a social media platform leaves two key questions unanswered.
Leaked internal documents suggest Facebook — which recently renamed itself Meta — is doing far worse than it claims at minimizing Covid-19 vaccine misinformation on the Facebook social media platform.
Online misinformation about the virus and vaccines is a major concern. In one study, survey respondents who got some or all of their news from Facebook were significantly more likely to resist the Covid-19 vaccine than those who got their news from mainstream media sources.
As a researcher who studies social and civic media, I believe it’s critically important to understand how misinformation spreads online. But this is easier said than done.
These health misinformation sources — 82 websites and 42 Facebook pages — had an estimated total reach of 3.8 billion views in a year.
One possible denominator is 2.9 billion monthly active Facebook users, in which case, on average, every Facebook user has been exposed to at least one piece of information from these health misinformation sources.
Tomi Engdahl says:
Ditching Google Chrome was the best thing I did this year (and you should too)
Sometimes you don’t notice how bad something is until you look at it in hindsight
https://www.zdnet.com/article/ditching-google-chrome-was-the-best-thing-i-did-this-year-and-you-should-do-the-same-too/
Tomi Engdahl says:
Journalist, this is why the threat of imprisonment facing Helsingin Sanomat’s reporters is a threat to you too
https://journalisti.fi/nakokulmat/2021/10/journalisti-taman-takia-hesarin-toimittajien-vankeusuhka-on-uhka-myos-sinulle/
Three Helsingin Sanomat journalists were charged today with disclosing a national security secret and with attempted disclosure. These are serious crimes, carrying a sentence of at least four months of suspended imprisonment. The case is unique in the history of independent Finland, endangers freedom of speech and makes journalists’ work harder.
Journalists should follow the trial closely, because it will define how we can do our work in the future.
1. A journalist can end up in prison even without publishing anything.
The indictment speaks of an “attempt to disclose a security secret”. If an attempted disclosure can lead to a prison sentence, journalists’ work becomes very difficult.
2. Reporting public information can land you in court.
The Helsingin Sanomat story used documents that the Finnish Defence Forces had classified as secret. According to the newspaper, however, all of the information could also be found in public sources.
If the journalists were convicted, it would be for publishing publicly available information.
3. Authorities can classify things even more easily.
Sometimes authorities classify documents for good reason. Sometimes the reasons are bad.
Some of the material classified by authorities becomes public after a certain amount of time has passed. The Defence Forces, for example, destroy some of their material before it ever becomes public.
Finnish journalists now need to stay alert.
Tomi Engdahl says:
Eriq Gardner / The Hollywood Reporter:
Netflix faces more active libel suits than any major news outlet, partly due to its nonfiction fare, often unsuccessfully arguing that it is just a distributor — The mega-streamer is facing more defamation complaints than any major news outlet, stemming from projects like ‘Making a Murderer …
Why Suits Against Netflix Could Shake Streaming
https://www.hollywoodreporter.com/tv/tv-features/netflix-suits-streaming-1235040681/
The mega-streamer is facing more defamation complaints than any major news outlet, stemming from projects like ‘Making a Murderer’ and ‘When They See Us’ — but is it a distributor, a publisher or something else entirely?
Tomi Engdahl says:
Matt Burgess / Wired:
A look at the fight between city officials and parents over Stockholm’s glitchy app for its schools, as annoyed parents built their own open source version — Stockholm’s official app was a disaster. So annoyed parents built their own open source version—ignoring warnings that it might be illegal.
https://www.wired.com/story/sweden-stockholm-school-app-open-source/
The Skolplattform wasn’t meant to be this way. Commissioned in 2013, the system was intended to make the lives of up to 500,000 children, teachers, and parents in Stockholm easier—acting as the technical backbone for all things education, from registering attendance to keeping a record of grades. The platform is a complex system that’s made up of three different parts, containing 18 individual modules that are maintained by five external companies. The sprawling system is used by 600 preschools and 177 schools, with separate logins for every teacher, student, and parent. The only problem? It doesn’t work.
The Skolplattform, which has cost more than 1 billion Swedish Krona, SEK, ($117 million), has failed to match its initial ambition. Parents and teachers have complained about the complexity of the system—its launch was delayed, there have been reports of project mismanagement, and it has been labelled an IT disaster. The Android version of the app has an average 1.2 star rating.
On October 23, 2020, Landgren, a developer and the CEO of Swedish innovation consulting firm Iteam, tweeted a hat design emblazoned with the words “Skrota Skolplattformen”—loosely translated as “trash the school platform.” He joked he should wear the hat when he picks his children up from school. Weeks later, wearing that very hat, he decided to take matters into his own hands. “From my own frustration, I just started to create my own app,” Landgren says.
He wrote to city officials asking to see the Skolplattform’s API documents. While waiting for a response, he logged into his account and tried to work out whether the system could be reverse-engineered. In just a few hours, he had created something that worked. “I had information on my screen from the school platform,” he says. “And then I started building an API on top of their lousy API.”
The work started at the end of November 2020, just days after Stockholm’s Board of Education was hit with a 4 million SEK GDPR fine for “serious shortcomings” in the Skolplattform. Integritetsskyddsmyndigheten, Sweden’s data regulator, had found serious flaws in the platform that had exposed the data of hundreds of thousands of parents, children, and teachers. In some cases, people’s personal information could be accessed from Google searches. (The flaws have since been fixed and the fine reduced on appeal.)
In the weeks that followed, Landgren teamed up with fellow developers and parents Johan Öbrink and Erik Hellman, and the trio hatched a plan. They would create an open source version of the Skolplattform and release it as an app that could be used by frustrated parents across Stockholm. Building on Landgren’s earlier work, the team opened Chrome’s developer tools, logged into the Skolplattform, and wrote down all the URLs and payloads. They took the code, which called the platform’s private API and built packages so it could run on a phone—essentially creating a layer on top of the existing, glitchy Skolplattform.
The result was the Öppna Skolplattformen, or Open School Platform. The app was released on February 12, 2021, and all of its code is published under an open source license on GitHub. Anyone can take or use the code
But rather than welcome it with open arms, city officials reacted with indignation. Even before the app was released, the City of Stockholm warned Landgren that it might be illegal.
In the eight months that followed, Stockholms Stad, or the City of Stockholm, attempted to derail and shut down the open source app.
Officials reported the app to data protection authorities and, Landgren claims, tweaked the official system’s underlying code to stop the spin-off from operating at all.
Then, in April, the City announced it was getting the police involved. Officials claimed the app and its cofounders may have committed a criminal data breach and asked cybercrime investigators to look into how the app worked.
The €1 app has been downloaded around 12,500 times on iPhone and Android (with a 4.2-star average rating) and only shows basic information.
Parents log in using the Swedish digital identity system BankID, which is also used by the Skolplattform. They can then see information about their children that’s pulled into the app through the Skolplattform API.
“Everything that we display is open and public information,” says Öbrink, one of Öppna Skolplattformen’s cofounders. He explains that when students’ grades are shown, they are displayed through an in-app browser where the app can’t access any data.
“We never anticipated that it would work as well as it did.” He says the Öppna Skolplattformen team held meetings with the city in which they said officials could take their code and use their version of the app. “They did not want to collaborate or even discuss collaboration with us, they just went on and reported us to the police,” he says.
The City of Stockholm was unsure about Öppna Skolplattformen from the beginning. “We do not have open APIs, so they have made their own solution,”
Mossberg, speaking before the unofficial app launched, said it may be “illegal” because people’s personal data was involved. Although Mossberg claimed to be generally positive about the app, she said a “rigorous” investigation was being launched.
In mid-February, Swedish security firm Certezza completed an external audit of the app—the report was not published, despite Sweden’s strong transparency laws. In order to access the document, the Öppna Skolplattformen team challenged the nondisclosure in court.
Three weeks later, at the end of February, the stakes were raised. The city said it was making security updates to the Skolplattform to stop any potential personal data from being accessed—effectively shutting down Öppna Skolplattformen’s home-brewed API. The city’s action started a tug-of-war between the two sides: The Skolplattform would be updated; Öppna Skolplattformen would respond with its own updates.
Lena Holmdahl, director of education at the City of Stockholm, says the city acted in line with its responsibilities to its suppliers, students, and employees.
“We have responsibilities that we try to perform in accordance with the agreements, laws, and regulations we are obliged to follow.”
In early April, the city asked the developers to unpublish their source code from GitHub.
“They wrote the police report in a way that was supposed to look scary,” Landgren says. In the following weeks, cybercrime investigators came to his house and interviewed him about the open source app—a process Landgren says caused him to doubt the work the team had done. “You have to make a decision at that point on what you’re trying to do,” he says. Ultimately he continued to work on the project—along with an expanding team—as they believed it was the right thing to do.
They also raised potential security issues with the official app, even as the city worked against them. The team includes designers, lawyers, and developers. “As private citizens, we are highly digitalized,” Landgren says.
“To bridge that gap we, and a lot of other people that joined us, think that open source is probably the best way for us to start collaborating.” He argues that citizen development can be more effective than costly and often botched government IT projects that take years to complete and are out of date by the time they are completed.
“It shows very clearly some of the ways in which Sweden’s digitalization has gone wrong,” says Mattias Rubenson, the secretary of the Swedish branch of the Pirate Party, which has been chronicling the problems it has with the Skolplattform. “There is, in general, the possibility of a school platform being good. But you have to involve students, and especially teachers, in the development from the start. There has been none of that in the School Platform.”
Öppna Skolplattformen had to wait months to be cleared. “We do not believe that anything criminal has been committed.”
Data regulator Integritetsskyddsmyndigheten did not open an investigation into the city’s complaint, a spokesperson says.
The review concluded that the open source app wasn’t sending any sensitive information to third parties and didn’t pose a threat to users. The police report went further in clearing the Öppna Skolplattformen developers. “All information that Öppna Skolplattformen has used is public information that the City of Stockholm voluntarily distributed,” it said.
LANDGREN WAS TRAVELING to his brother’s wedding in France at the start of September when he got the phone call. The city was changing its position on Öppna Skolplattformen—and any other apps seeking to do similar things—and decided to let others access the data within its systems. To do so, the city struck a deal with an external provider that will be able to set up licenses between Öppna Skolplattformen and the city.
“With this solution, the City of Stockholm can guarantee that personal data is handled in a correct and secure way, while parents can take part in the market’s digital tools in their everyday lives.”
The move was validation of Öppna Skolplattformen’s efforts.
Landgren now hopes Öppna Skolplattformen will be able to strike a deal with the City of Stockholm that will result in the city paying for a license to the app. The aim is for it to be made free for all parents. “It’s going to look a lot like [the city] buying Microsoft Office,” Landgren says. “A typical license deal.” If the deal can be struck—the details and numbers are still being negotiated—Öppna Skolplattformen volunteers will be paid for their contributions, he says.
Holmdahl, from the city’s education board, admits that the app could be easier for parents to use—although she points out that, unlike the unofficial app, it has to work for teachers and students as well.
“User-driven IT development is interesting but must work together with legislation and responsibility for secure personal data,” she says. Holmdahl maintains the city has always had a license agreement that people could use to get personal data but that there was no license provider at the time Öppna Skolplattformen started.
Ultimately, Landgren hopes the Öppna Skolplattformen saga will teach politicians and city officials that the technology they provide for citizens shouldn’t be procured as huge IT projects—and that the people who will end up using it should be involved in the planning and development.
Tomi Engdahl says:
Sarah Gooding / WP Tavern:
A recently unredacted antitrust complaint alleges that Google gives AMP a “nice comparative boost” by throttling load times of non-AMP ads via one-second delays.
AMP Has Irreparably Damaged Publishers’ Trust in Google-led Initiatives
https://wptavern.com/amp-has-irreparably-damaged-publishers-trust-in-google-led-initiatives
The Chrome Dev Summit concluded earlier this week. Announcements and discussions on hot topics impacting the greater web community at the event included Google’s Privacy Sandbox initiative, improvements to Core Web Vitals and performance tools, and new APIs for Progressive Web Apps (PWAs).
Paul Kinlan, Lead for Chrome Developer Relations, highlighted the latest product updates on the Chromium blog, along with what he identified as Google’s “vision for the web’s future and examples of best-in-class web experiences.”
During a live Q&A (AMA) session with Chrome leadership, ex-AMP Advisory Board member Jeremy Keith asked a question that echoes the sentiments of developers and publishers all over the world who are viewing Google’s leadership and initiatives with growing skepticism:
Given the court proceedings against AMP, why should anyone trust FLoC or any other Google initiatives ostensibly focused on privacy?
The question drew a tepid response from Chrome leadership, who avoided giving a straight answer.
FLoC continues to be a controversial initiative, opposed by many major tech organizations. A group of like-minded WordPress contributors proposed blocking Google’s initiative earlier this year. Privacy advocates do not believe FLoC to be a compelling alternative to the surveillance business model currently used by the advertising industry. Instead, they see it as an invitation to cede more control of ad tech to Google.
Despite the developer community’s waning trust in the company, Google continues to aggressively advocate for a number of controversial initiatives, even after some of them have landed the company in legal trouble.
The complaint alleges that “Google ad server employees met with AMP employees to strategize about using AMP to impede header bidding, addressing in particular how much pressure publishers and advertisers would tolerate.”
In summary, it claims that Google falsely told publishers that adopting AMP would enhance load times, even though the company’s employees knew that it only improved the “median of performance” and actually loaded slower than some speed optimization techniques publishers had been using. It alleges that AMP pages brought 40% less revenue to publishers. The complaint states that AMP’s speed benefits “were also at least partly a result of Google’s throttling. Google throttles the load time of non-AMP ads by giving them artificial one-second delays in order to give Google AMP a ‘nice comparative boost.‘”
Once AMP was no longer required and publishers could use any technology to rank in Top Stories, the percentage of non-AMP pages increased significantly to double digits, where it remains today.
“But I’m angry. Because it means that for more than five long years, when AMP was a mobile Top Stories requirement, Google penalised these publishers for not using AMP.”
Even the publishers who adopted AMP struggled to get ad views. In 2017, Digiday reported on how many publishers have experienced decreased revenues associated with ads loading much slower than the actual content. I don’t think anyone at the time imagined that Google was throttling the non-AMP ads.
“The aim of AMP is to load content first and ads second,” a Google spokesperson told Digiday. “But we are working on making ads faster. It takes quite a bit of the ecosystem to get on board with the notion that speed is important for ads, just as it is for content.”
This is why Google is rapidly losing publishers’ trust. For years the company encumbered already struggling news organizations with the requirement of AMP. The DOJ’s detailed description of how AMP was used as a vehicle for anticompetitive practices simply rubs salt in the wound after what publishers have been through in expending resources to support AMP versions of their websites.
Automattic Denies Prior Knowledge of Google Throttling Non-AMP Ads
In 2016, Automattic, one of the most influential companies in the WordPress ecosystem, partnered with Google to promote AMP as an early adopter. WordPress.com added AMP support and Automattic built the first versions of the AMP plugin for self-hosted WordPress sites. The company has played a significant role in driving AMP adoption forward, giving it an entrance into the WordPress ecosystem.
How much did Automattic know when it partnered with Google in the initial AMP rollout?
“As part of our mission to make the web a better place, we are always testing new technologies including AMP,” an official spokesperson for Automattic said.
This may be true, but Automattic has done more than simply test the new technology. In partnering with Google, it has been instrumental in making AMP easier for WordPress users to adopt.
“We received no funds from Google for the project,” the spokesperson added. “We chose to partner with Google because we believed that we had a shared vision of advancing the open web. Additionally, we wanted to offer the benefit of the latest technology, including AMP, to WordPress users and publishers.”
Tomi Engdahl says:
Nginx vs Apache: Web Server Showdown
https://kinsta.com/blog/nginx-vs-apache/
The internet, as we know it today, started its global “conquest” in the ’90s. The whole “Web” protocol can be summed up as a visitor requesting a document from a given web address, with the DNS and IP systems forwarding that request to the right computer. That computer, which hosts the requested web page, then “serves” it back to the visitor.
Nginx vs Apache
Nginx and Apache are popular web servers used to deliver web pages to a user’s browser; in our case, from a hosted WordPress site. Quick stats:
Apache was released first in 1995, then came Nginx in 2004.
Both are used by large Fortune 500 companies around the globe.
Nginx market share has been steadily growing for years.
In some instances, Nginx has a competitive edge in terms of performance.
Apache’s huge market share is partly due to the fact that it comes pre-installed on all major Linux distributions, like Red Hat/CentOS and Ubuntu.
One example of Apache’s important role in the Linux world is that its server process is named httpd, which has made “Apache” practically synonymous with web server software.
Besides being the first serious player in the web server market, Apache owes part of its proliferation to its configuration system and its .htaccess file.
Nginx
Nginx (also written as nginx or NGINX) came on the scene in 2004, when it was first publicly released by Russian developer Igor Sysoev. As Owen Garrett, Nginx’s project manager, said:
“Nginx was written specifically to address the performance limitations of Apache web servers.”
The server was first created as a scaling tool for the website rambler.ru in 2002. It comes in two versions: open source, with a BSD-type license, and Nginx Plus, with support and additional enterprise features.
After it was released, Nginx was used mostly to serve static files and to act as a load balancer or reverse proxy in front of Apache installations. As the web evolved, and with it the need to squeeze every last drop of speed and hardware efficiency out of servers, more websites started to replace Apache with Nginx entirely, helped along by Nginx’s increasingly mature software.
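To make that deployment pattern concrete, here is a minimal Nginx configuration sketch of the classic “Nginx in front of Apache” setup. It is only an illustration: the domain name, the /static/ path, and the Apache backend listening on 127.0.0.1:8080 are assumptions for the example, not details from the article.

    # Goes inside the http { } block of nginx.conf (assumed layout).
    server {
        listen 80;
        server_name example.com;   # hypothetical domain

        # Nginx serves static assets directly, which is its traditional strength.
        location /static/ {
            root /var/www/example;
        }

        # Everything else is reverse-proxied to the Apache backend.
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

In this arrangement Apache keeps handling the dynamic application (and its .htaccess-based configuration), while Nginx absorbs the connection handling and static-file traffic; that division of labor is what many sites later dropped when they moved to Nginx alone.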
In March 2019, Nginx Inc. was acquired by F5 Networks for $670 million. At that point, as TechCrunch reported, the Nginx server was powering “375 million websites with some 1,500 paying customers”.
Tomi Engdahl says:
Nick Robins-Early / VICE:
Researchers and journalists in Ethiopia say Meta has done little to stop hate speech amid Ethiopia’s civil war, claiming moderation often falls to volunteers.
How Facebook Is Stoking a Civil War in Ethiopia
https://www.vice.com/en/article/qjbpd7/how-facebook-is-stoking-a-civil-war-in-ethiopia
Online hate is adding fuel to the country’s deadly conflict, and researchers say Facebook is failing to stop it.