Security for the ‘Internet of Things’

The Security for the ‘Internet of Things’ (Video) posting on Slashdot provides one view of Internet of Things security. What happens when your oven is on the Internet? A malicious hacker might be able to get it so hot that it could start a fire. Or a prankster might set off your alarm in the middle of the night. A hacker could use your wireless security camera to break into your home network. Watch the video at the Security for the ‘Internet of Things’ (Video) page (or read the transcript) to get an idea of what can happen and how to protect against it. Remember: there are always going to be things that break.

Mark: “So I think a lot of the system-on-chips that we’re seeing that are actually going into Internet of Things devices, a lot of companies are coming up with them; take an Arduino or Raspberry Pi, very cool chipsets, very easy to deploy and build on. We’re seeing smaller and smaller scales of those, which actually enable engineers to put those into small little shells. We are obviously kind of at this early part of 3D printing. So your ability to manufacture an entire device for a couple of bucks is becoming a reality, and obviously if you have a really niche product that might be really popular on Kickstarter, you could actually deploy tens of thousands of those with a successful crowd-funding campaign and never really know about the actual security of that product before it goes to market.”

  1. Tomi Engdahl says:

    Why Car Info Tech Is So Thoroughly At Risk
    http://tech.slashdot.org/story/15/08/23/2353239/why-car-info-tech-is-so-thoroughly-at-risk

    Cory Doctorow reflects in a post at Boing Boing on the many ways in which modern cars’ security infrastructure is a white-hot mess. And as to the reasons why, this seems to be the heart of the matter, and it applies to much more than cars:
    [M]anufacturers often view bugs that aren’t publicly understood as unimportant, because it costs something to patch those bugs, and nothing to ignore them, even if those bugs are exploited by bad guys

    There is a sociopathic economic rationality to silencing researchers who come forward with bugs.

    Car information security is a complete wreck — here’s why
    http://boingboing.net/2015/08/23/car-information-security-is-a.html?utm_content=buffer6d5bf&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer

    Sean Gallagher’s long, comprehensive article on the state of automotive infosec is a must-read for people struggling to make sense of the summer’s season of showstopper exploits for car automation, culminating in a share-price-shredding 1.4M unit recall from Chrysler, whose cars could be steered and braked by attackers over the Internet.

    All complex systems have bugs. Even well-audited systems have bugs lurking in them (cough openssl cough). Mission-critical systems whose failings can be weaponized by attackers to wreak incredible mischief are deeply, widely studied, meaning that the bugs in the stuff you depend on are likely being discovered by people who want to hurt you, right now, and turned into weapons that can be used against you. Yes, you, personally, Ms/Mr Nothing To Hide, because you might be the target of opportunity that the attacker’s broad scan of IP addresses hit on first, and the software your attacker wrote is interested in pwning everything, regardless of who owns it.

    The only defense is to have those bugs discovered by people who want to help you, and who then report them to manufacturers. But manufacturers often view bugs that aren’t publicly understood as unimportant, because it costs something to patch those bugs, and nothing to ignore them, even if those bugs are exploited by bad guys, because the bad guys are going to do everything they can to keep the exploit secret so they can milk it for as long as possible, meaning that even if your car is crashed (or bank account is drained) by someone exploiting a bug that the manufacturer has been informed about, you may never know about it. There is a sociopathic economic rationality to silencing researchers who come forward with bugs.

    In the computer world, the manufacturers have largely figured out that threatening researchers just makes their claims more widely known (the big exceptions are Oracle and Cisco, but everyone knows they’re shitty companies run by assholes).

    The car industry is nearly entirely run by Oracle-grade assholes. GM, for example, says that your car is a copyrighted work and that researching its bugs is a felony form of piracy. Chrysler was repeatedly informed about its showstopper, 1.4M-car-recalling bug, and did nothing about it until it was front-page news. Volkswagen sued security researchers and technical organizations over disclosure of major bugs in VW’s keyless entry system. Ford claims that its cars are designed with security in mind, so we don’t have to worry our pretty little heads about them.

    None of this stops bad guys from learning about the bugs in these systems — it just stops you

    Reply
  2. Tomi Engdahl says:

    Highway to hack: why we’re just at the beginning of the auto-hacking era
    A slew of recently-revealed exploits show gaps in carmakers’ security fit and finish.
    http://arstechnica.com/security/2015/08/highway-to-hack-why-were-just-at-the-beginning-of-the-auto-hacking-era/

    Imagine it’s 1995, and you’re about to put your company’s office on the Internet. Your security has been solid in the past—you’ve banned people from bringing floppies to work with games, you’ve installed virus scanners, and you run file server backups every night. So, you set up the Internet router and give everyone TCP/IP addresses. It’s not like you’re NASA or the Pentagon or something, so what could go wrong?

    That, in essence, is the security posture of many modern automobiles—a network of sensors and controllers that have been tuned to perform flawlessly under normal use, with little more than a firewall (or in some cases, not even that) protecting it from attack once connected to the big, bad Internet world. This month at three separate security conferences, five sets of researchers presented proof-of-concept attacks on vehicles from multiple manufacturers plus an add-on device that spies on drivers for insurance companies, taking advantage of always-on cellular connectivity and other wireless vehicle communications to defeat security measures, gain access to vehicles, and—in three cases—gain access to the car’s internal network in a way that could take remote control of the vehicle in frightening ways.

    While the automakers and telematics vendors with targeted products were largely receptive to this work—in most cases, they deployed fixes immediately that patched the attack paths found—not everything is happy in auto land. Not all of the vehicles that might be vulnerable (including vehicles equipped with the Mobile Devices telematics dongle) can be patched easily. Fiat Chrysler suffered a dramatic stock price drop when video of a Jeep Cherokee exploit (and information that the bug could affect more than a million vehicles) triggered a large-scale recall of Jeep and Dodge vehicles.

    And all this has played out as the auto industry as a whole struggles to understand security researchers and their approach to disclosure—some automakers feel like they’re the victim of a hit-and-run.

    Reply
  3. Tomi Engdahl says:

    Samsung smart fridge leaves Gmail logins open to attack
    Failures in exploit discovery process are cold comfort for IoT fridge owners
    http://www.theregister.co.uk/2015/08/24/smart_fridge_security_fubar/

    Security researchers have discovered a potential way to steal users’ Gmail credentials from a Samsung smart fridge.

    Pen Test Partners discovered the MiTM (man-in-the-middle) vulnerability that facilitated the exploit during an IoT hacking challenge run by Samsung at the recent DEF CON hacking conference.

    The hack was pulled off against the RF28HMELBSR smart fridge, part of Samsung’s line-up of Smart Home appliances which can be controlled via their Smart Home app. While the fridge implements SSL, it fails to validate SSL certificates, thereby enabling man-in-the-middle attacks against most connections.

    The internet-connected device is designed to download Gmail Calendar information to an on-screen display. Security shortcomings mean that hackers who manage to jump on to the same network can potentially steal Google login credentials from their neighbours.

    “The internet-connected fridge is designed to display Gmail Calendar information on its display,” explained Ken Munro, a security researcher at Pen Test Partners. “It appears to work the same way that any device running a Gmail calendar does. A logged-in user/owner of the calendar makes updates and those changes are then seen on any device that a user can view the calendar on.”

    “While SSL is in place, the fridge fails to validate the certificate. Hence, hackers who manage to access the network that the fridge is on (perhaps through a de-authentication and fake Wi-Fi access point attack) can Man-In-The-Middle the fridge calendar client and steal Google login credentials from their neighbours, for example.”
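
    The core flaw here is a TLS client that encrypts but never checks who it is talking to. As a hedged illustration (the fridge's actual client code is not public, and the hostname below is a placeholder), this is roughly what proper certificate and hostname validation looks like with Python's ssl module; the reported bug is equivalent to turning both checks off:

    ```python
    import socket
    import ssl

    # Hypothetical calendar endpoint; the fridge's real client and hostnames are not public.
    HOST = "calendar.example.com"
    PORT = 443

    # create_default_context() verifies the server's certificate chain and hostname.
    # The reported flaw is equivalent to setting check_hostname = False and
    # verify_mode = ssl.CERT_NONE, which lets a man-in-the-middle present any certificate.
    context = ssl.create_default_context()
    context.check_hostname = True            # already the default, shown for emphasis
    context.verify_mode = ssl.CERT_REQUIRED  # already the default, shown for emphasis

    with socket.create_connection((HOST, PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            print("Negotiated:", tls_sock.version(), tls_sock.cipher())
    ```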

    Reply
  4. Tomi Engdahl says:

    Hacked Jeep: Whom to Blame?
    http://www.eetimes.com/document.asp?doc_id=1327266&

    So, where, exactly, did hackers find a crack in the firewall of a 2014 Jeep Cherokee? How did they infiltrate it and who’s at fault for failing to foresee the breach?

    The failure apparently occurred in not one, but multiple places in the connected car’s system architecture. Blame, according to multiple automotive industry analysts, could also extend to parties beyond Fiat Chrysler Automobiles (FCA). They include Sprint — a system integrator — with whom Chrysler contracted for secure vehicle network access via the telematics control unit, and Harman Kardon, who designed an in-vehicle infotainment system.

    Since two hackers revealed a week ago their handiwork of wirelessly hacking into a 2014 Jeep Cherokee, first reported by Wired, the issue of cyber security in vehicles has come into sharp focus. Until this incident, the conventional wisdom among engineers was that it’s “not possible” to hack into a car without physical access.

    The revelation by the hacker team, Charlie Miller and Chris Valasek, set in motion a sweeping recall, on July 24th, of 1.4 million vehicles by Fiat Chrysler. U.S. Senators Ed Markey and Richard Blumenthal also introduced last week legislation to require U.S.-sold cars to meet certain standards of protection against digital attacks.

    However, Roger Lanctot, associate director, global automotive practice at Strategy Analytics, is the first analyst to publicly implicate Sprint. He wrote in his latest blog:

    FCA’s Chrysler division is taking the fall for Sprint’s failure to properly secure its network and the Jeep in question – which was subjected to some comical and terrifying remote control in real-time on the highway thanks to an IP address vulnerability.

    Breakdown of security vulnerability
    Asked to break down the security vulnerability of the hacked car, Lanctot said: “Step one is control of braking, acceleration and steering accessible on the vehicle CAN bus.

    “Step two is remote wireless connectivity to the car via cellular.

    “Step three is providing for remote access to the CAN bus via the telematics control unit interface. Clearly, the FCA systems were configured in such a way as to allow for CAN bus access via the telematics control unit.”

    Lanctot added, “There is nothing wrong with that as long as you provide for appropriate security.”

    Lanctot, however, pointed out, “It appears that the IP address was too easily identified” by the system used by Jeep Cherokee and “the telematics control unit lacked basic software upgrading capability.”

    Lanctot isn’t alone in fingering the IP address issue. Egil Juliussen, director of research & principal analyst at IHS Automotive Technology, also told us that the hackers appear to have found “a simple way to get the IP address of a car.” Juliussen explained that once the hackers located the car, they sent code to the infotainment system (built by Harman Kardon) via the ill-gotten IP address.

    Juliussen theorized that the hackers then wrote additional code and sent it via CAN bus to the core auto ECU networks to disable mission-critical functions such as engine and brakes.
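
    A minimal sketch, assuming a Linux host with a SocketCAN interface and the python-can library, of why frames injected onto a CAN bus are acted upon: classic CAN has no sender authentication, so any node that can transmit a well-formed frame is trusted. The arbitration ID and payload below are made-up placeholders, not real ECU commands:

    ```python
    import can  # python-can; assumes a Linux SocketCAN interface named can0

    # Classic CAN frames carry only an arbitration ID and up to 8 data bytes.
    # There is no sender authentication, so ECUs act on any well-formed frame
    # that reaches their bus segment.
    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    # Placeholder ID and payload, not a real ECU command.
    frame = can.Message(arbitration_id=0x123, data=[0x01, 0x02, 0x03, 0x04])
    bus.send(frame)
    print("Frame sent; any node listening for this ID will treat it as legitimate.")
    ```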

    What about isolation?
    Wait. Isn’t the infotainment system supposed to be isolated from mission-critical functions? The “strong isolation” of the two systems is a mantra we hear often when we ask automakers about security in connected cars.

    The trouble is that a vehicle’s on-board diagnostics (OBD-II) port is connected not just to core ECU networks but also to the infotainment system, explained Juliussen, so that automakers can monitor the infotainment equipment. “Chances are that there are CAN bus bridges between the two separate systems.”

    Juliussen made it clear that the hacking Miller and Valasek pulled off in the Jeep Cherokee is not exactly child’s play.

    Nonetheless, it’s clear that there have been flaws in network security traceable to Sprint, and in the way Harman Kardon’s infotainment system was set up in a vehicle Chrysler’s engineers designed, according to Juliussen.

    Juliussen previously told EE Times, “Cyber-security is one of the biggest problems the auto industry faces” and warned that “we’re kind of late [on that].” He sees a silver lining. Now every carmaker building connected cars is going back and reviewing all its connected security.

    Each party – from Chrysler to Harman Kardon and Sprint – must have checked that each system they were responsible for designing was functioning correctly. That’s a given. But in order to check the system’s security, designers are now being asked to “break something,” explained Juliussen, to see if any out-of-spec operations (outside of normal arrangement of operations) can be exploited by hackers.

    Juliussen said that when the Jeep Cherokee was developed four years ago, cyber security wasn’t nearly the industry’s top priority. It took many years “for the PC industry to learn the security issues; the smartphone vendors are learning it now. And it’s time for automakers to catch up.”

    Lanctot also noted, “This is early days, so maybe the lack of an intrusion detection system can be forgiven.” But he stressed that the basic elements of security are to “have a dynamically changing IP address along with some kind of firewall,” in addition to “intrusion detection on the vehicle network.”

    In his view, Sprint not only failed to dynamically change IP, but also offered no ability to update/upgrade the telematics control unit for bug fixes, content updates, or to update network connectivity firmware.

    Indeed, although FCA made software updates for the infotainment system in response to the hack, the patch is not easily implemented. Car owners will have to perform a manual update via a USB stick or a visit to a dealer’s service center.

    Just two years ago, when Sprint announced its Velocity system for “New and Existing Telematics and In-Vehicle Communications Systems,” the company wrote on its website:

    “With years of mobile customer experience and telecommunications knowledge, Sprint is a solutions provider you can depend on to address today’s technology and prepare your business for tomorrow’s innovation.”

    Reply
  5. Tomi Engdahl says:

    This hospital drug pump can be hacked over a network – and the US FDA is freaking out
    Doctors told to stop using kit as open ports put patients at risk
    http://www.theregister.co.uk/2015/08/01/fda_hospitals_hospira_pump_hacks/

    The US Food and Drug Administration has told healthcare providers to stop using older drug infusion pumps made by medical technology outfit Hospira – because they can be easily hacked over a network.

    “Hospira and an independent researcher confirmed that Hospira’s Symbiq Infusion System could be accessed remotely through a hospital’s network. This could allow an unauthorized user to control the device and change the dosage the pump delivers, which could lead to over- or under-infusion of critical patient therapies,” the FDA said.

    “Hospira has discontinued the manufacture and distribution of the Symbiq Infusion System, due to unrelated issues, and is working with customers to transition to alternative systems. However, due to recent cybersecurity concerns, the FDA strongly encourages health care facilities to begin transitioning to alternative infusion systems as soon as possible.”

    It appears from the advisory that both the FTP and telnet ports (ports 20 and 23, respectively) were left open on the drug pumps, and will need to be closed. Also, port 8443 ships with a default login password, and the FDA advises hospitals to change it as soon as possible.
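
    Finding services like these requires nothing more than a TCP connect scan. A minimal sketch using only Python's standard library; the address below is a documentation-range placeholder, not a real pump:

    ```python
    import socket

    # Placeholder address; the point is only how little effort finding exposed services takes.
    HOST = "192.0.2.10"
    PORTS = {20: "FTP", 23: "Telnet", 8443: "web admin interface"}

    for port, name in PORTS.items():
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(2)
        try:
            is_open = sock.connect_ex((HOST, port)) == 0
        finally:
            sock.close()
        print(f"{name:20} port {port:5}: {'OPEN' if is_open else 'closed/filtered'}")
    ```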

    Reply
  6. Tomi Engdahl says:

    Prevalence of IoT may leave networks vulnerable to attacks
    http://www.controleng.com/single-article/prevalence-of-iot-may-leave-networks-vulnerable-to-attacks/b165aa3784947f73184c2604e08aa048.html

    Devices that use the Internet of Things (IoT) are prevalent in highly regulated industries and the infrastructure supporting those devices is vulnerable to security flaws, according to a recent study.

    IoT devices are connecting to corporate networks, but are not up to the same security standards as other connections, according to “The 2015 Internet of Things in the Enterprise Report,” a global data-driven security assessment of IoT devices and infrastructure found in businesses from OpenDNS. Using data from the billions of Internet requests routed through OpenDNS’s global network daily, the report details the scale to which IoT devices are in enterprise environments and uncovers specific security risks associated with those devices.

    The report, authored by OpenDNS director of security research Andrew Hay, includes:

    IoT devices are actively penetrating some of the world’s most regulated industries including healthcare, energy infrastructure, government, financial services, and retail.
    There are three principal risks IoT devices present to the enterprise: IoT devices introduce new avenues for potential remote exploitation of enterprise networks; the infrastructure used to enable IoT devices is beyond the user and IT’s control; and IT’s often casual approach to IoT device management can leave devices unmonitored and unpatched.
    Some networks hosting IoT data are susceptible to highly-publicized and patchable vulnerabilities such as FREAK and Heartbleed.
    Highly prominent technology vendors are operating their IoT platforms in known “bad Internet neighborhoods,” which places their users at risk.
    Consumer devices such as Dropcam Internet video cameras, Fitbit wearable fitness devices, Western Digital “My Cloud” storage devices, various connected medical devices, and Samsung Smart TVs continuously beacon out to servers in the U.S., Asia, and Europe—even when not in use.
    Though traditionally thought of as local storage devices, Western Digital cloud-enabled hard drives are now some of the most prevalent IoT endpoints observed. These devices are actively transferring data to insecure cloud servers.
    A survey of more than 500 IT and security professionals found 23% of respondents have no mitigating controls in place to prevent someone from connecting unauthorized devices to their company’s networks.

    “This report shows conclusively that IoT devices are making their way into our corporate networks, but are not up to the same security standards to which we hold enterprise endpoints or infrastructure,”

    Reply
  7. Tomi Engdahl says:

    Cyber Insecurity
    http://www.eetimes.com/author.asp?section_id=216&doc_id=1327456&

    In these uncertain times, designers have to consider security at every point in the system, because each system is only as secure as its weakest link.

    We are certainly living in interesting times. Over the years I’ve read a lot of science fiction stories that depicted various flavors of the future, many of which involved the concept of cyber security and nefarious strangers trying to access one’s data.

    Generally speaking, this sort of thing really didn’t affect most of us until relatively recently in the scheme of things. How things have changed. Now it seems that we hear about data breaches on an almost daily basis, many of which can put their victims at risk of identity theft.

    In 2013, for example, we discovered that hackers had managed to steal the credit and debit card information (including names, addresses, and phone numbers) associated with more than 70 million customers.

    Meanwhile, in 2014, I was informed that hackers had managed to access tens of millions of records from my health insurance company.

    I heard a report on National Public Radio (NPR) that hackers have just posted the data they stole from a company/website called Ashley Madison.

    Apparently, the data released by the hackers includes the names, addresses, and phone numbers associated with the users of the site. Also, I hear that ~15,000 of these records have .mil or .gov email addresses.

    The real problem is that we still don’t seem to take security seriously. In the case of my health insurance company, for example, we came to discover that they had taken such minimalist precautions as to make one shake one’s head in disbelief.

    And things are only going to get worse, which means that the designers of today’s electronic, computer, and embedded systems have to consider security at every point in the system — from the leaf nodes at the edge of the Internet of Things (IoT) to the mega servers in the cloud — because each system is only as secure as its weakest link.

    “But where can we learn about this stuff?” you cry.

    Reply
  8. Tomi Engdahl says:

    Strong ARM scoops up Sansa to boost IoT security
    Chipmaker adds Israeli company’s bolt-on protection to its bulging armoured sack
    http://www.theregister.co.uk/2015/07/30/arm_buys_iot_firm_sansa_security/

    Chipmaker ARM has sealed a deal to buy Israeli Internet of Things (IoT) security specialist Sansa Security. Financial terms of the deal, announced Thursday, were not officially disclosed. However, the WSJ previously reported that around $75m-$85m was on the table.

    ARM makes the chips that power the majority of the world’s smartphones. The Sansa acquisition will allow it to add hardware and software-based security features, boosting protection for sensitive data and content on any connected device.

    Sansa’s technology is already deployed across a range of smart connected devices and enterprise systems. The company was previously known as Discretix, prior to rebranding last October, and specialised in embedded security technologies.

    The deal complements the ARM security portfolio, including ARM TrustZone technology and SecurCore processor IP.

    “Any connected device could be a target for a malicious attack, so we must embed security at every potential attack point,” said Mike Muller, CTO of ARM in a statement. “Protection against hackers works best when it is multi-layered, so we are extending our security technology capability into hardware subsystems and trusted software. This means our partners will be able to license a comprehensive security suite from a single source.”

    Sansa offers a complete hardware subsystem designed to isolate security sensitive operations from the main application processor. Its IoT security platform has a mobile component and a capability to work across cloud-based systems.

    Given the well-documented security issues of IoT devices, ARM and Sansa are sowing seeds on fertile ground.

    Reply
  9. Tomi Engdahl says:

    Zigbee: Protect the Keys to Security
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1327539&

    Consumers need to feel confident that the smart home will not become a Trojan horse for malware.

    Risk of insecure storage of keys
    The Zigbee specification mentions “the level of security provided by the Zigbee security architecture depends on the safekeeping of the symmetric keys…” Furthermore in that section it states “due to the low-cost nature of ad hoc network devices, one cannot generally assume the availability of tamper resistant hardware. Hence, physical access to a device may yield access to secret keying material and other privileged information, as well as access to the security software and hardware.”

    To eliminate this issue we recommend the use of a secure authentication microcontroller, aka secure element, such as (yes, product placement) our A700x family. By storing link keys and network keys in a dedicated secure element that is connected to your Zigbee SoC via a simple I2C interface, you can eliminate the risk associated with physical tampering. There are over 100 security features and countermeasures in our secure elements that can protect from invasive and side channel attacks.

    Risk of insecure key transport
    The Zigbee specification goes on to discuss key transport. It mentions when devices are not preconfigured with specific network keys “a single key may be sent unprotected, thus resulting in a brief moment of vulnerability where the key could be obtained by any device. This can lead to a critical security compromise if it is possible for an untrusted device to obtain the key.” As an alternative to this risky in-band commissioning of new devices onto the network, we recommend the use of NFC for out-of-band commissioning.

    Key disposal
    Decommissioning is as important as onboarding. Prior to disposal of unwanted devices, it’s important to delete any network information. Via NFC, it’s possible to tap and remove any sensitive network information so the device can be safely discarded without concern of key exposure.

    Keys are key
    Encrypted networks like Zigbee are built on keys used for encryption and decryption, which can become a vulnerability if those keys aren’t protected. NFC provides secure communication via proximity, and secure elements provide secure storage. By using these technologies, sensitive key information is never exposed, improving the integrity and overall security of Zigbee devices.
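
    For context, Zigbee protects frames with AES-CCM under a 128-bit network key, so whoever holds the key can read and forge traffic. A minimal sketch using the Python cryptography package, with a made-up key, nonce, and payload; real frame formats and tag lengths vary by profile:

    ```python
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    # Made-up 128-bit network key; this is exactly the secret a secure element
    # is meant to keep out of readable flash.
    network_key = bytes(range(16))

    aesccm = AESCCM(network_key, tag_length=4)   # short CCM tags are common in Zigbee profiles
    nonce = bytes(13)                            # placeholder 13-byte CCM nonce
    payload = b"on/off cluster command"
    header = b"frame header (authenticated, not encrypted)"

    ciphertext = aesccm.encrypt(nonce, payload, header)

    # Anyone who extracts network_key from an unprotected device can do the same:
    print(AESCCM(network_key, tag_length=4).decrypt(nonce, ciphertext, header))
    ```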

    Reply
  10. Tomi Engdahl says:

    A “Public Health” Approach To Internet of Things Security
    http://tech.slashdot.org/story/15/08/27/1826248/a-public-health-approach-to-internet-of-things-security

    Guaranteeing your personal privacy in an era when more and more devices are connecting our daily lives to the Internet is becoming increasingly difficult to do. David Bray, CIO of the FCC, emphasizes the exponential growth we are facing by comparing the Internet we know today to a beachball, and the Internet of Everything future to the Sun. Bray says unless you plan to unplug from the Internet completely, every consumer needs to assume some responsibility for the security and overall health of the Internet of Everything.

    Why everyone must play a part in improving IoE privacy
    https://enterprisersproject.com/article/2015/8/why-everyone-must-play-part-improving-iot-privacy

    As an Eisenhower Fellow, Dr. David A. Bray recently participated in a five-week professional program that took him out of his normal day-to-day role as CIO for the Federal Communications Commission. While on the Fellowship abroad, Bray met with industry CEOs as well as the Ministries of Communication, Justice, and Defense in both Taiwan and Australia to discuss the “Internet of Everything” and how established industry, startups, public service, non-profits, and university leaders are anticipating and planning for a future in which everything is connected by the Internet.

    Bray’s hypothesis going into the Fellowship was that all sectors aren’t preparing nearly enough for exponential impacts ahead, as well as multi-sector issues such as how we continue to protect privacy, civil liberties, security, and democratic processes in the decade to come.

    In order to effectively prepare for the future, Bray believes it’s worth reminding everyone of the sheer exponential scale of the Internet of Everything that we will be facing: “We are currently moving from Internet Protocol version four (IPv4), which routes most of the Internet’s traffic today, to IPv6. If you took all the Internet addresses possible with IPv4 (2^32, or approximately 4.3 billion addresses) and put them in a beachball, by comparison all the Internet addresses possible with IPv6 (2^128) would approximate the size of our Sun. That’s not linear change, that’s exponential change.”
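
    The arithmetic behind the analogy is easy to check; the point is the ratio of the two address spaces, not the absolute numbers:

    ```python
    ipv4 = 2 ** 32
    ipv6 = 2 ** 128

    print(f"IPv4 addresses: {ipv4:,}")             # 4,294,967,296 (about 4.3 billion)
    print(f"IPv6 addresses: {ipv6:.3e}")           # about 3.4e+38
    print(f"ratio IPv6/IPv4: {ipv6 // ipv4:.3e}")  # 2^96, about 7.9e+28
    ```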

    “From my conversations with leaders in both Taiwan and Australia, we’ll need to think differently about how we approach security and privacy for the Internet of Everything, and understand regular and abnormal ‘herd behaviors’ across a massive amount of online devices,” Bray remarked.

    Since unplugging oneself from the Internet completely is becoming less realistic within an explosion of connected devices, Bray thinks that everyone must play a part in their own personal security, and he sees a digital equivalent of “cyber public health” as a potential path forward in an Internet of Everything era.

    The future is already here

    One of the biggest takeaways from his meetings in Taiwan and Australia is that the Internet of Everything is being rolled out now, despite the fact that we often discuss it in future terms, Bray said.

    “There are already working devices coming out now, and industrial controls that were originally purposed for business-to-business (B2B) use are now connected to the Internet and being made available to consumers. Devices designed for monitoring temperature in an industrial facility are now becoming consumer devices for the Internet-enabled home,” Bray said.

    While this is an exciting trend, it also presents some concerns.

    “Leaders in both Australia and Taiwan raised concerns that, for the most part and with a few exceptions, industrial controls have had less security than TCP/IP – the ‘de facto’ protocol for transmitting data over the Internet – has had. The leaders in both countries also asked me who should be responsible for baking-in security and baking-in privacy into these new controls,” Bray said. “Does it happen at the device level? At the aggregation or cloud level? At the individual application level?”

    And those weren’t the only concerns.

    “Leaders in both countries asked me who will be responsible for identifying that your grandmother’s car has been hacked in the future? Even more importantly, they asked who will be the right person to actually knock on her door and notify her?”

    Why a cyber “public health” approach might be needed

    While there are no immediate, definitive answers to these questions, Bray did hear some interesting proposals over the course of his fellowship. A recurring conversation that came up with different leaders was around the idea of taking a cyber “public health” approach via an open public-private entity partnership.

    The real world is similar to the Internet in that you can never promise 100 percent immunity to real-world diseases: “Public health exists because even with our best efforts, infectious disease outbreaks do occur in the real world, and we have to rapidly detect, respond, and help treat those affected.”

    “If we think of the Internet as a series of digital ecosystems where participants need to assume some responsibility for making sure they’re doing their best to keep their Internet devices clean and secure – the digital equivalent of washing their hands – then we can also imagine the need for cyber epidemiology when individual hygiene is insufficient in preventing a mass ‘outbreak’ or individual infection,”

    An open, opt-in model that spans multiple organizations

    “Definitions of what constituted abnormal cyber behaviors would be performed at two levels,” Bray remarked. “The first would be at the level of organizations opting-in to participate by analyzing their own networks for abnormal trends and potential exploits over time. The second would be at the level of an open public-private partnership receiving de-identified data from their networks to create a global assessment of the cyber ‘health’ of the Internet. Within the data, there might be false positives, for example when a software bug occurs or a hardware problem occurs, yet even this data would be valuable because globally you might learn a bug is widespread across multiple organizations and needs to be fixed.”

    Comments:

    The public health approach is a bad match. Responding to disease outbreaks is different in every way from responding to cyber attacks.

    1) Time frame: Disease is spread over the course of days if not weeks, while cyber crime occurs in seconds or less.

    2) Motive: Disease is not, as a general rule, spread by deliberate human attack; it is environmental. Cyber crime/security is all about attacks by humans on others. Even in the case of a deliberate disease attack, the expertise required to carry one out and the capability to spread it are extremely limiting. Cyber crime takes more perseverance than skill in the consumer market, and targets are international.

    3) Scope: When disease outbreaks occur, they tend to be specific to a geographic location and there are typically very few diseases that have to be controlled at a time. Cyber crime/security is widespread in terms of geographical targets, attack types, and severity of effects.

    4) Effectiveness: When a disease outbreak is identified, resources are assembled and deployed to contain it. But this takes days, if not weeks. It also requires cooperation of the affected population, coordination of the experts, and possibly some legal or diplomatic engagement. The response to cyber crime needs to be quick, multi-faceted, and preferably automated. The phrase “When all you have is a hammer, every problem is a nail,” comes to mind when thinking of (inter)national organizations.

    Unfortunately, the solution is probably going to require legislation to hold manufacturers and service vendors accountable for failure to provide adequate security measures in their products. Dealing with cyber attacks after the fact is too late, and cleaning up the damage is not only expensive but in too many cases impossible.

    Reply
  11. Tomi Engdahl says:

    The Coming Terrorist Threat From Autonomous Vehicles
    http://yro.slashdot.org/story/15/08/30/1539258/the-coming-terrorist-threat-from-autonomous-vehicles

    Alex Rubalcava writes that autonomous vehicles are the greatest force multiplier to emerge in decades for criminals and terrorists and open the door to new types of crime not possible today. According to Rubalcava, the biggest barrier to carrying out terrorist plans until now has been the risk of getting caught or killed by law enforcement, so only depraved hatred or religious fervor has been able to motivate someone to take on those risks as part of a plan to harm other people. “A future Timothy McVeigh will not need to drive a truck full of fertilizer to the place he intends to detonate it,” writes Rubalcava. “A burner email account, a prepaid debit card purchased with cash, and an account, tied to that burner email, with an AV car service will get him a long way to being able to place explosives near crowds, without ever being there himself.”

    According to Rubalcava the reaction to the first car bombing using an AV is going to be massive, and it’s going to be stupid. There will be calls for the government to issue a stop to all AV operations, much in the same way that the FAA made the unprecedented order to ground 4,000-plus planes across the nation after 9/11.

    A Roadmap for a World Without Drivers
    https://medium.com/@alexrubalcava/a-roadmap-for-a-world-without-drivers-573aede0c968

    Reply
  12. Tomi Engdahl says:

    Why is the smart home insecure? Because almost nobody cares
    The miserable life of the security veep
    http://www.theregister.co.uk/2015/08/27/smart_home_insecure/

    It’s easy to laugh-and-point at Samsung over its latest smart-thing disaster: after all, it should have already learned its lesson from the Smart TV debacle, right?

    Except, of course, that wherever you see “Smart Home”, “Internet of Things”, “cloud” and “connected” in the same press release, there’s a security debacle coming. It might be Nest, WeMo, security systems, or home gateways – but it’s all the same.

    Meet the suffering security bod

    Why? Let me introduce someone I’ll call the Junior VP of Embedded Systems Security, who wears the permanent pucker of chronic disappointment.

    The reason he looks so disappointed is that he’s in charge of embedded Internet of Things security for a prominent Smart Home startup.

    Everybody said “get into security, you’ll be employable forever on a good income”, so he did.

    Nobody told him that as Junior VP for Embedded Systems Security (JVPESS), his job is to give advice that’s routinely ignored or overruled.

    Meet the designer

    “All we want to do is integrate the experience of the bedside A.M. clock-radio into a fully-social cloud platform to leverage its audience reach and maximise the effectiveness of converting advertising into a positive buying experience”, the Chief Design Officer said (the CDO dresses like Jony Ive, because they retired the Steve Jobs uniform like a football club retiring the Number 10 jumper when Pele quit).

    “just a couple of last minute revisions. We have to press ‘go’ on the project by close-of-business today so if you could just look this over”

    Alas, she likes the idea of customer metrics, so the JVPESS leaves her office, once again disappointed.

    What ships is a security architecture re-implemented in half an hour using a deprecated version of OpenSSL and a self-signed certificate with hard-coded crypto credentials.

    The product is a market hit, and within a month, blackhats have dropped malware on a million Android phones, and users get messages at 14 minutes past midnight demanding 0.56 Bitcoin to switch off the message, and Nielsen thinks the top-rating show airs at 2AM on a community radio station in West Bumcrack, Iowa, whose only content is speeches from YouTube by Julian Assange and Edward Snowden.

    The JVPESS is charged with sorting out the mess, while the Ninja and Jony Ive’s Style Slave want the patched code ported to a home security console by Monday, because the second-round investors are demanding a pivot.

    Reply
  13. Tomi Engdahl says:

    Intel, NSF tip dollars into IoT security
    Medical devices, smart cars, smart homes in sights
    http://www.theregister.co.uk/2015/09/01/intel_nsf_tip_dollars_into_iot_security/

    America’s National Science Foundation has noticed the dodgy security surrounding the Internet of Things, and has splashed US$6 million in two grants to improve, umm, things.

    The grants to examine “cyber-physical systems” (CPS), awarded in partnership with Intel, have gone to the University of Pennsylvania’s Insup Lee to work on “security and privacy-aware cyber-physical systems”, and to Philip Levis at Stanford, who is working on end-to-end IoT security.

    The U-Penn grant will look at autonomous vehicles (including internal and external vehicle networks), the smart-connected medical home, and medical device interoperability.

    Lee hopes his outputs will include attack detection, ways to ensure that IoT systems recover from attacks quickly, lightweight cryptography, control designs, data privacy, and an “evidence-based framework for CPS security and privacy assurance”.

    The architectural model includes:

    A distributed model controller, with different models defining the data the application generates and stores, and how data moves;
    A common “embedded gateway cloud” architecture;
    End-to-end security provided by encryption from the IoT device to the end user device (a minimal sketch of this idea follows the list); and
    A broad “software-defined hardware” model to help developers create more secure devices: “The data processing pipeline can be compiled into a prototype hardware design and supporting software as well as test cases, diagnostics, and a debugging methodology for a developer to bring up the new device”.
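
    As a purely illustrative sketch of that end-to-end idea (this is not the project's actual design; the keys and payload are invented, and PyNaCl stands in for whatever lightweight cryptography the research settles on), a device can encrypt readings to the end user's public key so that intermediate gateways and cloud services only ever see ciphertext:

    ```python
    from nacl.public import PrivateKey, Box  # PyNaCl

    # Invented keys for illustration; a real deployment would provision and pin these.
    device_key = PrivateKey.generate()
    user_key = PrivateKey.generate()

    # The device encrypts each reading to the user's public key, so gateways and
    # cloud services in between can route the ciphertext but never read it.
    ciphertext = Box(device_key, user_key.public_key).encrypt(b'{"temp_c": 21.4}')

    # Only the end-user device, which holds user_key, can open it.
    print(Box(user_key, device_key.public_key).decrypt(ciphertext))
    ```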

    Reply
  14. Tomi Engdahl says:

    Safely Riding the Internet Highway
    http://www.edn.com/design/analog/4440126/Safely-Riding-the-Internet-Highway-?_mc=NL_EDN_EDT_EDN_today_20150831&cid=NL_EDN_EDT_EDN_today_20150831&elq=61407c1e3071445f91deeffef5596b9e&elqCampaignId=24570&elqaid=27810&elqat=1&elqTrackId=7e488d0d1b314dadb8f33c019e2d7fe1

    Learning how to drive the internet highway on the path to the Smart Home means rules, regulations and laws.

    Compared to most of the world’s infrastructure, it is amazing how primitive and lawless the internet really is. Yes, the web is technically sophisticated but when trying to understand how it should be used and how it can benefit our lives, it is still a wild and unruly path, needing a great deal of growth and maturing.

    Compared with the learned knowledge of how we travel our highway system, our understanding of how to travel the internet highway is still in the horse-and-buggy days.

    To be efficient, useful and safe, societies worldwide have developed customs, rules and regulations to ensure that we don’t get run over and injured as we travel from one location to another.

    The real acceleration of car technologies occurred about 125 years ago. This development changed the way we live our lives today. Not only did it create a complex and worldwide automotive manufacturing industry, but it also gave birth to a cornucopia of associated industries and infrastructures. For example:

    1. Infrastructure. To service the world of the car, today we have major transcontinental highways, bridges that cross major rivers and seas, tunnels that go under water and through mountains, etc.
    2. Suppliers. Of course, we cannot forget the petrochemical industry. Without the need for fuel to power our vehicles, the oil industry as we know it today would not exist.
    3. Legislation. Although somewhat different from country to country
    4. Enforcement. With all the rules put in place there is also a mechanism for compliance and rule enforcing, embedded in the larger legal structure of a country
    5. Training. Although also different from country to country, it is common that someone needs to go through driving lessons and an exam to obtain a license for driving, before legally allowed to drive
    6. Insurance. With the increase in the speed of the cars in most countries it is now required to have a liability insurance to drive a car.
    7. A “standardized” Operator interface.

    The concept of ‘driving’ is a complete fabric that has evolved from the basic concept of a car.

    Now let us compare this highway and driving evolution to the experience of traveling on the internet highway.

    Although the internet has come a long way, it is clear that the fabric of the internet highway is still very immature, and everyone who is getting on the internet is doing so very much at his or her own risk.

    While the infrastructure is rapidly building, reaching into all the small corners of the world, the responsibility of getting on the internet is still very much with the individual users, without clear rules or legislation around basic principles of security and privacy. People buy a software package for security against computer viruses, more or less as a sort of insurance premium – without any assurance that this will fully protect them. Many people use a smart phone to access the web – with very little protection against a rapidly growing assortment of attack vectors.

    Because of the open character of the internet, the lack of security goes even a step further. Governments that are supposed to set the traffic rules on the internet are often the biggest culprits in exploiting the lack of rules to their own advantage.

    In the “real” world, “the people” have come a long way protecting themselves against an overzealous government while in the virtual world of the internet, governments and parliaments are still learning to understand the concept of the internet, and whether existing legislation around security and privacy is adequate.

    A second area of total confusion is around the privacy and ownership of personal data. Large companies develop appealing applications that people can use for free (Google, Facebook, Twitter, etc.), but by downloading, installing and using them, people explicitly give their privacy away. Have you ever read the small print (the EULA – End User License Agreement) that comes with an application before you download it onto your phone? Most applications – especially useful free apps – collect a great deal about the user’s life – where they are and when, who they contact, what sites they research and visit – in order to package this data and resell it to advertisers.

    Most “free” online games only survive by hooking their players into buying shortcuts and add-ons. Others can be victimized by phishing emails, exploiting people’s greed, curiosity and simple lack of knowledge about internet scams.

    The internet can be a dangerous and costly place, where people unknowingly expose themselves in the virtual world in a way that bad guys can come after them in the physical world, opening themselves up for extortion or eventually leading to suicide.

    With the emerging Internet of Things, the number of devices on the internet will increase exponentially, with dozens, if not hundreds, of devices in every home feeding valuable personal data onto the web. With data analytics software becoming more powerful and ubiquitous, both the usefulness and the potential for abuse will increase: the stakes are just getting higher, on both sides.

    There is no simple solution. The internet is a great place to be, but at the same time it is full of dangers. The internet is the world’s greatest information tool, but we need to learn how to use it carefully so that we do not end up injuring ourselves, our families and our cultures.

    As a society, we have to invest in understanding these dangers and learning how to address them. This will not come for free, just like our cars and roads did not come for free. We will have to build a fabric around the internet that includes legislation, enforcement and training. Technology is complex, and there still is a lot of ongoing development around the internet.

    There is reason for optimism. Today, driving may still be a dangerous thing, but it is now safer than it was ever before. Development of common sense rules, standards and infrastructure took a while – and so it will be with driving on the internet freeway. That is – if we put the right efforts and resources into it!

    Reply
  15. Tomi Engdahl says:

    Of 10 IoT-connected home security systems tested, 100% are full of security FAIL
    http://www.computerworld.com/article/2881942/of-10-iot-connected-home-security-systems-tested-100-are-full-of-security-fail.html

    HP researchers tested 10 of the newest connected home security systems and discovered the Internet of Things-connected security systems are full of security FAIL.

    If you jump into the Internet of Things and purchase a home security system to provide security, you may actually be less secure and more vulnerable than before you bought a security system. HP Fortify researchers tested 10 of the newest home security systems and discovered IoT-connected home security systems are full of security fail.

    Connected home security systems are connected via the cloud to a mobile device or the web for remote monitoring, and come with a variety of features such as motion detectors, door and window sensors and video cameras with recording capabilities. Although “the intent of these systems is to provide security and remote monitoring to a home owner,” HP researchers said (pdf), “given the vulnerabilities we discovered, the owner of the home security system may not be the only one monitoring the home.”

    “The biggest takeaway is the fact that we were able to brute force against all 10 systems, meaning they had the trifecta of fail (enumerable usernames, weak password policy, and no account lockout), meaning we could gather and watch home video remotely,” wrote HP’s Daniel Miessler.
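
    The “no account lockout” leg of that trifecta is the easiest to illustrate. A minimal, hypothetical server-side sketch (not from HP's report) that tracks recent failures per account and refuses further attempts once a threshold is hit:

    ```python
    import time
    from collections import defaultdict

    MAX_FAILURES = 5        # failed attempts allowed per window
    WINDOW_SECONDS = 900    # 15-minute lockout window

    _failures = defaultdict(list)  # username -> timestamps of recent failed attempts

    def login_allowed(username):
        """Refuse further attempts once an account has too many recent failures."""
        now = time.time()
        recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
        _failures[username] = recent
        return len(recent) < MAX_FAILURES

    def record_failure(username):
        _failures[username].append(time.time())
    ```

    Combined with non-enumerable usernames and a real password policy, even a crude counter like this makes the brute-force attack described above impractical.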

    HP Fortify found an “alarmingly high number of authentication and authorization issues along with concerns regarding mobile and cloud-based web interfaces.” Under the category of insufficient authentication and authorization, the researchers reported (pdf):

    100% allowed the use of weak passwords
    100% lacked an account lockout mechanism that would prevent automation attacks
    100% were vulnerable to account harvesting, allowing attackers to guess login credentials and gain access
    Four of the seven systems that had cameras gave the owner the ability to grant video access to additional users, further exacerbating account harvesting issues.
    Two of the systems allowed video to be streamed locally without authentication
    A single system offered two-factor authentication

    “Properly configured transport encryption is especially important since security is a primary function of these home security systems.” Yet regarding the encryption that is critical for protecting “sensitive data such as credentials, personal information, device security settings and private video to name a few,” they discovered that “50% exhibited improperly configured or poorly implemented SSL/TLS.”

    70% of the home security systems allowed “unrestricted account enumeration through their insecure cloud-based interface.” Mobile didn’t fare much better as “50% allowed unrestricted account enumeration through their mobile application interface.”

    According to HP’s infographic (pdf), “If video streaming is available through a cloud-based web or mobile application interface, then video can be viewed by an Internet-based attacker from hacked accounts anywhere in the world.”

    “It seems that every time we introduce a new space in IT we lose 10 years from our collective security knowledge,” stated Miessler. “The Internet of Things is worse than just a new insecure space: it’s a Frankenbeast of technology that links network, application, mobile, and cloud technologies together into a single ecosystem, and it unfortunately seems to be taking on the worst security characteristics of each.”

    Reply
  16. Tomi Engdahl says:

    Maybe we should retire RC4 in IoT also:

    Emil Protalinski / VentureBeat:
    Google, Microsoft, and Mozilla will drop RC4 encryption in Chrome, Edge, IE, and Firefox next year
    http://venturebeat.com/2015/09/01/google-microsoft-and-mozilla-will-drop-rc4-support-in-chrome-edge-ie-and-firefox-next-year/

    Google, Microsoft, and Mozilla all made the same announcement today: They will drop support for the RC4 cipher in their respective browsers. Chrome, Edge, Internet Explorer, and Firefox will all stop using the outdated security technology next year.

    RC4 is a stream cipher designed in 1987 that has been widely supported across browsers and online services for the purposes of encryption. Multiple vulnerabilities have been discovered in RC4 over the years, making it possible to crack within days or even hours.

    In February, new attacks prompted the Internet Engineering Task Force (IETF) to prohibit the use of RC4 with TLS. Browser makers have made adjustments to ensure they only use RC4 when absolutely necessary, but now they want to take it a step further.
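
    IoT firmware that embeds a TLS stack can do the same thing today rather than waiting for browser defaults to change. A minimal sketch with Python's ssl module, excluding RC4 via the cipher string; what is actually offered still depends on the linked OpenSSL build:

    ```python
    import ssl

    context = ssl.create_default_context()
    context.set_ciphers("DEFAULT:!RC4")  # exclude RC4 suites explicitly

    # Nothing in the resulting cipher list should mention RC4.
    assert not any("RC4" in suite["name"] for suite in context.get_ciphers())
    print(len(context.get_ciphers()), "cipher suites enabled, none using RC4")
    ```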

    Reply
  17. Tomi Engdahl says:

    New Technologies Secure the IoT & IIoT
    http://www.eetimes.com/document.asp?doc_id=1327578&

    Cybercriminals are constantly discovering new ways to exploit the growing Internet of Things (IoT) and Industrial Internet of Things (IIoT).

    Traditional cybersecurity solutions rely on techniques like signatures and repeated updates, which means they are difficult to integrate and they cannot secure IoT systems and devices effectively. For example, conventional solutions simply cannot cope with zero-day (or zero-hour) attacks. (A zero day/hour vulnerability refers to a security hole in software that is unknown to the vendor and that can be exploited by hackers before the vendor becomes aware of it.)

    In order to address these issues, Webroot has introduced its IoT Security Toolkit — a set of technologies that enables IoT solution designers and integrators to leverage cloud-based, real-time threat intelligence services from Webroot to protect deployed systems against cyberattacks.

    The Webroot IoT Security Toolkit also features high-performance, low system impact, small device footprint agents. These agents constantly collect data about files and other system-level events; they detect new and altered files or anomalous conditions; and they communicate all relevant information to the BrightCloud Threat Intelligence Platform.

    The Webroot IoT Security Toolkit also includes a secure web gateway. This cloud-based service inspects and filters all incoming and outgoing traffic between devices and their control systems over the Internet, intercepting malware before it reaches downstream networks or endpoint devices.
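
    To make the agent idea concrete (this is not Webroot's implementation, just a generic sketch with placeholder paths), detecting “new and altered files” can be as simple as hashing a directory tree and diffing it against a stored baseline; a real agent would ship the observations to a threat-intelligence service instead of printing them:

    ```python
    import hashlib
    import json
    import os

    WATCHED_DIR = "/opt/device/app"                 # placeholder path
    BASELINE_FILE = "/var/lib/agent/baseline.json"  # placeholder path

    def snapshot(root):
        """Map every file under root to the SHA-256 digest of its contents."""
        digests = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as fh:
                    digests[path] = hashlib.sha256(fh.read()).hexdigest()
        return digests

    def report_changes():
        current = snapshot(WATCHED_DIR)
        try:
            with open(BASELINE_FILE) as fh:
                baseline = json.load(fh)
        except FileNotFoundError:
            baseline = {}
        new_files = sorted(set(current) - set(baseline))
        altered = sorted(p for p in current if p in baseline and current[p] != baseline[p])
        print("new files:", new_files)
        print("altered files:", altered)
        with open(BASELINE_FILE, "w") as fh:
            json.dump(current, fh)
    ```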

    Reply
  18. Tomi Engdahl says:

    The rapid growth in Internet connected devices increases the opportunity for rogue elements to hack into systems and cause damage. Device designers must become increasingly vigilant with the security of connected devices. This webinar will examine two weapons in the arsenal of tools available to embedded Linux system designers to develop robust devices intended for large-scale deployment in Industrial and IoT connected systems.

    What You Will Learn

    SELinux and SMACK

    Source: http://www.mentor.com/embedded-software/events/using-selinux-and-smack-on-embedded-linux-in-industrial-and-iot-devices?contactid=1&PC=L&c=2015_08_31_embedded_technical_news

    Reply
  19. Tomi Engdahl says:

    The Year of the Car Hacks
    http://hackaday.com/2015/09/01/the-year-of-the-car-hacks/

    With the summer’s big security conferences over, now is a good time to take a look back on automotive security. With talks about attacks on Chrysler, GM and Tesla, and a whole new Car Hacking village at DEF CON, it’s becoming clear that autosec is a theme that isn’t going away.

    Up until this year, the main theme of autosec has been the in-vehicle network. This is the connection between the controllers that run your engine, pulse your anti-lock brakes, fire your airbags, and play your tunes. In most vehicles, they communicate over a protocol called Controller Area Network (CAN).

    A number of talks were given on in-vehicle network security, which revealed a common theme: access to the internal network gives control of the vehicle. We even had a series about it here on Hackaday.

    The response from the automotive industry was a collective “yeah, we already knew that.” These networks were never designed to be secure, but focused on providing reliable, real-time data transfer between controllers. With data transfer as the main design goal, it was inevitable there would be a few interesting exploits.

    Infotainment and Telematics

    Automotive companies are working hard on integrating new features in to distinguish their products and create new revenue streams. Want a concierge service? You can pay for GM’s OnStar. Need an in-car WiFi hotspot? Chrysler has that built into uConnect for $35 a month. Want to control every aspect of your vehicle from a touch screen? Maybe the Tesla Model S is for you.

    There are two main features that are leading to more connected vehicles: infotainment and telematics. Infotainment systems are the in-vehicle computers that let you play music, get vehicle information, navigate, and more. Telematics systems provide vehicle data to third parties for safety, diagnostics, and management.

    Regulators are helping speed up the process. Due to the eCall initiative, all new vehicles sold in Europe after 2018 must provide voice communication and a “minimum set of data” in the event of an accident. This means vehicles will be required by law to have a cellular connection supporting voice and data.

    The Chrysler hack took advantage of a vulnerability that anyone familiar with network security would consider trivial: an open port running an insecure service. If you want to know the details of the hack, [Chris] and [Charlie] have published a detailed paper that’s definitely worth a read.

    The crux of the vulnerability relied on an assumption made by Chrysler. Their telematics unit had two processors, one connected to the in-vehicle network and one connected to the internet. The assumption was that the airgap between these devices prevented remote access to the in-vehicle network.

    Unfortunately, their airgap was made of copper. It was a SPI connection between the two processors, which allows for a variety of commands to be executed, including a firmware update. With rogue firmware running on the in-vehicle network, we’re back to the five-year old issue of in-vehicle networks being insecure.

    [Chris] and [Charlie] decided to focus on a Chrysler Jeep Cherokee, but let’s not place all the blame on Chrysler. The uConnect device running the vulnerable service was actually made by Harman. Harman is the largest manufacturer of automotive audio and infotainment systems. You’ll find their devices in vehicles from Audi, BMW, Land Rover, Mercedes-Benz, Volvo, Buick, and others.

    This is how the automotive industry tends to work nowadays. An OEM, like Chrysler, integrates parts from a variety of “Tier One” suppliers. The Tier One suppliers source parts from “Tier Two” suppliers. It’s up to Chrysler to choose these parts, then stick them all together into a vehicle.

    When buying from a range of suppliers, security is a hard problem. As an engineer, you’re stuck with integrating parts that were chosen based on a range of criteria, and security isn’t at the forefront of purchasing decisions. OEMs do not always have the resources to evaluate the security of the products they are purchasing, and instead rely on the suppliers to build secure products.

    The other issue with suppliers is that fixes happen slowly. Chrysler could not patch this issue themselves, but instead needed to wait for the supplier to do it. After the patch was complete, they likely needed to perform testing and validation of the patch before releasing. This all takes time.

    Outside of the security industry, people have been hacking cars for years. Tuners charge money for “chipping” cars to improve performance, remove limiters, and alter settings.

    This type of work has good intentions; people pay for modifications to their vehicles. The security industry is more focused on nefarious motives.

    Vehicles are also becoming more automated. Advanced Driver Assistance Systems (ADAS) improve safety by giving computers control of the vehicle’s steering, throttle, and brakes. However, these systems also present an additional threat if the system is compromised.

    http://illmatics.com/Remote%20Car%20Hacking.pdf

    Reply
  20. Tomi Engdahl says:

    How Secure are Smartwatches? Not Very.
    HP Fortify finds 100 percent of tested smartwatches exhibit security flaws
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1327588&

    New analysis indicates that all the smartwatches tested contain significant vulnerabilities, including insufficient authentication, lack of encryption, and privacy problems.

    Mobile phone companies have been pushing smartwatches as a way to pump up a saturated and declining market, but there are good reasons to resist the marketing hype and not buy one. According to a report titled “Internet of Things Security Study: Smartwatches” just released by HP Fortify, as serious as security vulnerabilities have been on smartphones, they may be worse on smartwatches.

    The Fortify team tested 10 Android- and Apple iOS-based devices and found that all contained significant vulnerabilities, including insufficient authentication, lack of encryption, and privacy concerns. Included in the testing were Android, iOS cloud, and mobile application components. As a result of these findings, Jason Schmitt, general manager, HP Security, Fortify, questions whether smartwatches are designed adequately to store and protect the sensitive data and tasks they are built to process.

    Insufficient user authentication/authorization
    Every smartwatch tested was paired with a mobile interface that lacked two-factor authentication and the ability to lock out accounts after failed password attempts. Thirty percent of the units tested were vulnerable to account harvesting, meaning an attacker could gain access to the device and data due to a combination of weak password policy, lack of account lockout, and user enumeration.
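
    The missing lockout control is not hard to build. A minimal sketch (not taken from the HP study; the thresholds are arbitrary) of locking an account after repeated failed logins:

        import time

        MAX_ATTEMPTS = 5           # lock after 5 consecutive failures
        LOCKOUT_SECONDS = 15 * 60  # keep the account locked for 15 minutes

        failed = {}  # username -> (consecutive failures, time of last failure)

        def login_allowed(username):
            count, last = failed.get(username, (0, 0.0))
            if count >= MAX_ATTEMPTS and time.time() - last < LOCKOUT_SECONDS:
                return False   # account is temporarily locked
            return True

        def record_attempt(username, success):
            if success:
                failed.pop(username, None)   # reset the counter on success
            else:
                count, _ = failed.get(username, (0, 0.0))
                failed[username] = (count + 1, time.time())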

    Lack of transport encryption
    While 100 percent of the test products implemented transport encryption using SSL/TLS, 40 percent of the cloud connections made use of weak ciphers and remained exposed to the POODLE attack (a flaw in the legacy SSL 3.0 protocol) or continued to use the obsolete SSL v2.
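
    For comparison, a client can refuse the legacy protocols and weak ciphers the report complains about in a few lines. This is only an illustrative sketch (Python 3.7 or later, placeholder hostname), not how any particular vendor’s cloud connection is built:

        import socket
        import ssl

        # Build a client context that rejects SSLv2/SSLv3 and weak cipher suites.
        context = ssl.create_default_context()
        context.minimum_version = ssl.TLSVersion.TLSv1_2   # rules out POODLE-era protocols
        context.set_ciphers("ECDHE+AESGCM")                # forward-secret AEAD ciphers only

        with socket.create_connection(("cloud.example.com", 443)) as sock:
            with context.wrap_socket(sock, server_hostname="cloud.example.com") as tls:
                print(tls.version(), tls.cipher())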

    http://go.saas.hp.com/l/28912/2015-07-20/325lbm/28912/69038/IoT_Research_Series_Smartwatches.pdf

    Reply
  21. Tomi Engdahl says:

    Despite Reports of Hacking, Baby Monitors Remain Woefully Insecure
    http://it.slashdot.org/story/15/09/02/229208/despite-reports-of-hacking-baby-monitors-remain-woefully-insecure

    Researchers from security firm Rapid7 have found serious vulnerabilities in nine video baby monitors from various manufacturers. Among them: Hidden and hard-coded credentials providing local and remote access over services like SSH or Telnet; unencrypted video streams sent to the user’s mobile phone; unencrypted Web and mobile application functions and unprotected API keys and credentials; and other vulnerabilities that could allow attackers to abuse the devices

    Despite reports of hacking, baby monitors remain woefully insecure
    http://www.itworld.com/article/2979713/despite-reports-of-hacking-baby-monitors-remain-woefully-insecure.html

    Researchers from Rapid7 found serious vulnerabilities in nine video baby monitor devices

    Disturbing reports in recent years of hackers hijacking baby monitors and screaming at children have creeped out parents, but these incidents apparently haven’t spooked makers of these devices.

    A security analysis of nine baby monitors from different manufacturers revealed serious vulnerabilities and design flaws that could allow hackers to hijack their video feeds or take full control of the devices.

    Reply
  22. Tomi Engdahl says:

    Hacking Medical Mannequins
    http://science.slashdot.org/story/15/09/02/1636236/hacking-medical-mannequins

    A team of researchers at the University of South Alabama is investigating potential breaches of medical devices used in training, taking the mannequin iStan as its prime target in its scenario-based research

    Hacking medical mannequins
    https://thestack.com/security/2015/09/02/hacking-medical-mannequins/

    The computer scientists investigated the ease of compromising a training mannequin system, tampering with communication vulnerabilities identified between the device and its controlling computer.

    The mannequin model used, named iStan, is one of the most advanced wireless patient simulator devices and is in use at the College of Nursing at the university. The device can bleed, secrete bodily fluids, has a blood pressure and heart rate, and breathes realistically. The simulator links with iStan software which controls the mannequin remotely by directing commands and inputs which represent real-life situations.

    Identifying the network security solution and network protocol as the vulnerable components, the team was able to carry out brute force attacks against the router PIN, and denial of service (DoS) attacks, using open source tools such as BackTrack.

    The paper reads: ‘If medical training environments are breached, the long term ripple effect on the medical profession, potentially, impacts thousands of lives due to incorrect analysis of life threatening critical data by medical personnel.’

    Reply
  23. Tomi Engdahl says:

    IoT baby monitors STILL revealing live streams of sleeping kids
    The hacker that rocks the cradle
    http://www.theregister.co.uk/2015/09/03/baby_monitors_insecure_internet_things/

    Internet-connected baby monitors are riddled with security flaws that could broadcast live footage of your sleeping children to the world and his dog, according to new research.

    Mark Stanislav, a security researcher at Rapid7, discovered numerous security weaknesses and design flaws after evaluating nine different devices from eight different vendors. Security flaws included hidden, hardcoded credentials, unencrypted video streaming, unencrypted web and mobile app functions, and much more.

    Isolated real-world reports of hacking of baby monitors date back at least two years, so it’s not as if the problem is new.

    Last year privacy watchdogs at the ICO warned parents to change the default passwords on webcams to stop perverts snooping on kids.

    The warning followed a security flap created by the site, hosted in Russia, that streamed live footage ranging from CCTV networks to built-in cameras from baby monitors. The website itself – insecam.cc – accesses the cams using the default login credentials, which are freely available online for thousands of devices.

    Possessed baby monitor shouts obscenities at Texas tot
    ‘Somewhat of a blessing’ the child is deaf, say parents
    http://www.theregister.co.uk/2013/08/14/eurohacker_shouts_obscenities_at_texas_tot_via_hijacked_baby_monitor

    Reply
  24. Tomi Engdahl says:

    Managing the risk of the Internet of Things
    http://www.controleng.com/single-article/managing-the-risk-of-the-internet-of-things/f862ceb88986099ed2de7d98b4d6db81.html

    The Internet of Things (IoT) is growing rapidly, and more devices are going online. Are industry, consumers, and the companies creating products and services and integrating the technologies ready to deal with the security that goes with protecting the devices and users? Industrial network design and best practices can help. See six steps for IoT risk mitigation.

    The Internet of Things (IoT), or variations of the term, has saturated the media with stories of connected vehicles, networked wearables, home automation, and smart meters. With such significant conversation, one would think that this market was invented yesterday, but, in fact, the machine-to-machine communication that typically interfaces with the physical world via communication networks has been with us for a long time. The less flashy devices known as industrial control systems have been running our electric grids, oil pipelines, and manufacturing plants for decades. Like cloud computing, which partly owes its lineage to the mainframe timesharing concepts of the 1960s and 1970s, IoT has been rebranded.

    But notwithstanding the hype, the market for connected devices is shifting. Like cloud technology, IoT is massively larger in scale than its earlier generations and is growing fast. What makes it significant, and a little scary, is its sheer ubiquity, touching consumers and businesses alike.

    IoT defined

    To understand the risk to IoT, definitions are needed. Clearly, IoT is a somewhat fluid term and owes its name more to media hype rather than to a multi-year standards process. Consequently, it has the “know it when you see it” quality. At its most basic level, IoT implies network connectivity, the use of embedded (or limited computing) devices, and, typically, involves some connection to the physical world, such as measuring temperature, blood pressure, or road vibrations. In essence, it implies network connectivity for everyday devices that traditionally were not considered computers; however, nearly every use of IoT also involves some traditional computer usage. For example, these small, embedded devices usually report their status and receive instruction from a traditional computer workstation, server, laptop, or smartphone.

    It’s better to think of IoT as less a series of small devices and more of an ecosystem that requires multiple components to work correctly. The supporting components, while appearing to be normal computing devices, still need to be adjusted for the real-time nature of IoT and the massive volumes of data often associated with it.

    But fundamentally, IoT is about the core components that interact with the physical world. They typically include sensors to measure things like temperature, wind speed, or presence of an object.

    While IoT is still a relatively new concept, core components have populated industrial networks for decades, and they foretell some of the risks that could potentially be faced. Industrial networks have frequently been the subject of cyber attacks. Unlike traditional information technology components, they are often more vulnerable because many industrial networks were never designed to connect to networks that were linked to a hostile Internet. Instead, those closed networks assumed physical attacks were the threats to guard against.

    IoT threats are real

    Threats have been executed through IoT.

    Nearly two decades ago, a disgruntled former employee used network access to remotely release sewage.
    In 2007, researchers demonstrated that a generator could be destroyed by remotely opening and closing circuit breakers rapidly.
    In 2014, hackers broke into the industrial network of a German steel mill and prevented a blast furnace from shutting down.
    With respect to the more modern IoT devices, a researcher hacked his insulin pump, others managed to compromise smart meters, and, in a segment aired on “60 Minutes,” Defense Advanced Research Projects Agency (DARPA) scientists remotely controlled automobile brakes.

    These examples show how securing billions more IoT devices, deploying them on a wide variety of networks, and connecting some of them directly to the Internet will continue to pose great challenges.

    Even with better network stacks and more rigorous cyber-security controls, the nature of many of these devices means that the robust controls that exist on typical workstations, laptops, servers, or even smartphones are unlikely to be implemented in the devices’ design. Controls need to be evaluated and implemented in a different way. Moreover, these devices are incredibly diverse in application, location, and architecture. Some rely on centralized control, while others have their own intelligence and often operate autonomously.

    Additionally, it is often the data that matters. While the idea of hacking cars to run them off the road or manipulating pacemakers to induce heart attacks may generate the headlines and Hollywood movie plots, the reality is that much of the IoT world exists simply to observe and report. Their job is to generate and forward data.

    That data will serve as the foundation by which everything else is built and operated. And few will ever stop and think whether the underlying data is correct.

    By manipulating data, hackers could threaten air, water, food supply, and personal safety. For example, the correct data will keep us from dying in car crashes when nearly all vehicles will be self-driving, heavily relying on sensor data to operate correctly. For that reason, the commands issued to IoT devices and the integrity of data received from those devices must be protected.

    Building a risk model

    To evaluate IoT risk, first define the use case. How will the devices and the supporting infrastructure be used? While technical descriptions are useful, the focus should be on the relevant business processes and expected outcomes. How exactly will this produce operational, business, or personal value? Given that nearly all projects must be approved with similar justification, this information shouldn’t be hard to find.

    Unlike broad-based budgeting that seeks a general goal, but doesn’t touch on the how, these use cases should be very specific and should include details: the kind of data involved, whether humans will interact directly with these devices in a physical sense (such as health monitoring devices, self-driving vehicles, or control system computer banks), whether the devices will interact with existing technology, and any assumptions that are made about the infrastructure that should already be in place. All business objectives should be noted, because one of the tasks for a risk analysis is to determine the consequences of those objectives not being met due to hacking or some other device failure.

    It is also important that a use case be created for each variation. For example, connecting a smart meter that merely measures and reports energy usage has very different implications than one that also supports the ability to remotely disconnect power. The details matter. Deciding how detailed the use case should be is often a judgment call.

    Once the use case is settled, companies are safe to analyze potential impacts. As noted, the easiest place to start is looking at the business objectives of the use case and asking what would happen if those objectives were denied in a cyber security attack.

    If we’re talking about a pacemaker or an airplane, the consequences could be loss of human lives. In most cases, a device would simply cease to supply data. It’s important to understand the consequences from a variety of perspectives. For example, a device that ceases to function will likely signal the owner that something is wrong, and a repair or replacement likely would occur promptly. However, if a hacker managed to cause the device to send the wrong data, critical business functions could be impacted, leading to potentially worse consequences over time. In either case, it’s helpful to start thinking about the worst possible, but plausible, impact that could result from a cyber attack, including downstream effects such as lost customer revenue.

    More importantly, considering what economists call “externalities” is also critical. For example, unless a company is sued, lost customer data doesn’t really hurt it directly, although indirect reputational harm can occur.

    Prioritize vulnerabilities

    Once impacts are known, the potential vulnerabilities are easier to identify and prioritize. Identification of vulnerabilities usually starts with examining all interfaces and potential attack surfaces, both logical and physical. Because the number is often quite large, it may be necessary to focus on likely and sufficiently impactful threats.

    Notably, certain basic cyber security practices should always be implemented, regardless of threat, because devices could always be repurposed, and even bored teenagers can draw upon a huge reservoir of hacking tools and techniques readily available online at no cost. So looking for buffer overflows, assigning appropriate user roles, implementing acceptable user authentication, and applying patches to discovered software flaws, where feasible, are always recommended.

    Finally, the exercise goes beyond the standard risk analysis to recognize that IoT will not stand still. By its very nature, it will grow and mutate to satisfy demands. Technologies like road sensors and smart meters are not designed to be replaced frequently, so software updates and network changes will need to use the installed hardware. That also means that considerations like upgradeability and extensibility, while not largely cyber security considerations, become bigger issues with IoT. Consequently, future use and misuse cases should be identified.

    Six IoT risk mitigation steps

    For those currently involved with IoT, which includes nearly everyone, six basic actions should be taken regardless of the risk involved or the dollar amount being spent on the program.

    Beginning right away, IoT owners should identify current IoT implementations that are in place, planned, or anticipated. This may include building management systems for heating and air conditioning or even the mechanisms used to run the elevators if they’re networked.
    Next, organizations should identify any security policies or procedures related to IoT. If none exist, companies should at least document some high-level controls that should be in place, such as locking the elevator machine room.
    Within three months, organizations should ensure that device owners have applied the risk model described above and reviewed the results with management.
    Organizations should also identify mitigation steps and associated costs to achieve the desired state.
    And in the next six months, organizations should identify IoT risks that they don’t control, but that affect their organization.
    Organizations should also participate in industry groups to encourage development of security standards for the devices that most affect them.

    Reply
  25. Tomi Engdahl says:

    Enhance IoT security: use snipers
    http://www.edn.com/electronics-blogs/eye-on-iot-/4440254/Enhance-IoT-security–use-snipers?_mc=NL_EDN_EDT_EDN_weekly_20150903&cid=NL_EDN_EDT_EDN_weekly_20150903&elq=58f65a70bbe94ce98c0344f09c7bd6b7&elqCampaignId=24640&elqaid=27917&elqat=1&elqTrackId=14252202d7e04243a8bb29f60f3f107e

    There has been a lot of talk about IoT security, but not much new in the way of ideas for attaining it. Mostly the discussion has been to raise awareness of the need for security, even in the smallest devices, and to encourage designing security in from the very beginning. But now Pat Burns, CEO of Haystack Technologies, has put out a proposal that is intriguing in its simplicity and seems well worth careful consideration.

    Basically, what Burns said in an article published on LinkedIn is that the first step in securing IoT devices is making them harder for hackers to find. In particular, change their operation so that they are not continually broadcasting their presence to facilitate discovery by the network. They should instead, Burns argues, operate in a mode of radio silence for as long as possible, broadcasting only in response to an authorized query or when they have something vital to report.

    In other words: operate in stealth mode.

    In his discussion of legacy wireless technologies for the IoT he categorizes devices into three groups. Devices that talk non-stop, communicating to the network every few milliseconds whether or not they have important information to share, he calls “chatterboxes.” Devices that broadcast on a regular, scheduled basis – again whether or not they have an important message – he calls “cuckoo clocks.” Devices that only broadcast when they have something important to share or are responding to a query from an authorized entity, he calls “snipers.” Burns is suggesting that IoT devices should be snipers.

    A Simple Proposal To Improve Security for the Internet of Things
    https://www.linkedin.com/pulse/simple-proposal-improve-security-internet-things-pat-burns?published=t

    A small change can help stop big hackers.

    Almost every IoT security breach in recent news can be traced to the poor architecture of the wireless protocol used by the device. But unlike fighter pilots who maintain radio silence in order to avoid detection by the enemy, it is surprising how few IoT technologies were designed with even a minimum level of stealth in mind.

    Remaining quiet is not the most important principle of IoT security and privacy, but it’s a pretty basic one.

    Avoiding or minimizing the chances of unauthorized discovery is not technically difficult. But today’s IoT technologies like Bluetooth, 6lowpan, Sigfox, LoRaWAN, and others make unauthorized discovery very easy and it creates the worst kind of angst in IT departments.

    Most wireless IoT technologies were originally conceived as ways to stream large files (Bluetooth, WiFi) while some were designed to be “lighter” versions of WiFi (e.g., ZigBee). Today they are being re-positioned as “IoT” technologies and security, to put it nicely, is an afterthought. Oh yes — some have tried to “layer on” security and may profess to support encryption, but hacks for all of these technologies are quite public yet fundamentally traceable to one original sin:

    these wireless IoT technologies don’t know how to keep quiet.

    In technical practice, however, it means upgrading or replacing legacy technologies, of which there are roughly three classes as it relates to stealth:

    Chatterboxes. These devices talk non-stop, sending data to the network every few milliseconds whether they have something important to say or they just want to repeat what they said 200 milliseconds ago. They usually share things in the clear (not encrypted) and are easy to spot in the wild. And hack.
    Cuckoos. Like a cuckoo clock, these devices don’t necessarily talk non-stop but they periodically blurt out their presence — usually every few seconds — in order to sync with a network and aid discovery. They are usually capable of sending a message when they have news of an event to share — like a change in temperature, for example.
    Snipers. These devices speak only when an authorized device queries them or, like Cuckoos, when they have something important to share with the network. They don’t “fire” their weapon unless absolutely required.

    Most wireless IoT endpoints in the marketplace today fall into the chatterbox or cuckoo class, which violate what should be a first principle of IoT device security:

    Be stealthy.

    Why Stealthy IoT Endpoint Design Matters

    “Properly implemented strong crypto systems are one of the few things that you can rely on. Unfortunately, endpoint security is so terrifically weak that NSA can frequently find ways around it.” — Edward Snowden

    Silence is almost always inexpensive, easy to do, and effective and is probably the most popular security behavior practiced by humans since … Neanderthal man.

    There is no technical reason that the Internet of Things cannot embrace silence, or stealth as I prefer to call it, as a first principle of endpoint security. Stealth is not a silver bullet for IoT security (there is no silver bullet) and stealth alone won’t protect a network from intrusions, but dollar-for-dollar, stealth is the simplest, cheapest, and most effective form of IoT security protection available.
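
    A toy sketch of the “sniper” behaviour Burns describes, with the sensor read and radio uplink stubbed out and the reporting threshold chosen arbitrarily:

        import random
        import time

        REPORT_THRESHOLD = 2.0   # degrees C of change worth reporting (arbitrary)

        def read_temperature():
            return 20.0 + random.uniform(-5, 5)   # stand-in for a real sensor read

        def transmit(payload):
            print("radio on:", payload)           # stand-in for an authenticated uplink

        # Stay silent unless the value changes enough to matter (or an authorised
        # poll arrives); never beacon just to announce presence.
        last_sent = None
        while True:
            value = read_temperature()
            if last_sent is None or abs(value - last_sent) >= REPORT_THRESHOLD:
                transmit({"temp_c": round(value, 1)})
                last_sent = value
            time.sleep(60)   # wake briefly each minute, but usually say nothing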

    Reply
  26. Tomi Engdahl says:

    Our Rising Dependency on Cyberphysical
    http://www.securityweek.com/our-rising-dependency-cyberphysical

    In a previous column, I discussed how “cyberphysical” is an appropriate term to capture this new world we are entering, where machines operate automatically and rapidly based on real-time feedback. The next step is to understand why this cyberphysical matters to the wider population that these machines will service. We can then assess levels of risk in order to better develop a culture of cyberphysical security.

    The most notable trend is that critical services we rely on are increasingly dependent upon cyberphysical interactivity. The scope of these critical services continues to broaden and deepen across industries, especially as the functionality and speed of devices is more widely understood.

    To me, nothing offers a more direct example of cyberphysical dependency than heart pacemakers. More than three million people rely on these devices every day, and 600,000 new implants are performed each year (American Heart Association).

    Another set of cyberphysical interactions occurs to deliver our electricity, which we ambitiously consume at approximately 18,000 terawatt-hours a year. How many of us can go 60 minutes without an electrical charge to our cell phones? Smart meters, not to mention power generation control systems, play a part in delivering this critical energy service.

    Moving forward, we can envision a host of additional cyberphysical systems beyond these two examples, managing and impacting our daily lives. Many have seen self-driving cars, which are expected to grow at 134% CAGR in the next five years (not to mention electric cars, another dependency back on our power generation systems). Or consider home automation systems and maritime cargo monitoring.

    As a security specialist, while I anticipate great reward from these new types of cyberphysical systems, I also envision the need for better protection. The dependency on cyberphysical systems exposes the broader population to a variety of risks.

    Amidst pressures to be “first to market,” it is not uncommon for manufacturers to trade off convenience and price for limited protection. In some cases, it might not even be a conscious design decision. Considering our growing dependency on cyberphysical systems, however, security testing seems an obvious addition (but I will discuss solutions further in my next column).

    In other industries, it is less a rush to the consumer market triggering risks than it is a status quo regarding defining what constitutes “safe.” In the energy sector, offshore oil rigs were once “air gapped” and not connected to other systems.

    Today, devices in sectors as far afield as transportation and government services have typically prioritized physical security implications first.

    Reply
  27. Tomi Engdahl says:

    GM Performs Stealth Update To Fix Security Bug In OnStar
    http://mobile.slashdot.org/story/15/09/10/1539239/gm-performs-stealth-update-to-fix-security-bug-in-onsta

    Back in 2010, long before the Jeep Cherokee thing, some university researchers demonstrated remote car takeover via cellular (old story here). A new Wired article reveals that this was actually a complete exploit of the OnStar system (and was the same one used in that 60 Minutes car hacking episode last year). Moreover, these cars stayed vulnerable for years — until 2014, when GM created a remote update capability and secretly started pushing updates to all the affected cars.

    GM Took 5 Years to Fix a Full-Takeover Hack in Millions of OnStar Cars
    http://www.wired.com/2015/09/gm-took-5-years-fix-full-takeover-hack-millions-onstar-cars/

    When a pair of security researchers showed they could hack a Jeep over the Internet earlier this summer to hijack its brakes and transmission, the impact was swift and explosive: Chrysler issued a software fix before the research was even made public. The National Highway Traffic Safety Administration launched an investigation. Within days Chrysler issued a 1.4 million vehicle recall.

    But when another group of researchers quietly pulled off that same automotive magic trick five years earlier, their work was answered with exactly none of those reactions. That’s in part because the prior group of car hackers, researchers at the University of California at San Diego and the University of Washington, chose not to publicly name the make and model of the vehicle they tested, which has since been revealed to be General Motors’ 2009 Chevy Impala. They also discreetly shared their exploit code only with GM itself rather than publish it.

    Reply
  28. Tomi Engdahl says:

    Evaluating a Role in the IIOT Future
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1327670&

    Standards are needed to assure that the Industrial Internet of Things becomes a reality. In the meantime, everyone is jumping on the bandwagon because the risks of waiting are too high.

    The technology to implement a smart network of sensor data that gives an instant sense of an industrial machine’s or system’s well-being is available. With this technology, you can even project the cost savings from anticipating breakdowns, forgoing unneeded maintenance, not keeping staff on hand just in case, and boosting the efficiency of operations. So, why is it being hyped so much, but not happening?

    It’s really simple. The relevant standards are not agreed upon as yet and therefore it is a market that is being teed up, but waiting for a “go” signal. Sure, there are obstacles such as data security, but the breaches of bank, government, and supposedly safe corporate information have not stopped those systems from being implemented. Nor will it stop the IIOT (Industrial Internet of Things) from becoming a reality. The reason is that the coming market is too lucrative to forgo. IIOT will ultimately determine the leaders of industrial systems for many years into the future.

    Most sensors are analog. Thus, measuring instruments need to make measurements near the sensor to minimize errors. Power requirements must be minimal, signal conditioning included, and voltages beyond the usual 3.3 V must be accommodated. All the typical interactions of noise and cross-coupling among circuits and grounds must be solved. We think that this approach to instrumentation meets the need of this giant market-to-be. But it is a little too early to say for sure.

    There are other hurdles to overcome, but these are poised to be worked out through the IIC (Industrial Internet Consortium), a public-private organization.

    Reply
  29. Tomi Engdahl says:

    Thought Heartbleed was dead? Nope – hundreds of thousands of things still vulnerable to attack
    IoT crawler reveals map of at-risk devices and computers
    http://www.theregister.co.uk/2015/09/15/still_200k_iot_heartbleed_vulns/

    More than a year after it was disclosed, the notorious Heartbleed security flaw remains a threat to more than 200,000 internet-connected devices.

    This according to Shodan, a search tool that (among other things) seeks out internet-of-things (IoT) connected devices. Founder John Matherly posted a map the company built showing where many of the world’s remaining vulnerable devices lay:

    Heartbleed caused a minor panic when it was first uncovered in 2014. The flaw allowed an attacker to exploit weaknesses in the OpenSSL software library to extract passwords and other sensitive information from a targeted device.

    Of the 200,000-plus vulnerable devices, 57,272 were housed in the United States. Germany was second with 21,060 Heartbleed-prone devices and China had 11,300. France was fourth with 10,094 followed by the UK with 9,125.

    “Clearly, some manufacturers and IT teams have dropped the ball, and failed to update vulnerable systems,” noted security consultant Graham Cluley.

    “My bet is that there will always be devices attached to the internet which are vulnerable to Heartbleed.”
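
    Matherly’s figures can be reproduced, at least approximately, with the shodan Python library. The sketch below assumes you have an API key on a plan that permits the “vuln” search filter:

        import shodan

        API_KEY = "YOUR_API_KEY"          # placeholder
        api = shodan.Shodan(API_KEY)

        # Count hosts Shodan flags as still vulnerable to Heartbleed (CVE-2014-0160),
        # broken down by country.
        results = api.count("vuln:CVE-2014-0160", facets=[("country", 5)])
        print("Heartbleed-exposed hosts:", results["total"])
        for bucket in results["facets"]["country"]:
            print(bucket["value"], bucket["count"])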

    Reply
  30. Tomi Engdahl says:

    Strong ARM scoops up Sansa to boost IoT security
    Chipmaker adds Israeli company’s bolt-on protection to its bulging armoured sack
    http://www.theregister.co.uk/2015/07/30/arm_buys_iot_firm_sansa_security/

    Chipmaker ARM has sealed a deal to buy Israeli Internet of Things (IoT) security specialist Sansa Security. Financial terms of the deal, announced Thursday, were not officially disclosed. However, the WSJ previously reported that around $75m-$85m was on the table.

    ARM makes the chips that power the majority of the world’s smartphones. The Sansa acquisition will allow it to add hardware and software-based security features, boosting protection for sensitive data and content on any connected device.

    Sansa’s technology is already deployed across a range of smart connected devices and enterprise systems. The company was previously known as Discretix, prior to rebranding last October, and specialised in embedded security technologies.

    The deal complements the ARM security portfolio, including ARM TrustZone technology and SecurCore processor IP.

    “Any connected device could be a target for a malicious attack, so we must embed security at every potential attack point,” said Mike Muller, CTO of ARM in a statement. “Protection against hackers works best when it is multi-layered, so we are extending our security technology capability into hardware subsystems and trusted software. This means our partners will be able to license a comprehensive security suite from a single source.”

    Reply
  31. Tomi Engdahl says:

    IoT security is RUBBISH says IoT vendor collective
    Online Trust Alliance calls on gadget vendors to stop acting like clowns
    http://www.theregister.co.uk/2015/08/12/iot_security_is_rubbish_says_iot_vendor_collective/

    A vendor group whose membership includes Microsoft, Symantec, Verisign, ADT and TRUSTe reckons the Internet of Things (IoT) market is being pushed with no regard to either security or consumer privacy.

    In what will probably be ignored by the next startup hoping to get absorbed into Google’s Alphabet’s Nest business, the Online Trust Alliance (OTA) is seeking comment on a privacy and trust framework for the Internet of Things.

    Stunt-hacks and bad implementations have demonstrated that IoT security is currently pretty hopeless. The OTA says that won’t change if manufacturers and services keep pumping out gewgaws and gadgets without caring about risks.

    Announcing the framework, the OTA warns against letting the Internet of Things market repeat history and ignore the product lifecycle in their security considerations.

    Reply
  32. Tomi Engdahl says:

    Intel Takes On Car Hacking, Founds Auto Security Review Board
    Chipmaker establishes new Automotive Security Review Board for security tests and audits
    http://www.eetimes.com/document.asp?doc_id=1327696&

    After a summer full of car hacking revelations, Intel today announced the creation of a new Automotive Security Review Board (ASRB), focused on security tests and audits for the automobile industry.

    The potential for modern connected cars to be attacked and remotely controlled by malicious hackers is a topic that has received considerable attention recently from security experts, industry stakeholders, regulators, lawmakers, and consumers.

    Intel Takes On Car Hacking, Founds Auto Security Review Board
    http://www.darkreading.com/vulnerabilities—threats/intel-takes-on-car-hacking-founds-auto-security-review-board/d/d-id/1322172

    Chipmaker establishes new Automotive Security Review Board for security tests and audits

    Demonstrations like one earlier this year where two security researchers showed how attackers could take wireless control of a 2014 Jeep Cherokee’s braking, steering, and transmission control systems, have exacerbated those concerns greatly and lent urgency to efforts to address the problem.

    Intel also released a whitepaper describing a preliminary set of security best practices for automakers, component manufacturers, suppliers, and distributors in the automobile sector.

    ASRB members will have access to Intel automotive’s development platforms for conducting research. Findings will be published publicly on an ongoing basis, Intel said. The member that provides the greatest cybersecurity contribution will be awarded a new car or cash equivalent.

    Intel’s security best practices whitepaper, also released today, identified several existing and emerging Internet-connected technologies in modern vehicles that present a malicious hacking risk.

    Modern vehicles have over 100 electronic control units, many of which are susceptible to threats that are familiar in the cyber world, such as Trojans, buffer overflow flaws, and privilege escalation exploits, Intel said. With cars connected to the external world via Wi-Fi, cellular networks, and the Internet, the attack surface has become substantially broader over the last few years.

    The whitepaper identifies 15 electronic control units that are particularly at risk from hacking. The list includes electronic control units managing steering, engine, and transmission, vehicle access, airbag and entertainment systems. “Current automotive systems are vulnerable,” Intel noted. “Applying best-known practices and lessons learned earlier in the computer industry will be helpful as vehicles become increasingly connected.”

    Concerns have been growing in recent times about critical security weaknesses in many of the Internet-connected components integrated in new vehicles these days. Chrysler, for instance, recalled 1.4 million vehicles after two security researchers showed how they could bring a Jeep Cherokee traveling at 70 mph to a screeching halt by hacking into its braking system from 10 miles away.

    A report released by Senator Edward Markey (D-MA) in February, based on input from 16 major automakers, revealed how 100 percent of new cars have wireless technologies that are vulnerable to hacking and privacy intrusions. The report found that most automakers were unaware or unable to say if their vehicles had been previously hacked while security measures to control unauthorized access to control systems were inconsistent.

    Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk
    http://www.markey.senate.gov/imo/media/doc/2015-02-06_MarkeyReport-Tracking_Hacking_CarSecurity%202.pdf

    Reply
  33. Tomi Engdahl says:

    Intel enlists ‘top security talent’ to eradicate smart-car cyberattacks
    Can Intel’s review board help automakers improve future vehicle defense against cyberattacks?
    http://www.cnet.com/news/intel-launches-automotive-security-board-to-tackle-connected-car-security-risks/#ftag=CADf328eec

    Intel has announced the launch of the Automotive Security Review Board (ASRB) to mitigate future cybersecurity risks to vehicles and drivers.

    Announced on Sunday, the semiconductor firm said the ASRB will encompass “top security talent” worldwide with a particular bent toward physical system security.

    Members of the board will perform ongoing security tests and audits to form best practice recommendations and design suggestions to benefit automakers, which will then help keep drivers and passengers safe in modern vehicles.

    In recent news, Fiat Chrysler hit the spotlight after severe vulnerabilities were discovered within SUVs offered by the automaker. The models in question made use of the Uconnect connected car system, which was found to be vulnerable to remote attacks leading to engine control, placing drivers in danger.

    Over 1.4 million vehicles have been recalled in order to patch the flaw.

    Intel will provide the board with the company’s advanced development platforms on which to conduct research.

    Reply
  35. Tomi Engdahl says:

    What it takes to secure an SoC
    http://www.edn.com/electronics-blogs/eye-on-iot-/4440308/What-it-takes-to-secure-an-SoC-?_mc=NL_EDN_EDT_EDN_today_20150916&cid=NL_EDN_EDT_EDN_today_20150916&elq=f9401654c58b44c68da10dc9b5d9eb4b&elqCampaignId=24774&elqaid=28104&elqat=1&elqTrackId=1bb2c393b6794662ac9ce748bc8e1e52

    In the era of Internet-of-Things (IoT), security has become one of the most vital parts of a System-on-Chip (SoC). Secured SoCs are used to provide authentication, confidentiality, integrity, non-repudiation, and access control to the system (hardware and software). Here are some of the multiple architectural techniques used to develop a secure system.

    Typically in a secured SoC, four key functionalities are desired: secure booting, secured memory, run time data integrity check, and a central security breach response.

    Secure Booting

    Boot is an important and vulnerable part of an SoC from a security point of view. If a hacker is able to control the booting process of the SoC, then all other security implementations can be bypassed to gain unauthorized access. SoC architects develop multiple techniques to provide security during the SoC boot process.

    Secured Memory

    The memory in an SoC can be secured to preserve sensitive data such as cryptographic keys, unique IDs, passwords, and the like.

    The memory can be divided into multiple partitions, each with a different set of access controls.

    Run-Time Data Integrity Check

    A run-time data integrity check is used to ensure the integrity of the peripheral memory contents during run time execution. The secure booting sequence generates a reference file that contains the hash value of the contents of individual memory blocks stored in a secured memory. In the run-time mode, the integrity checker reads the contents of a memory block, waits for a specified period of time, and then reads the contents of another memory block. In the process, the checker also computes the hash values of the memory blocks and compares them with the contents of the reference file generated during boot time.

    In case of a mismatch between the two hash values, the checker reports a security intrusion to a central unit that decides the action to be taken based on the security policy.
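
    A minimal sketch of such a run-time integrity check, with made-up file paths standing in for protected memory regions and an arbitrary check interval:

        import hashlib
        import time

        PROTECTED_REGIONS = ["/lib/firmware/app.bin", "/etc/device.conf"]  # hypothetical

        def sha256_of(path):
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

        # Reference hashes would normally be produced by the secure boot sequence.
        reference = {path: sha256_of(path) for path in PROTECTED_REGIONS}

        def report_breach(path):
            # Hand off to the central security breach response unit.
            print("SECURITY EVENT: contents of", path, "changed at run time")

        while True:
            for path in PROTECTED_REGIONS:
                if sha256_of(path) != reference[path]:
                    report_breach(path)
            time.sleep(300)   # re-check every five minutes (interval is arbitrary)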

    Central Security Breach Response Unit

    This hardware module can be viewed as the SoC’s central reporting unit for security-related events such as software intrusions, voltage tampering, and the like. This security related event information allows the Security Breach Response Unit to determine the next state of the SoC.

    Conclusion

    SoC security is paramount for the safe and reliable operation of IoT connected devices. The same capability that enables an SoC to perform its tasks must also enable it to recognize and handle threats. Fortunately, this does not require a revolutionary approach, but rather an evolution of the existing architecture.

    Reply
  36. Tomi Engdahl says:

    Schneider patches yet ANOTHER dumb vuln
    Smart buildings, dumb vulns, does it ever change?
    http://www.theregister.co.uk/2015/09/17/schneider_patches_another_vuln/

    Schneider Electric has pushed out a patch to an industrial control system which – stop me if you’ve heard this before – passes credentials between client and server in plain text.

    CVE-2015-3962 applies to the company’s Struxureware Building Expert, prior to version 2.15, and the company has released an update to the system (outlined in its advisory, PDF here).

    The vulnerable system handles air-conditioning, lighting, and metering.

    The ICS-CERT advisory accompanying the vuln says it hasn’t been exploited, which The Register would regard as astonishingly good fortune, since if someone obtained credentials and signed in using a valid admin user ID, how would anyone know?

    Advisory (ICSA-15-258-01)
    Schneider Electric StruxureWare Building Expert Plaintext Credentials Vulnerability
    https://ics-cert.us-cert.gov/advisories/ICSA-15-258-01

    Reply
  37. Tomi Engdahl says:

    Why is the smart home insecure? Because almost nobody cares
    The miserable life of the security veep
    http://www.theregister.co.uk/2015/08/27/smart_home_insecure/

    Except, of course, that wherever you see “Smart Home”, “Internet of Things”, “cloud” and “connected” in the same press release, there’s a security debacle coming. It might be Nest, WeMo, security systems, or home gateways – but it’s all the same.

    Why?

    “All we want to do is integrate the experience of the bedside A.M. clock-radio into a fully-social cloud platform to leverage its audience reach and maximise the effectiveness of converting advertising into a positive buying experience”,

    What ships is a security architecture re-implemented in half an hour using a deprecated version of OpenSSL and a self-signed certificate with hard-coded crypto credentials.

    Reply
  38. Tomi Engdahl says:

    Centrify:
    IoT, the “Illusion of Trust” — Many businesses are placing trust in the cloud like they did for internal networks, without proper consideration for the challenges and deeper issues at hand. But the added convenience of cloud applications also comes with some serious potential downsides.

    IoT, the “Illusion of Trust” — Moving Trust from the Network to Users and Devices
    http://blog.centrify.com/internet-of-things-trust-cloud/

    Our always on, always connected world has fundamentally changed how businesses operate. Communicating with customers and employees will never be the same again; cloud solutions bring many benefits by making things easier for businesses, and the shift is happening whether we like it or not.

    But many businesses are placing trust in the cloud like they did for internal networks, without proper consideration for the challenges and deeper issues at hand. The added convenience of cloud applications also comes with a potential downside, such as potential security threats and surrender of control.

    Many people are familiar with the acronym “IoT,” and we understand it to mean the Internet of Things. This is a catch-all term nowadays for all things cloud and smart connected devices. We believe there’s another meaning for these three letters — “Illusion of Trust.” We call it the Illusion of Trust because business owners don’t realize that cloud security is an issue. When businesses move their intranet services and data to cloud providers, they are likely placing “blind trust” in a traditional network security model that is not entirely reliable anymore.

    Leading organizations like Google, Coca-Cola, Verizon Communications Inc. and Mazda Motor Corp, however, are showing that when they move their corporate applications to the Internet, they also take a new approach to enterprise security. It means flipping common corporate security practice on its head, shifting away from the idea of a trusted, privileged internal corporate network secured by perimeter devices such as firewalls, in favor of a model where corporate data can be accessed from anywhere with the right device and user credentials.

    The new enterprise security model should hence assume that the internal network is as dangerous as the Internet. Access should depend on the employee’s device and user credentials.

    With this approach, trust is moved from the network level to the device level. Employees can only access corporate applications with a device that is procured and actively managed by the company.

    Then comes a cloud identity service that performs single sign-on: a user authentication portal that validates employees against the user and group databases, checks device security posture against the device inventory database, then generates short-lived authorizations for access to specific resources and steps up to strong authentication using mobile MFA for critical resources.
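
    A bare-bones sketch of that flow, using the PyJWT library; the validation checks, signing key, and five-minute token lifetime are placeholders rather than any vendor’s actual implementation:

        import time
        import jwt   # PyJWT

        SIGNING_KEY = "replace-with-a-real-secret"                          # placeholder

        def user_is_valid(user):        return user in {"alice", "bob"}     # stub directory check
        def device_posture_ok(device):  return device.endswith("-managed")  # stub inventory check
        def resource_is_critical(res):  return res in {"payroll", "hr"}     # stub policy
        def mfa_passed(user):           return True                         # stub MFA step-up

        def issue_access_token(user, device, resource):
            if not user_is_valid(user) or not device_posture_ok(device):
                raise PermissionError("user or device failed validation")
            if resource_is_critical(resource) and not mfa_passed(user):
                raise PermissionError("step-up MFA required")
            now = int(time.time())
            claims = {"sub": user, "dev": device, "aud": resource,
                      "iat": now, "exp": now + 300}    # short-lived: five minutes
            return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

        print(issue_access_token("alice", "laptop-42-managed", "wiki"))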

    As companies adopt mobile and cloud technologies, the perimeter is becoming increasingly difficult to enforce, and it has made control and security harder — business owners are demanding solutions from their IT partners and providers, and this is where cloud identity providers play an important role to win the trust of businesses and cloud application providers.

    Reply
  39. Tomi Engdahl says:

    LinkedIn infosec bod proffers DIY Ubiquiti fix for automation zero day
    WiFi men prefer blog-snuffing to patching.
    http://www.theregister.co.uk/2015/09/22/linkedin_infosec_bod_proffers_diy_ubiquiti_fix_for_automation_zero_day/

    Luca Carettoni, an application security researcher at LinkedIn, has proffered a homebrew patch to close off a dangerous zero day hole that allows remote attackers to hijack home automation Ubiquiti mFi controllers.

    The holes in the automation systems remain officially unpatched despite working exploit code and vulnerability details being published and remaining accessible.

    Carettoni says the patch is a simple fix that Ubiquiti should have rushed out after the initial quiet disclosure was made in July.

    “It is reasonable to assume that the security flaw can be easily abused by unsophisticated attackers,” Carettoni says

    “… a quick search on Google is sufficient to find the exploit for this bug. Despite the public exposure, Ubiquiti has yet to publish a patch.

    “After waiting patiently for a few weeks, I created my own patch.”

    The mFiPatchMe fix took the application security man about an hour to brew without having any knowledge of the Ubiquiti codebase.

    SecuriTeam researchers describe how the mFi authentication mechanism can be bypassed.

    “Ubiquiti Networks mFi Controller Server installs a web management interface which … offers a login screen where only the administrator user can monitor and control remotely the configured devices,” they say.

    “Because of two errors inside the underlying (redacted) class, it is possible to bypass the authentication mechanism.

    “… a remote attacker could then login and perform unauthorised operations as administrator through the secure web interface.”

    Reply
  42. Tomi Engdahl says:

    Does IoT Data Need Special Regulation?
    http://news.slashdot.org/story/15/09/24/0136258/does-iot-data-need-special-regulation

    As part of the UK’s Smart Meter Implementation Programme, Spain’s Telefonica is deploying an M2M solution, using its own proprietary network, to collect and transmit data from 53 million gas and electricity smart meters. Perhaps the most troubling issue: why has the UK government awarded the contract to a private telecom that uses a proprietary network rather than to an independent organization that uses freely available spectrum and open source solutions? Those smart meters are supposed to be in operation for more than three decades, and rely on a network that could cease to exist.

    Does IoT Data Need Special Regulation?
    http://www.citiesofthefuture.eu/does-iot-data-need-special-regulation/

    Do you know that one telecom will collect, consolidate, and transmit, using its own M2M network, two-thirds of smart meter data in the UK? What assurance do users have that their data, which can be collected several times a day, will not end up being misused without their knowledge?

    Computers, smartphones, tablets, wearables — indeed anything with a CPU — produce data as a natural by-product. Even sensors, feature phones, and connected devices produce data. By 2020, studies project that more than 60 billion devices will be connected.

    All that data gets collected somewhere. The question is who gets the right to access and analyse it. Data collected through computers and phones, for instance, are subject to limited regulation. But data collected through IoT devices are still part of the Wild West.

    One example is connected smart meters. Last week, I saw a demonstration at the IoT Solutions World Congress in Barcelona where all data from the water, electricity and gas usage of a home could be consolidated in a small box and transmitted together using WiFi or a cellular network.

    In fact, Telefonica in the UK is deploying a similar solution involving gas and electricity meters as part of the UK’s Smart Meter Implementation Programme, and will connect 53 million meters at 30 million domestic and smaller non-domestic properties by 2020, covering two-thirds of the UK market.

    But who controls all the data that Telefonica collects?

    But perhaps what’s more troubling is that Telefonica is using proprietary hardware and software to manage the meters and collect the data; M2M services require a network designed for purpose. So why has the UK government awarded the contract to a private telecom that uses a proprietary network rather than to an independent organization that uses freely available spectrum and open source solutions?

    This topic of privacy and who owns the data has been at the top of the agenda of every IT conference I have attended in the last few years. The questions of data ownership, the right to delete and “be forgotten”, and the price we pay for so called “free internet services” are always part of a heated debate.

    “The people who have the most valuable data are the banks, the telephone companies, the medical companies, and they’re very highly regulated industries. As a consequence they can’t really leverage that data the way they’d like to unless they get buy-in from both the consumer and the regulators.”

    By contrast, Internet giants like Google and Facebook operate in a largely unregulated environment. “They’re slowly, slowly coming around to the idea that they’re going to have to compromise on” issues of data control, says Professor Pentland.

    This is where the regulators come in. The European Union is taking an active role in protecting its citizens’ privacy, perhaps because it has little faith that the industry will regulate itself.

    But the explosion of connected devices, especially IoT ones that collect people’s data, is creating an Orwellian state, where all our activities are constantly monitored, analyzed and archived for further cross-reference. Many experts and organizations are already warning people about the dangers of sharing information they may not want exposed in the future.

    Reply
  43. Tomi Engdahl says:

    The message to take from these security reports is that it’s time for the whole ICS industry to step up to the challenge of security. Things to do, according to Ahlberg, include:

    Put reporting mechanisms in place to detect faults and attack attempts
    Become more friendly to security researchers who are trying to identify vulnerabilities so that they can be closed
    Figure out and implement patching systems that will continue to improve security on systems in the field indefinitely. “If a system is once installed and you don’t touch it again,” said Ahlberg, “it becomes incredibly vulnerable over time.”

    Ahlberg acknowledges that these efforts will add to the cost of new systems as well as representing a major expense in field-upgrading installed systems. As a result, he is calling for industry-wide collaboration along with the help of governments to deal with legacy systems. “No one actor can fix 25 years of buildup. This is going to take real work.”

    “The good news is that other industries have done this,” Ahlberg added, “and built up programs to handle ongoing security improvement. This will give the ICS industry a head start.”

    Source: http://www.eetimes.com/document.asp?doc_id=1327785&page_number=2
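
    To illustrate the third point, field patching only improves security if devices can tell genuine updates from tampered ones. The sketch below assumes RSA-PSS signatures and the Python cryptography package; the file names and key layout are hypothetical and do not describe any particular vendor’s update mechanism.

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.exceptions import InvalidSignature

    def verify_update(image_path, signature_path, pubkey_path):
        # The public key is assumed to be an RSA key baked into the shipped firmware.
        with open(pubkey_path, "rb") as f:
            public_key = serialization.load_pem_public_key(f.read())
        with open(image_path, "rb") as f:
            image = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()
        try:
            public_key.verify(
                signature,
                image,
                padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                            salt_length=padding.PSS.MAX_LENGTH),
                hashes.SHA256(),
            )
            return True          # signature is valid; safe to apply the image
        except InvalidSignature:
            return False         # reject the update and keep the current firmware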

    Reply
  44. Tomi Engdahl says:

    Thousands of ‘directly hackable’ hospital devices exposed online
    Hackers make 55,416 logins to MRIs, defibrillator honeypots
    http://www.theregister.co.uk/2015/09/29/thousands_of_directly_hackable_hospital_devices_found_exposed/

    Derbycon Thousands of critical medical systems – including Magnetic Resonance Imaging machines and nuclear medicine devices – that are vulnerable to attack have been found exposed online.

    Security researchers Scott Erven and Mark Collao found, for one example, a “very large” unnamed US healthcare organization exposing more than 68,000 medical systems. That US org has some 12,000 staff and 3,000 physicians.

    Exposed were 21 anaesthesia, 488 cardiology, 67 nuclear medical, and 133 infusion systems, 31 pacemakers, 97 MRI scanners, and 323 picture archiving and communications gear.

    The healthcare org was merely one of “thousands” with equipment discoverable through Shodan, a search engine for things on the public internet.

    Erven, an associate director at Protiviti who has five years of experience specifically securing medical devices, said critical hospital machinery is at the fingertips of miscreants.

    “Once we start changing [Shodan search terms] to target speciality clinics like radiology or podiatry or paediatrics, we ended up with thousands with misconfiguration and direct attack vectors,” Erven said.

    “Not only could your data get stolen but there are profound impacts to patient privacy.”

    “[Medical devices] are all running Windows XP or XP service pack two … and probably don’t have antivirus because they are critical systems.”

    Executing custom payloads, establishing shells, and lateral pivoting within a network, are all possible, he said.
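
    The discovery step described above boils down to a banner search. As a hedged sketch of how an organization might audit its own exposure with the official Shodan Python library, where the API key, organization name, and query string are placeholders:

    import shodan

    API_KEY = "YOUR_SHODAN_API_KEY"        # placeholder

    def find_exposed(query):
        api = shodan.Shodan(API_KEY)
        results = api.search(query)
        print(f"{results['total']} hosts match {query!r}")
        for match in results["matches"]:
            # ip_str, port and the banner excerpt are enough to triage what is exposed
            print(match["ip_str"], match.get("port"), (match.get("data") or "")[:80])

    # find_exposed('org:"Example Hospital" port:23')   # hypothetical self-audit query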

    Proven attacks

    The security men showcased the real-world risks to exposed hospital equipment after their “real life” MRI and defibrillator machine honeypots attracted tens of thousands of login attempts from miscreants on the internet.

    In total, the machines built to mimic actual equipment attracted a whopping 55,416 successful SSH and web logins and some 299 malware payloads.

    Attackers also popped the devices with 24 successful exploits of MS08-067, the remote code execution hole tapped by the ancient Conficker worm.

    Collao said attackers did not appear to realize the machines they popped were would-be critical medical devices.

    “They come in, do some enumeration, drop a payload for persistence and connect to a command and control server,” Collao said.

    “These devices are getting owned repeatedly now that more hospitals are WiFi-enabled and no longer support arcane protocols.”

    The honeypots ran for about six months and mimicked devices “to a tee” complete with security vulnerabilities. The pair used Shodan to find devices on which to base their honeypots.
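
    For illustration only, here is a stripped-down listener of the kind a honeypot builds on: it accepts TCP connections on a Telnet-style port and logs who connected and what they sent first. The researchers’ honeypots mimicked real devices far more completely; the port number here is arbitrary.

    import socket
    import datetime

    def log_attempts(host="0.0.0.0", port=2323):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            while True:
                conn, addr = srv.accept()
                with conn:
                    conn.settimeout(5)
                    try:
                        data = conn.recv(256)
                    except socket.timeout:
                        data = b""
                    # Record the timestamp, source address and first bytes received
                    print(datetime.datetime.utcnow().isoformat(), addr[0], data[:64])

    # log_attempts()   # run only on infrastructure you are authorized to expose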

    Reply
  45. Tomi Engdahl says:

    “If you change the default password, support will stop” – hospital equipment security is leaking

    Doctors and nurses are not necessarily the only ones with access to, for example, MRI or X-ray results.

    Medical devices are in real danger of being compromised. The issue was investigated by researchers Scott Erven and Mark Collao, who presented their findings at the DerbyCon conference earlier this week.

    More and more healthcare equipment is connected to the network so that the data it produces can feed electronic health record systems. This includes, for example, MRI and X-ray imaging equipment and drug infusion pumps. Besides the breach of privacy, it is theoretically possible that patients face real danger if cybercriminals alter examination results and treatment plans through hijacked equipment.

    The researchers searched for healthcare equipment with the Shodan search engine, which is designed to find devices connected to the internet. According to them, some systems and devices are connected to the network by design, others due to improper configuration. In addition, many devices still use the manufacturer’s default passwords.

    An alarming discovery was that different models of a device used the same default passwords. In some cases, manufacturers even warned users that changing the default passwords could void their support.

    Source: http://www.tivi.fi/Kaikki_uutiset/jos-vaihdat-oletussalasanan-tuki-lakkaa-sairaalalaitteiden-tietoturva-vuotaa-6001031

    Reply
  46. Tomi Engdahl says:

    Researchers: Thousands of Medical Devices Are Vulnerable To Hacking
    http://it.slashdot.org/story/15/09/30/2114230/researchers-thousands-of-medical-devices-are-vulnerable-to-hacking

    At the DerbyCon security conference, researchers Scott Erven and Mark Collao explained how they located Internet-connected medical devices by searching for terms like ‘radiology’ and ‘podiatry’ in the Shodan search engine. Some systems were connected to the Internet by design, others due to configuration errors. And much of the medical gear was still using the default logins and passwords provided by manufacturers.

    Thousands of medical devices are vulnerable to hacking, security researchers say
    http://www.itworld.com/article/2987812/thousands-of-medical-devices-are-vulnerable-to-hacking-security-researchers-say.html

    The security flaws put patients’ health at risk

    Next time you go for an MRI scan, remember that the doctor might not be the only one who sees your results.

    Thousands of medical devices, including MRI scanners, x-ray machines and drug infusion pumps, are vulnerable to hacking, creating significant health risks for patients, security researchers said this week.

    The risks arise partly because medical equipment is increasingly connected to the Internet so that data can be fed into electronic patient records systems, said researcher Scott Erven, who presented his findings with fellow researcher Mark Collao at the DerbyCon security conference.

    Besides the privacy concerns, there are safety implications if hackers can alter people’s medical records and treatment plans, Erven said.

    “As these devices start to become connected, not only can your data get stolen but there are potential adverse safety issues,” he said.

    The researchers located medical devices by searching for terms like “radiology” and “podiatry” in Shodan, a search engine for finding Internet-connected devices.

    Some systems were connected to the Internet by design, others due to configuration errors.

    The researchers studied public documentation intended to be used to set up the equipment and found some frighteningly lax security practices.

    The same default passwords were used over and over for different models of a device, and in some cases a manufacturer warned customers that if they changed default passwords they might not be eligible for support. That’s apparently because support teams needed the passwords to service the systems.

    Reply
  47. Tomi Engdahl says:

    MEDJACK: Hackers hijacking medical devices to create backdoors in hospital networks
    http://www.itworld.com/article/2932539/security/medjack-hackers-hijacking-medical-devices-to-create-backdoors-in-hospital-networks.html

    Attackers are infecting medical devices with malware and then moving laterally through hospital networks to steal confidential data, according to TrapX’s MEDJACK report.

    After the Office of Personnel Management breach, medical data was labeled as the “holy grail” for cybercriminals intent on espionage. “Medical information can be worth 10 times as much as a credit card number,” reported Reuters. And now to steal such information, hospital networks are getting pwned by malware-infected medical devices.

    TrapX, a deception-based cybersecurity firm, released a report about three real-world targeted hospital attacks which exploited an attack vector the researchers called MEDJACK, for medical device hijack. “MEDJACK has brought the perfect storm to major healthcare institutions globally,” they warned. “Medical devices compromised by the MEDJACK attack vector may be the hospital’s ‘weakest link in the chain’.”

    Reply
  48. Tomi Engdahl says:

    Jai Vijayan / darkREADING:
    Researchers find curious Linux.WiFatch malware on tens of thousands of routers and IoT devices that appears to be securing infected systems

    And Now A Malware Tool That Has Your Back
    http://www.darkreading.com/vulnerabilities---threats/and-now-a-malware-tool-that-has-your-back/d/d-id/1322451

    In an unusual development, white hat malware is being used to secure thousands of infected systems, not to attack them, Symantec says.
    Security researchers at Symantec have been tracking a malware tool that, for a change, most victims wouldn’t actually mind having on their systems (or almost, anyway).

    The threat dubbed Linux.Wifatch compromises home routers and other Internet-connected consumer devices. But unlike other malware, this one does not steal data, snoop silently on victims, or engage in other similar malicious activity.

    Instead, the author or authors of the malware appear to be using it to actually secure infected devices. Symantec believes the malware has infected tens of thousands of routers and other IoT systems around the world. Yet, in the two months that the security vendor has been tracking Linux.Wifatch, it has not seen the malware tool being used maliciously even once.

    Wifatch has one module that attempts to detect and remediate any other malware infections that might be present on a device that it has infected. “Some of the threats it tries to remove are well known families of malware targeting embedded devices,” Ballano wrote.

    Another module appears designed specifically to protect Dahua DVR and CCTV systems. The module allows Wifatch to set the configuration of the device so as to cause it to reboot every week, presumably as a way to get rid of any malware that might be present or running on the system.

    Most Wifatch infections that Symantec has observed have been over Telnet connections to IoT devices with weak credentials, according to the vendor.

    In keeping with its vigilante role, once Wifatch infects a device it tries to prevent other malicious attackers from doing the same by shutting down the Telnet service. It also connects to a peer-to-peer network to receive periodic updates.

    Wifatch is mostly written in Perl and targets IoT devices based on ARM, MIPS and SH4 architectures. The hitherto white hat malware tool ships with a separate static Perl interpreter for each targeted architecture.

    “Whether the author’s intentions are to use their creation for the good of other IoT users—vigilante style—or whether their intentions are more malicious remains to be seen,” the researcher said.

    Router infections can be hard for end users to detect. However, it is possible to get rid of Wifatch on an infected device simply by rebooting it. Users should also consider updating their device software and changing default passwords on home routers and IoT devices, Ballano said.
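
    A quick way to act on that advice is to check, from inside the LAN, whether the router still exposes Telnet (Wifatch’s usual entry point) after a reboot and firmware update. A minimal sketch follows; the gateway address is a placeholder.

    import socket

    def telnet_open(host, port=23, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True           # something is listening on Telnet; investigate further
        except (OSError, socket.timeout):
            return False              # connection refused or timed out; the port looks closed

    # print(telnet_open("192.168.1.1"))   # hypothetical gateway address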

    Reply
  49. Tomi Engdahl says:

    NXP-Freescale: Merger of ‘Compatible’ Giants on Track
    http://www.eetimes.com/document.asp?doc_id=1327868

    NXP’s pending acquisition of Freescale is “on track,” but awaiting regulatory approvals

    IoT and security
    Wainwright believes automotive and the Internet of Things (IoT) will bring significant opportunities to the merged entity. At Thursday’s “Designing with Freescale” event in Paris where more than 80 demos were presented, he cautioned, “What can derail IoT is security.”

    Referring to data in the United States, where 70 percent of connected devices have no password-protected connectivity, he said Freescale is positioned and well-prepared to help IoT startup companies with security. “We offer trusted architecture to end node, gateway and cloud” complete with cryptographic security protocols.

    Reply
