Engineering teams developing vehicles for production are juggling multiple priorities at once. They’re designing and testing to meet rigorous safety standards and solving complex engineering problems, all while trying to bring their vehicle to market quickly. It would be devastating to see that hard work compromised by leaked IP or slowed down by IP that is ultimately controlled by someone else. Modern control development projects are reshaping the way we protect intellectual property, and engineers are looking for ways to take better control of their projects. Here’s a look at three situations in which your IP can be compromised, and how to protect it.
1. When You Share IP During The RFQ
For many companies, it’s pretty common at the beginning of a project to get bids from three or more suppliers as part of the Request for Quote (RFQ) process. If you’re developing a complex vehicle system, for example, it may be necessary to share details on application functionality requirements so that your supplier can provide an accurate quote on a turn-key solution. You may also have to provide the suppliers with specific details about your solution, including how you are solving the problem for your customer.
Here are the questions that should keep you up at night:
- What happens to the IP shared with any suppliers I didn’t choose?
- What will they do with the information we provided?
- Will they use our IP to try to win business from my competitors?
2. When You Outsource IP Creation
Sometimes it’s necessary to take your existing system and tweak a small aspect – just to see how the change might improve things. That might create the need to outsource IP creation to a third party. This can then involve nontrivial cost and time if you have to go through requirements, documentation, RFQ, quoting, and scheduling efforts with the third party.
This also means you’ve lost control.
Future innovation will be limited, and you’ll lose the agility needed for rapid, same-day changes, a competitive edge that you cannot afford to give up.
That should make you wonder:
- How long will it take to get our changes made?
- What will this cost us?
- How can we take this back in-house?
- What will this supplier do with the IP they created?
3. When Your Supplier Owns Or Holds Your IP
Software content such as control algorithms makes up an increasing portion of the value of a vehicle or system. This trend will continue to accelerate as vehicles become more automated. In fact, according to a recent report by McKinsey, the average share of value represented by the software in a modern car is expected to reach 30% by 2030.
As we have seen with high-profile hacks, security vulnerabilities, once exposed, can cause tremendous amounts of negative publicity, economic loss, and potentially physical harm. Allowing implementation details and security credentials to be outside of your control exposes you to major risks.
If part of the new frontier of vehicle design is IP protection of critical software, what happens when that IP, which you rely on, is held by a third party?
Do you want to put that value at risk of falling into your competitors’ hands?
Perhaps the most limiting of all is when a turn-key supplier owns your application IP and your hardware business. This can limit your choices of hardware suppliers, which can cost you more and limit your control of supply chain issues. You’ll also lose control of your schedule as they may have different priorities or resource constraints. Finally, they can require significant money for each change you request — limiting your ability to serve your markets in a timely manner.
The bottom line: Waiting on a supplier to implement changes means you can’t effectively innovate and will get left behind by competitors who can.
We Can Help You Protect Your IP
Your business moves at its own pace and has its own priorities. Relinquishing control of your IP and software build process to a supplier means that you may have to move at their pace and your priorities may not match up with theirs. The time and cost to make even small changes to your control system software can become a major limiting factor when third parties are involved – especially if it is not a priority for them.
By using Embedded Model-Based Design with Raptor / Simulink, you’ll be in control of the software model that produces the executable application. You’ll also be in control of the ‘build button’ – meaning you can create a new executable application any time you need to make a software tweak. This means you hold the IP and don’t need to call a supplier and start a new project each time you have a change to make or a new model year.
The Raptor toolchain provides end-to-end control of your project because it allows you to keep your IP in-house, which means you can sleep better at night and make changes on the fly.
If you’re ready to take control of your machine and your IP get in touch with us to learn more about Raptor.
With the recent introduction of the new Raptor GCM48-5605B-1906, containing 6 separate CAN channels, we have been receiving significantly more interest in building gateway software within Raptor. Gateway applications bridge the messaging between communication links in a system, where the gateway module may or may not modify the messages, depending on its function and requirements.
Gateways are useful for solving many system issues, such as vehicle architecture challenges, backwards-compatibility concerns, or even creating isolation for security reasons. In this user tip, we provide recommendations for rapid development and for efficient implementation and operation.
Limit Message Gating to Specific Messages If Possible
Often the limiting factor for a gateway is simply the number of messages it can pass between the different communication links (CAN buses), especially when processing is done on the messages. If you only need to pass through a limited set of messages, restricting your implementation to those will significantly reduce the workload on the ECU. You can use ID masks to filter the messages received by a CAN block. In this example, only message ID 0x4D1 will be received by this CAN RX Raw block.
If you need to gateway several messages (or an entire bus of data), leave the mask open (0x0) and use CAN Rx/Tx Raw blocks.
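The mask-based filtering described above can be sketched in a few lines. This is a hedged illustration of typical CAN acceptance-mask semantics, not the exact bit convention of the Raptor CAN RX Raw block: only the ID bits selected by the mask are compared against the filter ID, so an open mask of 0x0 accepts every ID.

```python
def accepted(rx_id: int, filter_id: int, mask: int) -> bool:
    """Return True if rx_id passes the acceptance filter.

    Conventional mask semantics (an assumption for illustration):
    only the ID bits selected by the mask are compared.
    """
    return (rx_id & mask) == (filter_id & mask)

# A full 11-bit mask (0x7FF) with filter 0x4D1: only ID 0x4D1 is received.
assert accepted(0x4D1, 0x4D1, 0x7FF)
assert not accepted(0x4D2, 0x4D1, 0x7FF)

# An open mask (0x0) accepts every ID -- useful for gatewaying a whole bus.
assert accepted(0x123, 0x4D1, 0x0)
```

The fewer IDs that survive the mask, the less work the gateway loop has to do per bus cycle.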
Typically, to ensure that messages are transferred as they arrive, you will want to use CAN Mailbox Triggers, placing the blocks above inside a triggered subsystem connected to the Mailbox Trigger.
Use a Beacon Message to Confirm Configuration and Wiring
When setting up a system with many CAN channels, such as the GCM48 with six, it can be useful to verify which CAN channel is connected where on each ECU just by looking at the traffic on the bus. For this reason, we often add a calibratable ‘beacon’ message. When the beacon is turned on and you view the CAN traffic in Raptor-CAN or CAN King with ASCII display mode, you will see ‘__CAN1__’, for instance. Allocating distinct beacon CAN identifiers among your modules allows you to confirm your configuration and wiring.
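As a sketch of the idea (the helper below is hypothetical, not a Raptor API), the beacon is just an 8-byte ASCII payload that reads back clearly in a trace viewer:

```python
def beacon_payload(channel: int) -> bytes:
    """Build an 8-byte payload that displays as '__CANx__' in an
    ASCII trace view. Hypothetical helper for illustration only."""
    text = f"__CAN{channel}__"          # e.g. '__CAN1__'
    if len(text) != 8:                  # classic CAN frames carry up to 8 data bytes
        raise ValueError("channel number must be a single digit")
    return text.encode("ascii")

# Each channel broadcasts its own beacon, so a glance at the trace
# confirms which physical bus is wired to which ECU channel.
assert beacon_payload(1) == b"__CAN1__"
```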
Gateway Application Example
This section walks through a gateway example; a sample model can be downloaded here so you can follow along. The overall gateway structure is detailed below, alongside useful tips.
In many applications several messages (or an entire bus) need to be gatewayed, and specific messages need to be intercepted so specific signals can be measured or modified before being transmitted. The example below shows one method to implement this type of gateway.
First, use a CAN trigger block to trigger a subsystem with a CAN Rx Raw block with an open mask.
Tip: To help with troubleshooting, add counters in Rx/Tx Triggered subsystems to verify the code is executing during runtime.
In this architecture all message IDs are received and logic in the ‘Intercept Certain IDs’ subsystem determines if each ID received will be intercepted or transmitted on the outgoing bus unchanged.
The CAN2_CAN3_InterceptMsgID corresponds to a predefined message ID that is to be intercepted. When this ID is received, the unchanged-message subsystem above is not enabled; instead, the Modify Msg subsystem is enabled. In this example there is only one predefined ID that each incoming ID is compared to, but this approach could be replicated with as many message IDs as needed. This logic also has a CustomID feature, which allows the user to select specific IDs at runtime to block or to input custom values; this will be explained in more depth in a later section.
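The intercept-or-pass-through decision can be sketched as follows. This is a simplified, hypothetical model of the subsystem logic: the intercept ID value (0x123) and the modify function are stand-ins, not values from the example model.

```python
INTERCEPT_ID = 0x123  # stands in for CAN2_CAN3_InterceptMsgID (assumed value)

def gateway(frame, transmit, modify):
    """Forward a received frame unchanged unless its ID is marked for
    interception, in which case the modify path runs instead."""
    msg_id, data = frame
    if msg_id == INTERCEPT_ID:
        transmit((msg_id, modify(data)))   # 'Modify Msg' path
    else:
        transmit((msg_id, data))           # pass-through path

sent = []
gateway((0x200, b"\x01"), sent.append, lambda d: d)      # forwarded unchanged
gateway((0x123, b"\x01"), sent.append, lambda d: b"\xFF")  # intercepted and rewritten
assert sent == [(0x200, b"\x01"), (0x123, b"\xFF")]
```

Replicating the comparison for several IDs (or a calibratable list of them) extends this to multiple intercepts.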
The ‘Modify Msg’ subsystem contains the logic below. This is the next level of determining if the intercepted message is a hard coded message or a custom ID selected by the user at runtime. For now we will focus on the hard coded message.
In the ‘Hardcoded Message Intercepts’ subsystem is the logic that actually intercepts the values. An efficient approach is to use CAN Rx/Tx Raw blocks with DBC Unpack/Pack blocks. Alternatively, the CAN Rx/Tx Message blocks can be used, but they require more ECU overhead.
Details to note here are the _InterceptEnbl adjustment and the _numberTimesTriggered measurement. In this case, the adjustment is used to intercept the existing data for each signal and replace it with a control value. The _numberTimesTriggered measurement is especially helpful for verifying that this subsystem is running, since it only runs when the calibrated ID is received.
- If the ECU supports receive (RX) and transmit (TX) queues, in the CAN Definition block, take care to size these to worst case conditions. If these are sized too low, you will see CAN queue overruns (in System->CAN in Raptor-Cal) that will result in dropped messages. If they are sized too high then excess RAM will be consumed which can be a limiting factor elsewhere in your software.
- Because the DBC message-based CAN transmit and receive blocks automatically parse the data fields based on the DBC format, they use more CPU processing power than the raw CAN transmit and receive blocks. For most pass-thru gateway situations, the raw blocks provide the most efficient implementation because they don’t pack or unpack the data.
- If you are using the raw transmit and receive blocks for gating your messages, but you need to receive-modify-transmit some messages, you can use the DBC pack and unpack blocks to handle the modification. An example of this can be seen in the gateway example model from this tip.
- The GCM48 and GCM80 may have trouble gating heavy bus loads due to the limited processing power of their microprocessors. On these modules, it is especially important to minimize the number of messages that are gated.
Engineers are known for having a vision and an eye on the future. However, when it comes to the sourcing and supply of production ECUs, some are short-sighted. Much of your focus when selecting an ECU (and rightfully so) is on the ruggedization, performance, software compatibility and price. This is our reminder to engineering teams (and favor to their supply chain managers) to think about the long haul. The ECU supplier you choose also impacts your ability to quickly source the parts you need when you need them, now and in the future.
The Challenges Of Vehicle Manufacturing
One of the challenges of vehicle manufacturing is its long design and manufacturing cycles. While the design cycle for a cell phone might be eighteen months, a car takes 2-5 years just to design. Once designed, a single vehicle can comprise more than 30,000 individual parts.
Then there’s sourcing, testing and validating of all those parts.
And unlike the cell phone, a vehicle has to last for 7-10 years, or more. In order to keep those cars running, those original 30,000 parts need to remain high-quality and readily available to the OEM for nearly a decade.
Finally, that particular vehicle will likely be manufactured for years and years — getting only a slight model refresh here and there. This is an extremely long cycle of design, sourcing, testing, and manufacturing that requires the support of hundreds of dedicated parts suppliers up and down the tiers.
This poses a unique challenge to automotive suppliers: keep those parts on the shelf (or at the ready) for a decade at a time. For some, it’s just the way it’s done. There are German car companies that are still able to supply original spare parts for historic cars that are well over 30 years old. That’s like IBM keeping parts on the shelf for the first personal computer they ever built.
A thirty-year-old ECU probably belongs in a museum. But your ECU supplier should be able to promise a longevity of parts for at least as long as your vehicle is on the road.
In our experience, there are typically three scenarios for engineering teams sourcing automotive-grade, production-ready ECUs:
- Those who just need an off-the-shelf ECU
- Those who have their own ECU but need additional software to run it
- Those who need a custom-built solution
For Groups 1 & 3, this is a make-vs-buy decision. Making a custom ECU might only make sense for very high-volume production, likely by an OEM, or for a military organization that can accept a higher price point to satisfy highly specific project requirements.
With Group 2 it’s likely that the ECU they have could’ve been custom-built for them and now they want the ability to program it. Or, it might be an off-the-shelf ECU they have sourced already and they want to program it.
All three groups should be aiming for an ECU supplier with extremely high-quality parts, manufactured by a Tier 1 supplier. Quality and safety cannot and should not be compromised.
But what about a small-to-medium manufacturer trying to bring their own systems to market – can they afford the caliber of ECU that the big guys use?
The answer is yes…
What Does A World-Class ECU Production Supplier Provide?
At New Eagle, we have a strong supply chain and long-standing relationships with Tier 1 manufacturers such as BOSCH and Continental – specialists in producing high-quality, high-volume ECUs.
Our established relationships with these manufacturers allow us to bring ISO 26262-capable units to startups and OEMs alike. The real advantage for the former is that they now have access to rigorously tested and validated ECUs, backed by robust supply chains, that were once only available to the OEMs. Imagine skipping the design and validation phase and being able to buy a ruggedized, high-quality ECU right off the shelf. Even something as simple as handling warranty parts should be easy with the right supplier. Imagine the peace of mind in knowing that tens of thousands of replacement parts will be there.
As a matter of fact, New Eagle has been focused on the accessibility of world-class ECUs to small-to-medium sized manufacturers for many years. We’ve supplied some production customers with the same parts for the past 10+ years. This is part of our mission to help customers Take Control.
Questions To Ask ECU Suppliers
Not everyone selling ECUs is thinking about the future. Some are short-sighted and fail to deliver later on the promises they made. Some have long lead times, smaller lots, and unfortunately, product defects. And some have gone out of business altogether.
If you’re comparing ECU suppliers, ask them these questions:
- Is this quote based on prototype pricing or production pricing?
- Where do you source your units from? Which manufacturers do you use?
- What are your lead times?
- How long will the part(s) be available?
- Who can help me with reordering parts?
After All: It’s More Than A Piece Of Hardware
As we’ve said before, an ECU is more than just a piece of hardware. It’s the brain that controls your vehicle and impacts your timeline, budget –even your choice of software. To keep it running, you’ll need an ECU supplier who can deliver on quality, availability of parts, and accessibility in case you need engineering support. Look for suppliers who are focused on continuous improvement and offer software maintenance, training and superior customer response times. A trusted ECU supplier keeps your project on track and allows you to scale in the future. Choose wisely.
When implementing any embedded system, it’s important to assess its health. Detecting faults is essential to prevent catastrophic behavior during failure modes and to alert users and other parts of the software that the system is failing. We created the Raptor Fault Manager to enable users to easily detect faults, mitigate failures, and broadcast fault statuses in automotive applications.
New Eagle offers fault management in two flavors: the OBD Fault Manager and the Standard Fault Manager. The Raptor OBD Fault Manager helps with efficient application development targeting emissions regulations such as CARB CCR 1968.2 (Light-Duty Automotive) or CARB CCR 1971.1 (Heavy-Duty Vehicles). The Standard Fault Manager provides a simple interface for defining and maintaining faults and their states in your application. A simple fault-management architecture looks like this:
A typical use case for fault detection is reading in sensor data with logic that detects a fault. The user can then use the Raptor fault-management blockset to alert the fault manager of the fault state, which provides the ability to reference faults and their states across the application. From there, you can signal to other parts of the application, as well as the user of your system, that a fault has occurred. By using the OBD Fault Manager blockset and the Raptor J1939 blockset, fault diagnostic message (DM) blocks can access the fault server and broadcast this information in data-link messages with little implementation effort by the developer.
Raptor Fault Manager Use Case
Follow along in this Raptor Fault Manager example where you can track the fault status on a simple “sensor” that we emulate through XCP using Raptor-Cal. For simplicity, this example model and the explanations below use the ‘Standard’ Raptor Fault Manager instead of the OBD version. The OBD version has many more fault states as required by the standards. Our example system will read in sensor data, detect faults in that sensor’s data, update the states of the fault and alert the user of the fault. Please note that any Raptor ECU can be used for fault management.
To begin, simply add the “Fault Manager” block to the model:
The Raptor Fault Manager is required in order to use the other Raptor Fault blocks throughout the model. It stores and manages each of the fault definitions, actions and triggers the designer may add to the application. By default, the Raptor Fault Manager uses a Standard fault configuration type, populating the Raptor Fault Definitions with the basic states: Active, Suspected and Occurred. These are described as:
- Active – A fault is set to ‘active’ when X counts are equal to or greater than Y counts
- Suspected – A fault is set to ‘suspected’ when X counts exist but are less than Y counts (when “Add X/Y Filtering” is enabled)
- Occurred – A fault is set to ‘occurred’ when a previously active fault now does not match or exceed the X count threshold
In this example, we will omit the use of X/Y filtering and apply our own filtering using similar logic. Since the logic is very similar, using the built-in X/Y filtering would normally make more sense; however, rolling our own leaves room for implementing more complex fault filtering.
In addition to filtering, Raptor-Dev faults have three additional behaviors:
- Disabled – The fault will not be enabled even if the X and Y conditions are met.
- Sticky – Once a sticky fault is set, the fault remains in this state until the next power cycle. This fault can also be manually cleared.
- Persistent – Once a persistent fault is set, the fault remains in this state across power cycles. This fault must be manually cleared.
Now that the Fault Manager has been added to the project, we can add our fault. We will do this by adding a “raptor_fault_def” block from the library browser (under Raptor/Faults). We will use this fault to detect the state and modes of failure for our “sensor”.
The “Fault Name” is used for tracking a fault relevant to our application, so these names need to be unique. In our example, we will name it “SensorFault” and will make it “sticky”. In addition, let’s add “calibratable,” so that we can override the states of this fault.
Now when you check the Fault Manager’s Fault Map Tool, you can see the fault:
Once the fault is established, we can add Fault Triggers and/or Fault Actions. The output of these blocks is configured in the Fault Map Tool mentioned above. The Fault Map Tool can be used to set up the default or initial value for Fault Triggers or Fault Actions. They are typically managed via calibration in a production setting. We will add a Fault Action block and provide it with a unique Fault Action name: “MySensorFailed”:
In the Fault Map tool, under the Fault Action Map, we can see our fault action.
The “Assigned Actions” are logically OR’d together and control the output of the Fault Action block. We will set this to “Active,” such that whenever the “SensorFault” we defined is “Active,” our “Fault Action” block will output a 1. The Fault Trigger Map works similarly for Fault Triggers; however, Fault Triggers provide a function call as their output rather than a boolean value. Faults can affect multiple Fault Actions and Fault Triggers. This is the “Signal” portion of our fault system architecture.
With a Fault and Fault Action established, we are able to apply our custom fault filtering. We will look at our sensor’s output and verify it is within a certain range. Since this is a hypothetical sensor, we will make up a range of -1 to 4501.
When the sensor is within range, we will execute some action that would imply operations are normal. For our sample application, we will simply output a single CAN message with data:
If the sensor is outside of that range, we will increment a counter:
Filtering is the first building block for fault detection. Fault filtering can be done in many ways and is entirely a design choice. From this counter, we are able to implement a primitive filter:
We can check the value of the counter (or any filter you want) and update the fault status of the block. In our application, we consider the fault suspected if the sensor is out of range for 250 foreground executions in a row, and active if it is out of range 500 or more times in a row.
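The counting logic above can be sketched like this. It is a hypothetical re-expression of the Simulink logic, not Raptor code; the valid range and the 250/500 thresholds come from the example.

```python
SENSOR_MIN, SENSOR_MAX = -1, 4501       # hypothetical valid range from the example
SUSPECT_COUNT, ACTIVE_COUNT = 250, 500  # consecutive out-of-range executions

def update_fault(count: int, reading: float):
    """One foreground-rate update: count consecutive out-of-range
    readings and derive the fault state from the count."""
    count = 0 if SENSOR_MIN <= reading <= SENSOR_MAX else count + 1
    if count >= ACTIVE_COUNT:
        state = "active"
    elif count >= SUSPECT_COUNT:
        state = "suspected"
    else:
        state = "ok"
    return count, state

count, state = 0, "ok"
for _ in range(250):                    # 250 bad readings in a row
    count, state = update_fault(count, 9999.0)
assert state == "suspected"
for _ in range(250):                    # 500 bad readings in a row total
    count, state = update_fault(count, 9999.0)
assert state == "active"
count, state = update_fault(count, 100.0)   # one good reading resets the filter
assert state == "ok" and count == 0
```

This is essentially what the built-in X/Y filtering provides; rolling your own only pays off when the filter needs to be more elaborate than a consecutive count.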
It is great to detect faults, but now we must act on that detection. We can use the Fault Action block to control our application based on whether or not the sensor is operating correctly. The Fault Action block will output a logical 1 when the fault is “Active”, which we configured in the “Fault Map Tool”. You can also use the “Fault Trigger” block for similar functionality, but for the sake of simplicity, we will only use the “Fault Action” block.
When the sensor is operational, we will output a CAN message that indicates the system is running correctly and reset the fault counter. The use of this CAN message isn’t important but rather an example of logic that could run under working circumstances.
When the sensor fault is active, we will alert the system by outputting the same CAN message with hypothetical data indicating a fault. Again, this action is particular to our application.
For more fault signaling, you can use the “Fault Status” blocks to determine the current states of your fault:
In this example, we have successfully:
- “Read” sensor data (“MySensor”)
- Defined a Fault (“SensorFault”)
- Applied a custom filter to the sensor data to detect signal abnormalities (“sensor_failure_count”)
- Acted on signal abnormalities (“Fault Action” block)
- Created logic to alert the system users about the occurrence of the fault
- Mitigated the fault by providing alternative system logic during fault conditions
While this was a simple example, you can create more complex fault filtering techniques as well as explore the use of different blocks for handling faults throughout your controller software. We hope that this Raptor Power User Tip has provided an overview of Fault Management using Raptor and a working example that you can load on to your hardware to see how it works for yourself.
The latest release of Raptor 2019b includes a number of improvements to Raptor’s powerful software tools. If your software maintenance plan is up-to-date, you can access the latest features and improvements today at software.neweagle.net.
Software isn’t all there is to celebrate this new year, though: with comprehensive support, on-site Raptor training offerings, and a growing selection of all-new OEM controllers coming soon to our hardware lineup, taking control of your next project is getting easier, and more reliable, than ever in 2020!
Configuring Software to Use Wake-On-CAN or Wake-On-LIN
Normally, Raptor™ Electronic Control Units (ECUs) support the capability of awakening in response to a voltage on the ignition or wake line, typically identified as ‘WAKE_INPUT1’ on the datasheet. However, some Raptor ECUs support the capability to configure the unit to wake on another environmental stimulus, such as an incoming CAN message or a change on the LIN bus. This capability can be used in many application situations, such as to wake up internal controllers when the vehicle door is opened. To learn how, read our step-by-step Raptor Power User Tip and download our sample model.
New Automotive-Grade Raptor™ Controllers
Raptor’s growing suite of automotive-grade products is making it faster and easier to bring technology to market with production-scalable platform solutions. From controlling actuators to managing data and displays, explore our lineup of new ECUs below to find one with a pin count, input/output, memory, and processor configuration that’s right for your project. To order or request additional assistance, contact [email protected]
Manufactured by one of our world-class ECU supply partners, the GCM48 features a capable variety of inputs and outputs for control, along with 6 CAN channels in one compact unit and the ability to work as a LIN slave with 3 LIN channels. It is available for pre-order, with the first volume units available in February 2020.
Available for pre-order, this cost-effective general purpose controller is ideal for EV applications, featuring ASIL-B capability. Able to work in 24V systems, it provides 4 CAN, 1 LIN and includes CAN-FD capability along with a capable CPU.
The GCM112 is ideal for Autonomous Drive-By-Wire, Electric-Hybrid, and Intelligent Machine control applications. It features 5 CAN buses, 3 LIN Masters, 2 Ethernet channels, and a variety of configurable discrete inputs and outputs — ideal for applications that require advanced performance, timing systems and functional safety capabilities! With a high-performance, multi-core CPU and a companion safety power system basis chip, it supports the highest level of functional safety (ASIL-D). To pre-order yours, contact [email protected]
This powerful general purpose control module is perfect for applications that require advanced performance, timing systems and functional safety capabilities. It has a broad communication capability with 4 CAN buses, 2 LIN buses, and 1 Ethernet bus. Pre-order yours today by contacting [email protected]
Normally Raptor™ Electronic Control Units (ECUs) support the capability of awakening in response to a voltage on the ignition or wake line, typically identified as ‘WAKE_INPUT1’ on the datasheet. Additionally, some Raptor ECUs support the capability to configure the unit to wake on another environmental stimulus such as an incoming CAN message, or a change on the LIN bus. This capability can be used in many application situations, such as to wake up internal controllers when the vehicle door is opened. This Power User Tip provides an overview of configuring the software to accommodate these scenarios.
Along with this Power User Tip, we have prepared a sample model so you can follow along.
You may download it here.
Setting the Wake Source
For this example, we will be enabling Wake-On-CAN for CAN3 on this ECU. To configure the ECU to awaken based on messaging on CAN3, we implement an App Trigger set to execute on Startup. Inside that subsystem, we use the ‘Set Hardware Option’ block to configure the WAKE_SOURCE for the ECU.
This configuration change will take effect after the first power-up after programming. The ECU will need to be awakened via WAKE_INPUT1 (Ignition) to be available for programming. This input is always enabled for waking the ECU regardless of what other sources you may have configured.
Reading the Source of Wake
Typically an application will want to determine the wake source, or the reason it was awakened. In the normal case where the ECU can only be awakened via the WAKE_INPUT1 (Ignition) input, the wake source is simply implied, since there is only one way to wake the ECU. When multiple wake sources are available, the application may need to use different logic for keeping the ECU awake depending on which source is responsible for the wake-up. To read the WAKE_SOURCE, you can use an ‘Internal Measurement’ block as shown below.
Controlling ECU Shutdown
When utilizing a communication-based wake source (CAN/LIN), there must not be any bus traffic on the wake source for the ECU to go to sleep. On many systems, network management determines when the modules go to sleep through a messaging protocol. If your controller is the master, you will need to tell other devices when to shut off their broadcasting and go to sleep. If your module is not the network master, you will need to honor network management as a slave and allow the other modules to go to sleep by shutting off broadcast. An alternative to protocol-based network management would be to shut off power to all other ECUs to ensure they stop broadcasting.
In a basic standard operating scenario where the ECU is waking based on an ignition line, the ECU will stay awake as long as the ignition line stays high. When the line goes low the ECU will typically tidy up, store Nonvolatile Memory, and then go to sleep. In the case of Wake-On-CAN, the logic needs to be a little more complex. CAN messages arrive at various intervals. Assume a single message on the CAN bus being sent at a 500 ms rate. That is likely enough time for the ECU to wake, store Nonvolatile Memory, and go back to sleep. You would not want to wake, store and sleep every 500 ms as you would quickly wear out the storage memory. More likely, you would want to Wake-On-CAN and stay awake as long as you see messaging, perhaps sleeping after messaging stops for 10 seconds.
In order to implement that, you could implement a timer that counts up every Foreground execution.
Then whenever you see the message you need (in this case we are looking for 0x4000 extended specifically), you would reset the counter.
Then, in the power-down subsystem where the ECU power control is implemented, you can determine the wake reason and use the timer to keep the ECU awake as long as messaging is coming in every 10 seconds.
Here we have implemented updates to keep the ECU alive based on either the WAKE_INPUT1 source or in the case of a Wake-On-CAN for 10 seconds after the last expected CAN message (ID 0x4000 extended).
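The keep-awake logic described above can be sketched as follows. This is a hypothetical re-expression of the model’s timer, not Raptor API code; the 10 ms foreground period is an assumption, while the 10-second timeout and the 0x4000 extended ID come from the example.

```python
FOREGROUND_MS = 10          # assumed foreground task period
SLEEP_AFTER_MS = 10_000     # go to sleep 10 s after the last expected message
KEEPALIVE_ID = 0x4000       # extended CAN ID that resets the timer

class WakeTimer:
    """Counts foreground executions since the last expected CAN message."""

    def __init__(self):
        self.elapsed_ms = 0

    def on_foreground(self):
        # Runs every foreground execution: the timer counts up.
        self.elapsed_ms += FOREGROUND_MS

    def on_rx(self, msg_id: int, extended: bool):
        # Expected traffic seen: restart the sleep timeout.
        if extended and msg_id == KEEPALIVE_ID:
            self.elapsed_ms = 0

    def stay_awake(self, ignition_high: bool) -> bool:
        # Stay awake on ignition, or while messaging is recent enough.
        return ignition_high or self.elapsed_ms < SLEEP_AFTER_MS

timer = WakeTimer()
for _ in range(1000):
    timer.on_foreground()                # 10 s of bus silence
assert not timer.stay_awake(ignition_high=False)
timer.on_rx(0x4000, extended=True)       # expected message arrives
assert timer.stay_awake(ignition_high=False)
```

Resetting the counter only on the expected ID (rather than on any traffic) keeps an unrelated chatty node from holding the ECU awake indefinitely.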
The sample model provided gives an example implementation of a Wake-On-CAN solution for Raptor ECUs. For any questions or concerns, please visit New Eagle’s Support page.
Bonus Tip on GCM-5605B-048-1906:
This module has a hardware-based wake source that doesn’t have any software configuration to select the wake source. It will wake from the key switch, CAN2 or LIN2. This means any of these sources will wake the module and keep it from going to sleep once woken. As a result, you will need to perform network management on CAN2 and LIN2 if utilized in your system.
New Eagle offers a variety of ways to keep up to date with their Raptor Suite of Tools. Sign up for the Raptor Newsletter to learn more about how the Raptor Platform can take your solution to production.
As 2019 comes to an end, the Raptor platform celebrates new beginnings with everything from the release of Raptor 2019b to extensions to its hardware product line. Keep reading to learn what’s new, or subscribe to our Raptor News for notifications of the latest updates delivered right to your inbox.
Raptor-Dev is now better than ever with significant improvements to CAN and DBC blocks, and nearly 50 items implemented or resolved in total, including:
- Support for MathWorks R2019b
- VeeCAN: Added USB support for VeeCAN500, Simulator improvements
- GCM196: CAN driver tuning options added, several optional build issues resolved
- GCM48: Upcoming support for this new low-cost, high-capability ECU
- J1939: Updates for Diagnostic Messaging support on multiple CAN buses
- LIN block improvements, including support for upcoming LIN slave capability
Download Raptor-CAL 2019b to take advantage of better performance, usability, and memory usage, plus nearly 30 additional improvements, including:
- Significant updates for performance and memory usage
- DAQ list management performance and consistency updates
- Calibration Transfer & Compare workflow improvements
- Datalogging enhancements
Raptor 2019b Release Webinar
Go in-depth on Raptor 2019b by joining our Raptor 2019b Webinar on January 16, 2020.
We’ll highlight what’s new in the 2019b release, preview what’s planned for the 2019b_2.0 and 2020a releases, and update you on future hardware training resources, all with time for Q&A.
Register now so you don’t miss out!
Hands-On Raptor Training
Learn how to use the Raptor platform in the MATLAB® and Simulink® environment to create real-world applications by registering for our three-day, embedded model-based development (eMBD) Raptor Training program on January 21-23, 2020.
Sign up now before space runs out!
NEW! GCM 48
The GCM 48 is joining the Raptor product line as a rugged, low-cost option for budget-minded developers. Featuring 6 CAN buses, 3 LIN Masters (one capable of LIN slave programming) and configurable discrete inputs and outputs, now’s the time to place your pre-order by contacting our team!
New Eagle is thrilled to announce that founder Mickey Swortzel is listed among the Top 50 Women 2 Watch in 2020 by the Women Presidents’ Organization (WPO). These top 50 women run day-to-day management as the CEO/president/partner of privately-held, multi-million dollar service-based companies.
“It is an honor to be named to this list of exceptional women,” says Mickey Swortzel. “But more than an award for myself, it’s an award for our team and our customers. We’re doing exceptional work in the autonomous and EV/HEV industries. This award speaks to the value we’re bringing to our customers and opportunities for our team.”
What Is Women 2 Watch?
Women 2 Watch sheds light on the impact of women-led companies on the global economy. To determine who ranks in the top 50, the WPO uses a mathematical formula combining percentage growth with absolute growth. The WPO then calculates whether each company has experienced significant prosperity from 2014 to 2019, along with promise for increased growth moving forward.
“Every year we have strong contenders from within the WPO membership for The Fifty Fastest Growing Women-Owned/Led Companies™ list,” says WPO Chief Executive Officer Camille Burns in an official statement. “This year it will be sponsored by American Express. We think it is extremely important to recognize these successful women around the world who are thriving. The economic impact of these women-led businesses on the global economy is being felt directly through the jobs they create and the communities they serve.”
About the Women Presidents’ Organization
The WPO is a peer advisory organization founded with the goal of connecting leading women across six continents to help each other progress their businesses’ success. Each month, chapters of 20 women presidents from a range of industries meet to share their business experience and expertise.
We proudly extend our congratulations to Mickey and her fellow awardees as we celebrate being one of many women-owned businesses worldwide. If you or someone you know is interested in applying for the 2020 Women 2 Watch list or joining the WPO, visit the WPO website for details.
Join us for our popular Raptor Training program, where participants will receive hands-on training with our embedded model-based controls platform, Raptor.
Tuesday, January 21 – Thursday, January 23, 2020
New Eagle Headquarters
110 Parkland Plaza
Ann Arbor, MI 48103
If you cannot attend the first training of the new year, reach out to our sales team to inquire about additional options customized to your needs. Also, be sure to subscribe to New Eagle's Raptor newsletter to stay informed about tips and tricks, updates, and future classes for our Raptor Platform.
What to Expect in Raptor Training
Attendees will use a throttle body controller project as an introductory guide to Raptor-Dev in the MATLAB/Simulink library. This will allow them to create a model intended for a target piece of hardware. Then, participants will use Raptor-CAL to flash the compiled software onto the hardware and make live calibratable adjustments on the flashed ECU.
On-Site, Customized Raptor Training Options
Can’t make it or prefer a more personalized training of New Eagle’s embedded model-based development tools? Our Raptor experts can travel to your team’s work facility for hands-on instruction catered to your needs.
To learn more about these options and schedule a training at your location, email us at [email protected].
REGISTER NOW as we only have a few seats left!
In MATLAB R2019b, MathWorks released a new Simulink feature called Subsystem Reference. It is a modeling construct that is good for componentization and is similar to libraries and Model Reference but has a slightly different use case.
An Introduction to Subsystem Reference from MathWorks:
Subsystem reference allows you to save the contents of a subsystem in a separate SLX file and reference it using a Subsystem Reference block. You can create multiple instances referencing the same subsystem file. When you edit any instance of a referenced subsystem, the changes are saved in the separate SLX file in which the subsystem is stored and all the referenced instances of that file are synchronized.
When you save a subsystem to a separate file, you can reuse it multiple times with Subsystem Reference blocks referencing the same subsystem file.
You can identify a Subsystem Reference block by the triangles in the opposite corners of the block icon.
A referenced subsystem supports all the semantics of a regular subsystem. A referenced subsystem adapts itself to the context of the parent model and has identical execution behavior when compared to a nonreferenced subsystem.
Libraries and Model Reference
This new feature sounds very similar to Libraries and Model Reference, but each is slightly different and serves different use cases:
Libraries are intended for a large amount of reuse for a small amount of functionality and stable implementations. The Raptor Blockset library uses this functionality. When composing large applications where a team needs to make changes to different areas of the model, libraries can be used, but there are a few problems that can occur:
- It is easy to disable or parameterize a link, disconnecting the local implementation from the reference implementation and creating future maintenance complications.
- You can’t perform an update (CTRL+D) on a library so making frequent edits can be a challenge.
Model Reference is a standalone file with very rigid architectural boundaries in both simulation and code generation. It can provide some benefits in simulation and code generation because the generated files don't need to be rebuilt every time. However, Model Reference has some challenges:
- Signal properties must be specified at the boundaries and thus cannot be inherited from connecting blocks.
- Code customization is difficult due to the rigid constraints and limits on flexibility (Raptor-Dev does not support Model Reference due to these constraints).
Subsystem Reference is a blend of these two. It is stored in a separate file, but it has neither the rigid architectural constraints of Model Reference nor the library overhead incurred by minor changes to a model. From a modeling standpoint, the subsystem reference opens in the model browser, and edits (and updates) can be performed when editing the reference from the full model.
It should be noted that unlike libraries, there is no “disable link” feature on a subsystem reference. All instances of the subsystem reference will share an implementation.
More comparisons between these different constructs can be found on the MathWorks website.
How Subsystem Reference Works
Starting with the VeeCAN 500 template application:
>> raptor_create_project('SubsystemReferenceRaptorDemo', 'DISP-VC500-1904')
Navigate to this subsystem:
Let's create a Subsystem Reference for the Screens subsystem, allowing two developers to edit the main model and the subsystem in parallel.
First, right-click on the subsystem and select the following:
You will be prompted for the name of the file to create for this subsystem. For this example, “ScreensComponent” was chosen.
Afterward, the model looks like this:
Note the triangles in the upper-left and lower-right corners; they signify that the subsystem is a Subsystem Reference. Double-clicking the block opens the contents of the new Subsystem Reference file. Any changes made in this view will be propagated to the full parent model.
More importantly, the file is stored separately and can be revision controlled separately from the main model.
To download the sample files created for this Raptor User Tip, follow this link.
Be Sure You Know This About Subsystem Reference
- This is a new feature in MATLAB R2019b, so there may be some wrinkles.
- As noted earlier, all instances share the same implementation, so if the reference is shared between two applications, both applications will get the changes.
- If the Subsystem Reference file is opened, it behaves like a library (cannot update/sim/code gen). It needs the context of the full application for these features.
- Inputs and Outputs can be inherited, so adding a Signal Specification block to protect the Subsystem Reference from inheriting unexpected data types is a good idea.