text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Links on Android Authority may earn us a commission. Learn more. gestures such as tilting their device; or using the proximity sensor to automatically disable touch events whenever the user holds their device to their ear. In this article, we’ll create three applications that retrieve light, proximity and motion data from a range of hardware and software sensors. We’ll also monitor these Android sensors in real time, so your application always has access to the latest information. By the end of this article, you’ll know how to extract a single piece of data from a Android sensor, and how to handle sensors that provide their data in the form of a multidimensional array. What Android sensors can I use? Android sensors can be divided into the following categories: - Environmental sensors. These measure environmental conditions, such as air temperature, pressure, humidity and ambient light levels. - Position sensors. This category includes sensors that measures the device’s physical position, such as proximity sensors and geomagnetic field sensors. Motion sensors. These sensors measure device motion, and include accelerometers, gravity sensors, gyroscopes, and rotation vector sensors. In addition, sensors can either be: - Hardware based. These are physical components that are built into the device and directly measure specific properties, such as acceleration or the strength of the surrounding geomagnetic fields. - Software based, sometimes known as virtual sensors or composite sensors. These typically collate data from multiple hardware-based sensors. Towards the end of this article, we’ll be working with the rotation vector sensor, which is a software sensor that combines data from the device’s accelerometer, magnetometer, and gyroscope. Environmental sensors: Measuring ambient light Android’s. In this section, we’re going to create an application that retrieves the current lux value from the device’s light sensor, displays it in a TextView, and then updates the TextView as new data becomes available. You can then use this information in a range of apps, for example you might create a torch application that pulls information from the light sensor and then automatically adjusts the strength of its beam based on the current light levels. Create a new Android project with the settings of your choice, and let’s get started! Displaying your sensor data I’m going to add a TextView that’ll eventually display the data we’ve extracted from the light sensor. This TextView will update whenever new data becomes available, so the user always has access to the latest information. Open your project’s activity_main.xml file, and add the following: <?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns: <TextView android:id="@+id/lightTextView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="@string/light, we need to create the “light_sensor” string resource that’s referenced in our layout. Open your project’s strings.xml file, and add the following: <string name="light_sensor">Light Sensor: %1$.2f</string> The “%1$.2f” is a placeholder that specifies the information we want to display, and how it should be formatted: - %1. You can insert multiple placeholders into the same string resource; “%1” indicates that we’re using a single placeholder. - $.2. This specifies how our application should format each incoming floating-point value. The “$.2” indicates that the value should be rounded to two decimal places. - F. Format the value as a floating-point number. While some sensors are more common than others, you should never assume that every device has access to the exact same hardware and software. Sensor availability can even vary across different versions of Android, as some sensors weren’t introduced until later releases of the Android platform. You can check whether a particular sensor is present on a device, using the Android sensor framework. You can then disable or enable parts of your application based on sensor availability, or you might display a message explaining that some of your application’s features won’t work as expected. While we have our strings.xml file open, let’s create a “no_sensor” string, which we’ll display if the light sensor is unavailable: <string name="no_sensor">No light sensor available</string> If your application cannot provide a good user experience without having access to a particular sensor, then you need to add this information to your Manifest. For example, if your app requires access to a compass sensor, then you can use the following: <uses-feature android: Now, your app can only be downloaded to devices that have a compass sensor. While this may limit your audience, it’s far less damaging than allowing someone to download your application when they’re guaranteed to have a bad experience, due to their device’s sensor configuration. Communicating with a sensor: SensorManager, SensorEvents, and listeners To communicate with the device’s light sensor, you need to complete the following steps: 1. Obtain an instance of SensorManager The SensorManager provides all the methods you need to access the device’s full range of sensors. To start, create a variable that’ll hold an instance of SensorManager: private SensorManager lightSensorManager; Then, you need to obtain an instance of SensorManager, by calling the Context.getSystemService method and passing in the Context.SENSOR_SERVICE argument: lightSensorManager = (SensorManager) getSystemService( Context.SENSOR_SERVICE); 2. Get a reference to lightTextView Next, we need to create a private member variable that’ll hold our TextView objects, and assign it to our TextView: private TextView lightTextView; ... ... ... lightTextView = (TextView) findViewById(R.id.lightTextView); 3. Check whether the sensor exists on the current device You can gain access to a particular sensor by calling the getDefaultSensor() method, and then passing it the sensor in question. The type constant for the light sensor is TYPE_LIGHT, so we need to use the following: lightSensor = lightSensorManager.getDefaultSensor(Sensor.TYPE_LIGHT); If the sensor doesn’t exist on this device, then the getDefaultSensor() method will return null, and we’ll display the “no_sensor” string: String sensor_error = getResources().getString(R.string.no_sensor); if (lightSensor == null) { lightTextView.setText(sensor_error); } } 4. Register your sensor listeners Every time a sensor has new data, Android generates a SensorEvent object. This SensorEvent object includes the sensor that generated the event, a timestamp, and the new data value. Initially, we’ll be focusing on the light and proximity sensors, which return a single piece of data. However, some sensors provide multidimensional arrays for each SensorEvent, including the rotation vector sensor, which we’ll be exploring towards the end of this article. To ensure our application is notified about these SensorEvent objects, we need to register a listener for that specific sensor event, using SensorManager’s registerListener(). The registerListener() method takes the following arguments: - An app or Activity Context. - The type of Sensor that you want to monitor. - The rate at which the sensor should send new data. A higher rate will provide your application with more data, but it’ll also use more system resources, especially battery life. To help preserve the device’s battery, you should request the minimum amount of data that your application requires. I’m going to use SensorManager.SENSOR_DELAY_NORMAL, which sends new data once every 200,000 microseconds (0.2 seconds). Since listening to a sensor drains the device’s battery, you should never register listeners in your application’s onCreate() method, as this will cause the sensors to continue sending data, even when your application is in the background. Instead, you should register your sensors in the application’s onStart() lifecycle method: @Override protected void onStart() { super.onStart(); //If the sensor is available on the current device...// if (lightSensor != null) { //….then start listening// lightSensorManager.registerListener(this, lightSensor, SensorManager.SENSOR_DELAY_NORMAL); } } 5. Implement the SensorEventListener callbacks SensorEventListener is an interface that receives notifications from the SensorManager whenever new data is available, or the sensor’s accuracy changes. The first step, is modifying our class signature to implement the SensorEventListener interface: public class MainActivity extends AppCompatActivity implements SensorEventListener { We then need to implement the following callback methods: onSensorChanged() This method is called in response to each new SensorEvent. Sensor data can often change rapidly, so your application may be calling the onSensorChanged() method on a regular basis. To help keep your application running smoothly, you should perform as little work as possible inside the onSensorChanged() method. @Override public void onSensorChanged(SensorEvent sensorEvent) { //To do// } onAccuracyChanged() If the sensor’s accuracy improves or declines, then Android will call the onAccuracyChanged() method and pass it a Sensor object containing the new accuracy value, such as SENSOR_STATUS_UNRELIABLE or SENSOR_STATUS_ACCURACY_HIGH. The light sensor doesn’t report accuracy changes, so I’ll be leaving the onAccuracyChanged() callback empty: @Override public void onAccuracyChanged(Sensor sensor, int i) { //To do// } } 6. Retrieve the sensor value Whenever we have a new value, we need to call the onSensorChanged() method, and retrieve the “light_sensor” string. We can then override the string’s placeholder text (%1$.2f) and display the updated string as part of our TextView: @Override public void onSensorChanged(SensorEvent sensorEvent) { //The sensor’s current value// float currentValue = sensorEvent.values[0]; //Retrieve the “light_sensor” string, insert the new value and display it to the user// lightTextView.setText(getResources().getString( R.string.light_sensor, currentValue)); } 7. Unregister your listeners Sensors can generate large amounts of data in a small amount of time, so to help preserve the device’s resources you’ll need to unregister your listeners when they’re no longer needed. To stop listening for sensor events when your application is in the background, add unregisterListener() to your project’s onStop() lifecycle method: @Override protected void onStop() { super.onStop(); lightSensorManager.unregisterListener(this); } Note that you shouldn’t unregister your listeners in onPause(), as in Android 7.0 and higher applications can run in split-screen and picture-in-picture mode, where they’re in a paused state, but remain visible onscreen. Using Android’s light sensors: Completed code After completing all the above steps, your project’s MainActivity should look something like this: import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.content.Context; import android.hardware.Sensor; import android.hardware.SensorEvent; import android.hardware.SensorEventListener; import android.hardware.SensorManager; import android.widget.TextView; public class MainActivity extends AppCompatActivity //Implement the SensorEventListener interface// implements SensorEventListener { //Create your variables// private Sensor lightSensor; private SensorManager lightSensorManager; private TextView lightTextView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); lightTextView = (TextView) findViewById(R.id.lightTextView); //Get an instance of SensorManager// lightSensorManager = (SensorManager) getSystemService( Context.SENSOR_SERVICE); //Check for a light sensor// lightSensor = lightSensorManager.getDefaultSensor(Sensor.TYPE_LIGHT); //If the light sensor doesn’t exist, then display an error message// String sensor_error = getResources().getString(R.string.no_sensor); if (lightSensor == null) { lightTextView.setText(sensor_error); } } @Override protected void onStart() { super.onStart(); //If the sensor is available on the current device...// if (lightSensor != null) { //….then register a listener// lightSensorManager.registerListener(this, lightSensor, //Specify how often you want to receive new data// SensorManager.SENSOR_DELAY_NORMAL); } } @Override protected void onStop() { super.onStop(); //Unregister your listener// lightSensorManager.unregisterListener(this); } @Override public void onSensorChanged(SensorEvent sensorEvent) { //The sensor’s current value// float currentValue = sensorEvent.values[0]; //Retrieve the “light_sensor” string, insert the new value and update the TextView// lightTextView.setText(getResources().getString( R.string.light_sensor, currentValue)); } @Override //If the sensor’s accuracy changes….// public void onAccuracyChanged(Sensor sensor, int i) { //TO DO// } } Test your completed Android sensor app To test this application on a physical Android smartphone or tablet: - Install the project on your device (by selecting “Run > Run” from the Android Studio toolbar). - Although it varies between devices, the light sensor is often located on the upper-right of the screen. To manipulate the light levels, move your device closer to, and then further away from a light source. Alternatively, you could try covering the device with your hand, to block out the light. The “Light Sensor” value should increase and decrease, depending on the amount of light available. If you’re using an Android Virtual Device (AVD), then the emulator has a set of virtual sensor controls that you can use to simulate various sensor events. You access these virtual sensor controls, via the emulator’s “Extended Controls” window: - Install the application on your AVD. - Alongside the AVD, you’ll see a strip of buttons. Find the three-dotted “More” button (where the cursor is positioned in the following screenshot) and give it a click. This launches the “Extended Controls” window. - In the left-hand menu, select “Virtual sensors.” - Select the “Additional sensors” tab. This tab contains various sliders that you can use to simulate different position and environmental sensor events. - Find the “Light (lux)” slider and drag it left and right, to change the simulated light levels. Your application should display these changing values, in real time. You can download the completed project from GitHub. Measuring distance, with Android’s proximity sensors Now we’ve seen how to retrieve information from an environmental sensor, let’s look at how you’d apply this knowledge to a position sensor. In this section, we’ll use the device’s proximity sensor to monitor the distance between your smartphone or tablet, and other objects. If your application has any kind of voice functionality, then the proximity sensor can help you determine when the smartphone is being held to the user’s ear, for example when they’re having a telephone conversation. You can then use this information to disable touch events, so the user doesn’t accidentally hang up, or trigger other unwanted events mid-conversation. Creating the user interface I’m going to display the proximity data onscreen, so you can watch it update in real time. To help keep things simple, let’s reuse much of the layout from our previous application: <?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns: <TextView android:id="@+id/proximityTextView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="@string/proximity, open your strings.xml file and create a “proximity_sensor” string. Once again, this string needs to contain a placeholder, which will eventually be populated by data extracted from the proximity sensor: <resources> <string name="app_name">ProximitySensor</string> <string name="proximity_sensor">Proximity Sensor: %1$.2f</string> <string name="no_sensor">No proximity sensor available</string> </resources> Getting data from the proximity sensor Similar to the light sensor, Android’s proximity sensor returns a single data value, which means we can reuse much of the code from our previous application. However, there are a few major differences, plus some name-related changes that make this code easier to follow: - Create an instance of SensorManager, which this time around I’m going to name “proximitySensorManager.” - Obtain an instance of “proximitySensorManager.” - Create a reference to the “proximityTextView.” - Call the getDefaultSensor() method, and pass it the TYPE_PROXIMITY sensor. - Register and unregister listeners for the proximity sensor. After making these tweaks, you should end up with the following: import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.content.Context; import android.hardware.Sensor; import android.hardware.SensorEvent; import android.hardware.SensorManager; import android.hardware.SensorEventListener; import android.widget.TextView; public class MainActivity extends AppCompatActivity //Implement the SensorEventListener interface// implements SensorEventListener { //Create your variables// private Sensor proximitySensor; private SensorManager proximitySensorManager; private TextView proximityTextView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); proximityTextView = (TextView) findViewById(R.id.proximityTextView); //Get an instance of SensorManager// proximitySensorManager = (SensorManager) getSystemService( Context.SENSOR_SERVICE); //Check for a proximity sensor// proximitySensor = proximitySensorManager.getDefaultSensor( Sensor.TYPE_PROXIMITY); //If the proximity sensor doesn’t exist, then display an error message// String sensor_error = getResources().getString(R.string.no_sensor); if (proximitySensor == null) { proximityTextView.setText(sensor_error); } } @Override protected void onStart() { super.onStart(); //If the sensor is available on the current device...// if (proximitySensor != null) { //….then register a listener// proximitySensorManager.registerListener(this, proximitySensor, //Specify how often you want to receive new data// SensorManager.SENSOR_DELAY_NORMAL); } } @Override protected void onStop() { super.onStop(); //Unregister your listener to preserve system resources// proximitySensorManager.unregisterListener(this); } @Override public void onSensorChanged(SensorEvent sensorEvent) { //The sensor’s current value// float currentValue = sensorEvent.values[0]; //Retrieve the “proximity_sensor” string, insert the new value and update the TextView// proximityTextView.setText(getResources().getString( R.string.proximity_sensor, currentValue)); } @Override //If the sensor’s accuracy changes….// public void onAccuracyChanged(Sensor sensor, int i) { //...TO DO// } } Testing: How close is the user to their device? To put this application to the test on a physical Android smartphone or tablet, install the application on your device and then experiment by moving your hand towards the screen, and then moving it away again. The “Proximity Sensor” value should record your movements. Just be aware that proximity sensors can vary between devices. Some devices may only display two proximity values – one to indicate “Near” and one to indicate “Far” – so don’t be surprised if you don’t see much variety on your physical Android device. To test this application on an emulator: - Install your application on an AVD. - Find the three-dotted “More” button and give it a click, which launches the “Extended Controls” window. - In the window’s left-hand menu, select “Virtual sensors.” - Select the “Additional sensors” tab. - Find the “Proximity” slider, and drag it left and right to emulate an object moving closer to the device, and then further away. The “Proximity Sensor” values should change, as you manipulate the slider. You can download the completed project from GitHub. Motion sensors: Processing multidimensional arrays Up until this point, we’ve focused on sensors that supply a single item of data, but there are some sensors that provide multidimensional arrays for each SensorEvent. These multidimensional sensors include motion sensors, which we’ll be focusing on in this final section. Motion sensors can help you: - Provide an alternative method of user input. For example, if you’re developing a mobile game then the user might move their character around the screen by tilting their device. - Infer user activity. If you’ve created an activity-tracking app, then motion sensors can help you gauge whether the user is travelling in a car, jogging, or sitting at their desk. - More accurately determine orientation. It’s possible to extract coordinates from a device’s motion sensors, and then translate them based on the Earth’s coordinate system, to get the most accurate insight into the device’s current orientation. In this final section, we’ll be using the rotation vector sensor (TYPE_ROTATION_VECTOR). Unlike the light and proximity sensors, this is a software sensor that collates data from the device’s accelerometer, magnetometer, and gyroscope sensors. Although working with this sensor often requires you to perform mathematical conversions and transformations, it can also provide you with a range of highly-accurate information about the device. We’ll be creating an application that uses the rotation vector sensor to measure: - Pitch. This is the top-to-bottom tilt of the device. - Roll. This is the left-to-right tilt of the device. Displaying real time pitch and roll data Since we’re measuring two metrics, we need to create two TextViews and two corresponding string resources: <?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns: <TextView android: <TextView android: </android.support.constraint.ConstraintLayout> Open the strings.xml file, and add the following: <resources> <string name="app_name">MotionSensors</string> <string name="pitch_sensor">Pitch Sensor: %1$.2f</string> <string name="roll_sensor">Roll Sensor: %1$.2f</string> <string name="no_sensor">No motion sensor available</string> </resources> Using the rotation vector sensor in your app We’ll be re-using some of the code from our previous applications, so let’s focus on the areas where communicating with the rotation vector sensor, is significantly different to what we’ve seen before. 1. Use the TYPE_ROTATION_VECTOR Since we’re working with the rotation vector sensor, we need to call the getDefaultSensor() method, and then pass it the TYPE_ROTATION_VECTOR constant: positionSensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR); 2. Translate the sensor data Unlike the previous light and proximity sensors, motion sensors return multidimensional arrays of sensor values for every SensorEvent. These values are formatted using the standard “X, Y, Z” coordinate system, which is calculated relative to the device when it’s held in its default, “natural” orientation. Android doesn’t switch these X, Y and Z coordinates around to match the device’s current orientation, so the “X” axis will remain the same regardless of whether the device is in portrait or landscape mode. When using the rotation vector sensor, you may need to convert the incoming data to match the device’s current rotation. Portrait is the default orientation for most smartphones, but you shouldn’t assume this is going to be the case for all Android devices, particularly tablets. In this article, we’ll use a rotation matrix to translate the sensor’s data from its original, device coordinate system, to the Earth’s coordinate system, which represents the device’s motion and position relative to the Earth. If required, we can then remap the sensor data, based on the device’s current orientation. Firstly, the device coordinate system is a standard 3-axis X, Y, Z coordinate system, where each point on each of the three axes is represented by a 3D vector. This means we need to create an array of 9 float values: float[] rotationMatrix = new float[9]; We can then pass this array to the getRotationMatrix() method: SensorManager.getRotationMatrixFromVector(rotationMatrix, vectors); int worldAxisX = SensorManager.AXIS_X; int worldAxisZ = SensorManager.AXIS_Z; The next step, is using the SensorManager.remapCoordinateSystem() method to remap the sensor data, based on the device’s current orientation. The SensorManager.remapCoordinateSystem() method takes the following arguments: - The original rotation matrix. - The axes that you want to remap. - The array that you’re populating with this new data. Here’s the code I’ll be using in my app: float[] adjustedRotationMatrix = new float[9]; SensorManager.remapCoordinateSystem(rotationMatrix, worldAxisX, worldAxisZ, adjustedRotationMatrix); Finally, we’ll call SensorManager.getOrientation and tell it to use the adjustedRotationMatrix: SensorManager.getOrientation(adjustedRotationMatrix, orientation); 3. Update the placeholder strings Since we have two sets of data (pitch and roll), we need to retrieve two separate placeholder strings, populate them with the correct values, and then update the corresponding TextView: pitchTextView.setText(getResources().getString( R.string.pitch_sensor,pitch)); rollTextView.setText(getResources().getString( R.string.roll_sensor,roll)); Displaying multiple sensor data: Completed code After performing the above steps, your MainActivity should look something like this: import android.app.Activity; import android.os.Bundle; import android.hardware.Sensor; import android.hardware.SensorEvent; import android.hardware.SensorEventListener; import android.hardware.SensorManager; import android.widget.TextView; public class MainActivity extends Activity implements SensorEventListener { private SensorManager motionSensorManager; private Sensor motionSensor; private TextView pitchTextView; private TextView rollTextView; private static final int SENSOR_DELAY = 500 * 1000; private static final int FROM_RADS_TO_DEGS = -57; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); pitchTextView = (TextView) findViewById(R.id.pitchTextView); rollTextView = (TextView) findViewById(R.id.rollTextView); try { motionSensorManager = (SensorManager) getSystemService(Activity.SENSOR_SERVICE); motionSensor = motionSensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR); motionSensorManager.registerListener(this, motionSensor, SENSOR_DELAY); } catch (Exception e) { pitchTextView.setText(R.string.no_sensor); rollTextView.setText(R.string.no_sensor); } } @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { //To do// } @Override public void onSensorChanged(SensorEvent event) { if (event.sensor == motionSensor) { update(event.values); } } private void update(float[] vectors) { //Compute the rotation matrix// float[] rotationMatrix = new float[9]; SensorManager.getRotationMatrixFromVector(rotationMatrix, vectors); int worldAxisX = SensorManager.AXIS_X; int worldAxisZ = SensorManager.AXIS_Z; //Remap the matrix based on the Activity’s current orientation// float[] adjustedRotationMatrix = new float[9]; SensorManager.remapCoordinateSystem(rotationMatrix, worldAxisX, worldAxisZ, adjustedRotationMatrix); //Compute the device's orientation// float[] orientation = new float[3]; //Supply the array of float values to the getOrientation() method// SensorManager.getOrientation(adjustedRotationMatrix, orientation); float pitch = orientation[1] * FROM_RADS_TO_DEGS; float roll = orientation[2] * FROM_RADS_TO_DEGS; //Update the TextViews with the pitch and roll values// pitchTextView.setText(getResources().getString( R.string.pitch_sensor,pitch)); rollTextView.setText(getResources().getString( R.string.roll_sensor,roll)); } } You can download the completed project from GitHub. Testing our final Android sensor application To test this rotation vector Android sensor app on a physical Android smartphone or tablet: - Install the application on your device. - Place your smartphone or tablet on a flat surface. Note that motion sensors are extremely sensitive, so it’s not unusual for a seemingly-motionless device to report fluctuations in pitch and roll values. - To test the pitch, lift the bottom of your device so that it’s tilting away from you. The pitch value should change dramatically. - To test the roll, try lifting the left-hand side of your device, so it’s tilting to the left – keep an eye on that roll value! If you’re testing your project on an emulator: - Install the application on your AVD. - Select “More,” which launches the “Extended Controls” window. - In the left-hand menu, select “Virtual sensors.” - Make sure the “Accelerometer” tab is selected. This tab contains controls that can simulate changes in the device’s position and orientation. - Try experimenting with the various sliders (Rotate: Z-Rot, X-Rot, Y-Rot; and Move: X, Y, and Z) and the various “Device Rotation” buttons, to see how they affect your application’s “Roll Sensor” and “Pitch Sensor” values. Wrapping up In this article, we saw how to retrieve data from the three main categories of Android sensors: environmental, position and motion, and how to monitor this data in real time. Have you seen any Android apps that use sensors in interesting or unique ways? Let us know in the comments below!
https://www.androidauthority.com/master-android-sensors-946024/
CC-MAIN-2022-40
en
refinedweb
<<extension>> Extensions to dds::sub More... <<extension>> Extensions to dds::sub <<extension>> This operation retrieves the information on the discovered dds::domain::DomainParticipant associated with the publication that is currently matching with the dds::sub::DataReader. Matched Participants are those with a matching dds::topic::Topic, compatible QoS and common partition that the application has not indicated should be "ignored" by means of the dds::pub::ignore operation. The publication_handle must correspond to a publication currently associated with the dds::sub::DataReader. Otherwise, the operation will fail with dds::core::InvalidArgumentError. The operation may also fail with dds::core::PreconditionNotMetError if the publication corresponds to the same dds::domain::DomainParticipant that the DataReader belongs to. Use the operation dds::sub::matched_publications to find the publications that are currently matched with the dds::sub::DataReader. Note: This operation does not retrieve the dds::topic::ParticipantBuiltinTopicData::property. The above information is available through dds::sub::DataReaderListener::on_data_available() (if a reader listener is installed on the dds::sub::DataReader<dds::topic::PublicationBuiltinTopicData>). . <<extension>> Retrieve all of the dds::sub::Subscriber created from this dds::domain::DomainParticipant <<extension>> Retrieve all of the dds::sub::Subscriber created from this dds::domain::DomainParticipant <<extension>> Finds a Subscriber by name dds::core::nullotherwise. <<extension>> Retrieve all the dds::sub::DataReader created from this dds::sub::Subscriber <<extension>> Retrieve all the readers created from a subscriber. <<extension>> Retrieves a dds::sub::DataReader with the given topic name within the dds::sub::Subscriber Use this operation on the built-in dds::sub::Subscriber (Built-in Topics) to access the built-in dds::sub::DataReader entities for the built-in topics. The built-in dds::sub::DataReader is created when this operation is called on a built-in topic for the first time. The built-in dds::sub::DataReader is deleted automatically when the dds::domain::DomainParticipant is deleted. To ensure that builtin dds::sub::DataReader entities receive all the discovery traffic, it is suggested that you lookup the builtin dds::sub::DataReader before the dds::domain::DomainParticipant is enabled. Looking up builtin dds::sub::DataReader may implicitly register builtin transports due to creation of dds::sub::Dat::sub::DataReader exists, this operation returns NULL. The returned dds::sub::DataReader may be enabled or disabled. If more than one dds::sub::DataReader is attached to the dds::sub::Subscriber, this operation may return any one of them. <<extension>> Retrieves a dds::sub::DataReader with the given name within the dds::sub::Subscriber Every dds::sub::DataReader in the system has an entity name which is configured and stored in the <<extension>> EntityName policy, ENTITY_NAME. This operation retrieves the dds::sub::DataReader within the dds::sub::Subscriber whose name matches the one specified. If there are several dds::sub::DataReader with the same name within the dds::sub::Subscriber, the operation returns the first matching occurrence. <<extension>> Retrieves a dds::sub::DataReader with the given TopicDescription within the dds::sub::Subscriber <<extension>> Retrieves a dds::sub::DataReader within the dds::domain::DomainParticipant with the given name Every dds::sub::DataReader in the system has an entity name which is configured and stored in the EntityName policy, ENTITY_NAME. Every dds::sub::Subscriber in the system has an entity name which is also configured and stored in the EntityName policy, ENTITY_NAME. This operation retrieves a dds::sub::DataReader within a dds::sub::Subscriber given the specified name which encodes both to the dds::sub::DataReader and the dds::sub::Subscriber name. If there are several dds::sub::DataReader with the same name within the corresponding dds::sub::Subscriber this function returns the first matching occurrence. The specified name might be given as a fully-qualified entity name or as a plain name. The fully qualified entity name is a concatenation of the dds::sub::Subscriber to which the dds::sub::DataReader belongs and the entity name of of the dds::sub::DataReader itself, separated by a double colon "::". For example: MySubscriberName::MyDataReaderName The plain name contains the dds::sub::DataReader name only. In this situation it is implied that the dds::sub::DataReader belongs to the implicit dds::sub::Subscriber so the use of a plain name is equivalent to specifying a fully qualified name with the dds::sub::Subscriber name part being "implicit". For example: the plain name "MyDataReaderName" is equivalent to specifiying the fully qualified name "implicit::MyDataReaderName" The dds::sub::DataReader is only looked up within the dds::sub::Subscriber specified in the fully qualified name, or within the implicit dds::sub::Subscriber if the name was not fully qualified. <<extension>> Retrieves the implicit dds::sub::Subscriber for the given dds::domain::DomainParticipant. If an implicit Subscriber does not already exist, this creates one. The implicit Subscriber is created with default dds::sub::qos::SubscriberQos and no listener. When a DomainParticipant is deleted, if there are no attached dds::sub::DataReader that belong to the implicit Subscriber, the implicit Subscriber will be implicitly deleted. \par MT Safety: UNSAFE. it is not safe to create the implicit subscriber while another thread may be simultaneously calling dds::domain::DomainParticipant::default_subscriber_qos(const dds::sub::qos::SubscriberQos & qos ). Copies the contents of a rti::sub::LoanedSample into a dds::sub::Sample. Example: Calls the operator on the data or prints [invalid data]. <<C++11>> <<extension>> Returns a collection that provides access only to samples with valid data This function transforms a LoanedSamples collection into another collection whose iterators only access valid-data samples, skipping any sample such that !sample.info().valid(). This operation is O(1) and will not copy the data samples or allocated any additional memory. The typical way to use this function is to directly call it on the return value of a read()/take() operation and use it in a for-loop. For example: samplesis invalid cannot be used after this call Compare two dds::sub::SampleInfo objects for equality. <<extension>> Returns an iterator that skips invalid samples Given a regular sample iterator, this functions creates another iterator it that behaves exactly the same except that it++ moves to the next valid sample (or to the end of the collection). That is, if it doesn't point to the end of the collection, it->info.valid() is always true. This is useful when your application doesn't need to deal with samples containing meta-information only. For example, the following code copies all the data in a LoanedSamples collection skipping any invalid samples (otherwise, attempting to copy the data from an invalid sample would throw an exception, see rti::sub::LoanedSample::operator const DataType& ()). Note that valid_data(samples.begin()) won't point to the first element if that element is not a valid sample. A similar utility is the functor rti::sub::IsValidData. Creates a TopicQueryData from a ServiceRequest. This operation will extract the content from the request body of the rti::topic::ServiceRequest to create a rti::sub::TopicQueryData object. The specified rti::topic::ServiceRequest must be a valid sample associated with the service id rti::core::ServiceRequestId_def::TOPIC_QUERY. Otherwise this operation will return false. This operation can be called within the context of a dds::pub::DataWriterListener::on_service_request_accepted to retrieve the rti::sub::TopicQueryData of a rti::topic::ServiceRequest that has been received with service id rti::core::ServiceRequestId_def::TOPIC_QUERY <<inout>> A rti::sub::TopicQueryData object where the content from the service request is extracted. <<in>> Input rti::topic::ServiceRequest that contains the rti::sub::TopicQueryData as part of its request body. Looks up a TopicQuery by its GUID. <<extension>> <<C++11>> Unpacks a SharedSamples collection into individual shared_ptr's in a vector #include <rti/sub/unpack.hpp> This function creates a reference (not a copy) to each sample with valid data in a SharedSamples container and pushes it back into a vector. Each individual sample in the vector retains a reference to the original SharedSamples that controls when the loan is returned. These references can be further shared. When all the references go out of scope, the loan is returned. This can be also useful to insert samples from different calls to read()/take() into the same vector. It is however recommended to not hold these samples indefinitely, since they use internal resources. Example: <<extension>> <<C++11>> Unpacks a SharedSamples collection into individual shared_ptr's in a vector This overload returns a new vector instead of adding into an existing one. <<extension>> <<C++11>> Unpacks a LoanedSamples collection into individual shared_ptr's in a vector This overload is a shortcut for unpack(SharedSamples<T>(loaned_samples)) to simplify code like the following:
https://community.rti.com/static/documentation/connext-dds/6.0.0/doc/api/connext_dds/api_cpp2/namespacerti_1_1sub.html
CC-MAIN-2022-40
en
refinedweb
Contents Introduction This documentation is intended to instruct developers in the authoring of custom lights. Developers should also consult the RixLight.h header file for complete details. The RixLightFactory interface is a subclass of RixShadingPlugin, and defines a shading plugin responsible for creating a RixLight object. The RixLight interface characterizes the light emitting from an analytic light source - a light source that can be described programmatically or by a formula. RixLightFactory RixLightFactory is a subclass of RixShadingPlugin, and therefore shares the same initialization, synchronization, and parameter table logic as other shading plugins. Therefore to start developing your own Light, you can #include "RixLight.h" and make sure your light factory class implements the required methods inherited from the RixShadingPlugin interface: Init(), Finalize(), Synchronize(), GetParamTable(), and CreateInstanceData(). Generally, there is one shading plugin instance of a RixLightFactory per bound RiLight (RIB) request. This instance may be active in multiple threads simultaneously. The RIX_LIGHTFACTORYCREATE() macro defines the CreateRixLightFactory() function, which is called by the renderer to create an instance of the light factory plugin. Generally, the implementation of this method should simply return a new allocated copy of your light factory class. Similarly, the RIX_LIGHTFACTORYDESTROY() macro defines the DestroyRixLightFactory() function called by the renderer to delete an instance of the light factory plugin; a typical implementation of this method is to delete the passed in light factory pointer: RIX_LIGHTFACTORYCREATE { return new MyLightFactory(); } RIX_LIGHTFACTORYDESTROY { delete ((MyLightFactory*)factory); } RixLight RixLight is the abstract base class from which you can derive your own light implementations. To illustrate the API, we have provided PxrSimpleRectLight.cpp, which implements a simple non-textured single-sided light of rectangular shape. Note that the PxrRectLight that ships with RenderMan offers more features than illustrated here, and uses more sophisticated sampling strategies. It also supports bidrectional sampling and photon emission, which the example does not. The light's constructor is called by the corresponding PxrSimpleRectLightFactory. We have two methods used to communicate geometric properties to the ray tracer. GetBounds() returns a sequence of points describing the bounding shape of the light. The bounds should be expressed in the local space of the light. For our rect light example, there are four points in the range +/- 0.5 in x and y. The rect light lies on the z=0 plane. Intersect, the second method, will compute an intersection between the light and an incoming ray. The intersection is computed in the local space of the light. A consequence of this is that the ray direction will not be normalized if the light's transform contains a scale. It's important, therefore, not to make use of any optimisations in your intersection function that does assume a unit length direction. Light selection methods There are three methods that act as helpers for the renderer's light selection scheme. Light selection is a stochastic process whereby, according to integrator settings, one or more lights are assigned to a shade point in a rendering iteration. The lights that are selected have samples generated for them (see below). The purpose of selection is to attempt to choose the lights liable to contribute most to the shade point in question, thereby keeping variance low. RixLight::GetIncidentRadianceEstimate() virtual RtFloat GetIncidentRadianceEstimate( RtPoint3 const& P, RtMatrix4x4 const& lightToCurrent, RtMatrix4x4 const& currentToLight) const = 0; To help with this calculation, the renderer will call GetIncidentRadianceEstimate() on the light, providing both the position of the shade point (in 'current' space) and a pair of transforms. In our RectLight example, we check to see if the shade point lies to the front of the light. If it does, we multiply its intensity by its area (which may be non-unity in the event of a scale transform) and the cosine of the angle between its normal and the vector between shade point and light center. We then divide by the squared distance to the light center and return the result. RixLight::GetIncidentRadianceEstimate() virtual RtFloat GetIncidentRadianceEstimate( RtPoint3 const& segmentOrigin, RtVector3 const& segmentDir, RtFloat segmentLen, RtMatrix4x4 const& lightToCurrent, RtMatrix4x4 const& currentToLight, RtFloat& minT, RtFloat& maxT) const = 0; A second overload of GetIncidentRadianceEstimate() is used to compute estimates for ray segments rather than individual points. This is used exclusively for equiangular sampling of volumes. In our example, we find the nearest point on the incoming line segment to the light and then treat that just as the shade point in the simpler case. Note that this overload has minT and maxT as return values. These can be used to 'clip' the line segment, providing a subset over which the light provides non-zero illumination. For example, since the rect light is single-sided, we could clip the segment against the light's plane. Similarly, if the light was a spot light, we could clip the segment against the cone's frustum. RixLight::GetPowerEstimate() virtual float GetPowerEstimate(RtMatrix4x4 const& xform) const = 0; GetPowerEstimate() should return the light's intensity by its area. This is a crude estimate given independent of any shade point. RixLight::GenerateSamples() struct GenerateSamplesResults { public: int& patchIndex; // only set by mesh lights RtFloat3& UVW; RtVector3& direction; float& distance; float& pdfDirect; bool const isBidirectional; float& pdfEmit; float& pdfEmitDirection; float& solidAngleToArea; RtColorRGB diffuseColor; RtColorRGB specularColor; RtNormal3& normal; }; virtual void GenerateSamples( RixLightContext const& lCtx, RixScatterPoint const& scatter, GenerateSamplesResults& results) const = 0; GenerateSamples() is the function used to create a sample on the light and put it in the GenerateSamplesResult structure, defined in RixLight.h. UVW indicates the position of the sample in the light's parametric space; direction is the normalized vector from the shade point to the light sample position in 'current' space; distance is the distance between the two points; and pdf is the pdf of the chosen point in solid angle measure. In the example case, we have a uniform probability of sampling across the light's surface, so the area pdf is 1/area. This is then converted to solid angle measure by multiplying by the cosine of the angle between light and outgoing direction, and dividing by the squared distance. The light returns both radiance in both diffuseColor and specularColor. These will be interpreted separately by a bxdf's diffuse and specular lobes, and allows for a light to contribute different radiances for each. The light should also return the local-space normal at the sampled point on the light. (The normal is constant in the example rect light.) Note that the input RixLightContext grants the function access to the sample's time in normalized shutter time (ie 0 at shutter open and 1 at shutter close); a function GetLightToCurrentTransform() will return a matrix at the appropriate time, and gives access to a random-number pair in a well-stratified sequence. A flag on the GenerateSamplesResult indicates whether the light is being used in a bidirectional setting. If so, it expected to provide three further return values (not covered by the example). solidAngleToArea is a conversion factor to convert between the two pdf measures. For a rect light, this would be the cosine of the angle between light normal and the direction vector divided by the squared distance. pdfEmit is the probability of emitting a photon from the selected sample position on the light, again expressed in a solid angle measure. (For a rect light with a uniform sampling scheme, pdfEmit would be 1/area.) pdfEmitDirection is the probability of emitting a photon in the selected direction given the selected sample position. (For a rect light with cosine emission distribution, this would be cos(theta) / PI.) RixLight::EvaluateSamples() struct EvaluateSamplesResults { float& pdfDirect; bool const isBidirectional; float& pdfEmit; float& pdfEmitDirection; float& solidAngleToArea; RtColorRGB diffuseColor; RtColorRGB specularColor; RtNormal3& normal; }; virtual void EvaluateSamples( RixLightContext const& lCtx, RixSamplePoint const& sample, RixScatterPoint const& scatter, EvaluateSamplesResults& results) const = 0; EvaluateSamples() is called so that the light can compute intensity and angular-measure pdf for an incoming ray direction (typically generated by sampling a Bxdf). EvaluateSamples() will only be called for a ray if a previous Intersect call returned true for the same ray. Results are returned in the EvaluateSamplesResult structure, definied in RixLight.h. ' pdfDirect' is the solid-angle-measure pdf for the ray; diffuseColor and specularColor are the light's contribution for diffuse and specular lobes respectively, and 'normal' is the light's surface normal at the point of intersection. The bidirectional result quantities are the same as described above for GenerateSamples(). RixLight::GenerateEmission() struct GenerateEmissionResults { int& patchIndex; // only set by mesh lights RtFloat3& UVW; RtPoint3& position; RtNormal3& normal; RtVector3& direction; float& distance; float& pdfEmit; // area measure float& pdfEmitDirection; }; virtual void GenerateEmission( RixLightContext const& lCtx, GenerateEmissionResults& results) const = 0; GenerateEmission() is the function used to create photons from the light, used in a bidirectional pathtracing context. Note that it requires four random numbers: two for picking a point on the surface (with uniform probability in our example) and two for picking a direction (with a cosine distribution). Note that in this special case, since we don't at this stage in the process of a shade point, the pdfs are not in the solid angle measure. We return pdfEmit and pdfEmitDirection (see above) and the renderer will employ a solid-angle-measure conversion once the emitted photon has struck a surface internally. RixLight::EvaluateEmissionForCamera() struct EvaluateEmissionForCameraResults { RtColorRGB cameraColor; }; virtual void EvaluateEmissionForCamera( RixLightContext const& lCtx, RixSamplePoint const& sample, RixScatterPoint const& scatter, EvaluateEmissionForCameraResults& results) const = 0; EvaluateEmissionForCamera() will be called if a light is marked as camera-visible and is intersected by a camera ray. Its result is returned in the EvaluateEmissionForCameraResults structure, which contains the single color field cameraColor. RixLight::Edit() virtual RixLight* Edit( RixContext& ctx, RtUString const name, RixParameterList const* pList, RtPointer instanceData) = 0; Edit() is the function that will be called after any changes are made to the light properties. It is expected to update the class members for any subsequent sampling. Note that in more sophisticated lighting examples, this could involve such things as computing a new CDF table for a textured light.
https://rmanwiki.pixar.com/display/REN24/Writing+Lights
CC-MAIN-2022-40
en
refinedweb
#include <Recursive_Thread_Mutex.h> #include <Recursive_Thread_Mutex.h> Collaboration diagram for ACE_Recursive_Thread_Mutex: 0 Initialize a recursive mutex. Implicitly release a recursive mutex. [private] Acquire a recursive mutex (will increment the nesting level and not deadmutex if the owner of the mutex calls this method more than once). Acquire mutex ownership. This calls <acquire> and is only here to make the <ace_recusive_thread_mutex> interface consistent with the other synchronization APIs. Dump the state of an object. Return the nesting level of the recursion. When a thread has acquired the mutex for the first time, the nesting level == 1. The nesting level is incremented every time the thread acquires the mutex recursively. Returns a reference to the recursive mutex's internal mutex;. Return the id of the thread that currently owns the mutex. Returns a reference to the recursive mutex;. Releases a recursive mutex (will not release mutex until all the nesting level drops to 0, which means the mutex is no longer held). Implicitly release a recursive mutex. Note that only one thread should call this method since it doesn't protect against race conditions. [protected] Conditionally acquire a recursive mutex (i.e., won't block). Returns -1 on failure. If we "failed" because someone else already had the lock, <errno> is set to <ebusy>. Conditionally acquire mutex (i.e., won't block). This calls <tryacquire> and is only here to make the <ace_recusive_thread_mutex> interface consistent with the other synchronization APIs. Returns -1 on failure. If we "failed" because someone else already had the lock, <errno> is set to <ebusy>. This is only here to make the <ace_recursive_thread_mutex> interface consistent with the other synchronization APIs. Assumes the caller has already acquired the mutex using one of the above calls, and returns 0 (success) always. Declare the dynamic allocation hooks. Recursive mutex....
https://www.dre.vanderbilt.edu/Doxygen/5.4.7/html/ace/classACE__Recursive__Thread__Mutex.html
CC-MAIN-2022-40
en
refinedweb
tx_status_event Struct Reference tx_status_event Struct Reference #include < hdmi_cec.h > Detailed Description Definition at line 252 of file hdmi_cec.h . Field Documentation Definition at line 254 of file hdmi_cec.h . Definition at line 253 of file hdmi_cec.h . The documentation for this struct was generated from the following file: - hardware/libhardware/include/hardware/ hdmi_cec.h
https://source.android.com/reference/hal/structtx__status__event?hl=bg
CC-MAIN-2022-40
en
refinedweb
Legacy Maemo 5 Documentation/Graphical UI Tutorial/Toolbars Toolbars: GtkWidget* hildon_find_toolbar_new (const gchar *label); GtkWidget* hildon_find_toolbar_new_with_model (const gchar *label, GtkListStore *model, gint column); the function f for set and retrieve the index in the model of the current active item on the combo. An index -1 indicates no active items in both functions. void hildon_find_toolbar_set_active (HildonFindToolbar *toolbar, gint index); gint hildon_find_toolbar_get_active (HildonFindToolbar *toolbar); To get the index of the most recently added item in the toolbar, use the following function: gint32 hildon_find_toolbar_get_last_index (HildonFindToolbar *toolbar); Alternatively, you can use a GtkTreeIter to reference the current active item. \begin{verbatim} void hildon_find_toolbar_set_active_iter (HildonFindToolbar *toolbar, GtkTreeIter *iter); gboolean hildon_find_toolbar_get_active_iter (HildonFindToolbar *toolbar, GtkTreeIter *iter); After create and properly set up the toolbar is necessary to attach it to any window. HildonWindow provides the following function to attach a toolbar: void hildon_window_add_toolbar (HildonWindow *self, GtkToolbar *toolbar); In case you need to add a common toolbar to all windows in your program, HildonProgram provides the following function to set and retrtieve a common toolbar to each window registered into the curretn program: void hildon_program_set_common_toolbar (HildonProgram *self, GtkToolbar *toolbar); GtkToolbar* hildon_program_get_common_toolbar (HildonProgram *self); Here a simple example that shows how to deal with a HildonFindToolbar. Example 4.1. Using a Find Toolbar #include <hildon/hildon.h> gboolean on_history_append (HildonFindToolbar *toolbar, gpointer user_data) { gchar *item; GtkTreeIter iter; gint index ; GtkListStore *list; /* Get last added index */ index = hildon_find_toolbar_get_last_index (toolbar); /* Get the inner list */ g_object_get (G_OBJECT (toolbar), "list",&list, NULL); /* Get the item */ gtk_tree_model_get_iter_from_string (GTK_TREE_MODEL (list) , &iter, g_strdup_printf ("%d",index)); gtk_tree_model_get (GTK_TREE_MODEL (list), &iter, 0, &item, -1); g_debug ("ADDED TO THE LIST : %s", item); return FALSE; } int main (int argc, char **argv) { HildonProgram *program; GtkWidget *window; GtkWidget *toolbar = NULL; GtkListStore *store; GtkTreeIter iter; hildon_gtk_init (&argc, &argv); program = hildon_program_get_instance (); window = hildon_window_new (); hildon_program_add_window (program, HILDON_WINDOW (window)); /* Create and populate history list model */ store = gtk_list_store_new (1, G_TYPE_STRING); gtk_list_store_append (store, &iter); gtk_list_store_set (store, &iter, 0, "Foo", -1); gtk_list_store_append (store, &iter); gtk_list_store_set (store, &iter, 0, "Bar", -1); gtk_list_store_append (store, &iter); gtk_list_store_set (store, &iter, 0, "Baz", -1); /* Create find toolbar */ toolbar = hildon_find_toolbar_new_with_model ("Find", store, 0); /* Set item on index 0 as the current active*/ hildon_find_toolbar_set_active (HILDON_FIND_TOOLBAR (toolbar), 0); /* Attach a callback to handle "history-append" signal */ g_signal_connect_after (G_OBJECT (toolbar), "history-append", G_CALLBACK (on_history_append), NULL); /* Attach toolbar to window */ hildon_window_add_toolbar (HILDON_WINDOW (window), GTK_TOOLBAR (toolbar)); gtk_widget_show_all (GTK_WIDGET (window)); gtk_main (); return 0; } In the example above a callback is set to handle the signal "history-append", emitted by the toolbar when a new item is added to the history. Other signals like, for example, "history-append" could trigger additional actions when emitted. Apart from the property which stores the internal list, other properties are available such as "max-characters" , which set the maximum length of the search string. To a complete description of the signals and properties available read the Hildon reference manual. Edit toolbars Edit toolbars are implemented by the widget HildonEditToolbar. This widget is a toolbar to be used as main control and navigation interface for the edit UI mode. The toolbar contains a label and two buttons, being one of them an arrow pointing backwards and the other a button to perform a certain action. It also display a label which explain to the users the action that the button performs and give intructions to user on how to perform the action properly. A typical example could be a view to delete several items in a list. The label would advice the user to select the items to delete and those items are deleted by clicking the button. Typically, the toolbar is attached to an edit view, meaning a HildonStackableWindow used in the program to perform a certain editing action. The action to be performing by clicking the action button should be implemented in a callback to handle the signal "button-clicked", shown in the example. To create a new HildonEditToolbar you should use: GtkWidget* hildon_edit_toolbar_new (void); GtkWidget* hildon_edit_toolbar_new_with_text (const gchar *label, const gchar *button); The second creation function allows to set the two labels of the widget. If you use the simple creation function, you should set the labels by using the following functions. void hildon_window_add_toolbar (HildonWindow *self, GtkToolbar *toolbar); Once the edit toolbar is configured you need to attach it to a window by using: void hildon_window_add_toolbar (HildonWindow *self, GtkToolbar *toolbar); As was said, the action to be done by clicking the button should be mplemented in a callback attached to the signal "button-clicked". These widgets define also another signal, "arrow-clicked", emitted when users click the arrow. Typically, the callback for the signal "arrow-clicked" should destroy the current edit view. The example below shows how to use an edit toolbar. This example builds a main window showing a list of items and a button to go to a edit view where users can select several items and deleted by clicking the action button of the toolbar. @@IMAGE@@: EDIT WINDOW FROM EXAMPLE BELLOW Example 4.2. Using an Edit Toolbar [Desktop Entry] Name=TimeOut Applet Comment=Execute an action at a given time Type=default X-Path=lib-timeout-home-applet.so The most of stuff related to HildonEditToolbar is in the function edit_window. This function creates a edit view, meaning that a new HildonStackableWindow is created showing a treeview in which users can select several items. Note that the edit window is set to fullscreen and thus displaying the HildonEditToolbar obscures the usual window controls. Using GtkToolbars in Hildon applications You can use the widget GtkToolbar as you would use it in a GTK+ application, but take the following considerations into account: - Use GtkToolbars when only one content item is visible (e.g. when editing a single image or editing a single email). - Provide no menu commands or settings for hiding or showing toolbar. The toolbar is always shown in the view where you decided to put it. Like the others toolbars, a GtkToolbar should be attached to a window by using: void hildon_window_add_toolbar (HildonWindow *self, GtkToolbar *toolbar); The following example shows how to use a GtkToolBar. The use is very close to how it would be use in a normal GTK+ application. @@IMAGE@@: FROM EXAMPLE BELLOW Example 4.3. Using a GtkToolbar in a Hildon application #include <hildon/hildon.h> void on_clicked (GtkToolButton *toolbutton, gint index) { g_debug ("Index of clicked item : %d", index); } int main (int argc, char **argv) { HildonProgram *program; GtkWidget *window; GtkWidget *toolbar; GtkToolItem *toolitem; hildon_gtk_init (&argc, &argv); program = hildon_program_get_instance (); window = hildon_window_new (); hildon_program_add_window (program, HILDON_WINDOW (window)); /* Create a toolbar */ toolbar = gtk_toolbar_new (); /* Add items to the toolbar */ toolitem = gtk_tool_button_new ( gtk_image_new_from_stock(GTK_STOCK_HOME, HILDON_ICON_PIXEL_SIZE_TOOLBAR), "Home"); g_signal_connect (G_OBJECT (toolitem), "clicked", G_CALLBACK (on_clicked), (gpointer) 0); gtk_toolbar_insert (GTK_TOOLBAR (toolbar), toolitem, 0); toolitem = gtk_tool_button_new ( gtk_image_new_from_stock(GTK_STOCK_GO_BACK, HILDON_ICON_PIXEL_SIZE_TOOLBAR), "Back"); g_signal_connect (G_OBJECT (toolitem), "clicked", G_CALLBACK (on_clicked), (gpointer) 1); gtk_toolbar_insert (GTK_TOOLBAR (toolbar), toolitem, 1); toolitem = gtk_tool_button_new ( gtk_image_new_from_stock(GTK_STOCK_GO_FORWARD, HILDON_ICON_PIXEL_SIZE_TOOLBAR), "Forward"); g_signal_connect (G_OBJECT (toolitem), "clicked", G_CALLBACK (on_clicked), (gpointer) 2); gtk_toolbar_insert (GTK_TOOLBAR (toolbar), toolitem, 2); /* Add toolbar to the window */ hildon_window_add_toolbar (HILDON_WINDOW (window), GTK_TOOLBAR(toolbar)); gtk_widget_show_all (GTK_WIDGET (window)); gtk_main (); return 0; }
https://wiki.maemo.org/index.php?title=Legacy_Maemo_5_Documentation/Graphical_UI_Tutorial/Toolbars&oldid=15510
CC-MAIN-2022-40
en
refinedweb
From: Gary Powell (Gary.Powell_at_[hidden]) Date: 2001-06-07 19:42:34 >> > -------- > > The library defines a class nil_t and an object nil. While it's nice to > have these around, they are not mentioned in the documentation, and I'm > not sure they belong in the function library. And I think the library's > design works fine without these -- it seems more a matter of personal > taste that they are included. They exist because we can't easily have: boost::function<> f; f = 0; This requires a "const int" overload of operator=, which allows the nonsensical: f = 7; Even worse, we can't detect this abuse at compile-time, so it becomes a runtime issue. nil/nil_t is meant to be freestanding and has almost nothing to do with boost::function. It is included as part of function because it is used there, but I think should be its own "library" (insofar as 6 lines constitutes a library...). << a nil and a const_nil are used by the tuples library (coming up for review June 17th!) and so boost would benefit if this was indeed a stand-alone include. The tuple versions are in their own namespace, so no conflicts unless you totally open up the namespaces with a couple of "using" statements. Yours, -gary- gary.powell_at_[hidden] Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2001/06/12978.php
CC-MAIN-2022-40
en
refinedweb
Containers. In this post, you will learn how to authenticate with Docker Hub to pull images from private repositories using both Amazon ECS and Amazon EKS to avoid operational disruptions as a result of the newly imposed limits and control access to your private container images. If you are not already using Docker Hub, you may consider Amazon Elastic Container Registry (Amazon ECR) as a fully managed alternative with native integrations to your AWS Cloud environment. Docker Hub authentication with Amazon ECS Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that enables you to specify the container images you want to run as part of your application in a resource called a task definition. You can store your Docker Hub username and password as a secret in AWS Secrets Manager, and leverage integration with AWS Key Management Service (AWS KMS) to encrypt that secret with a unique data key that is protected by an AWS KMS customer master key (CMK). You can then reference the secret in your task definition and assign the appropriate permission to retrieve and decrypt the secret by creating a task execution role in AWS Identity and Access Management (IAM). Solution overview: The diagram below is a high-level illustration of the solution covered in this post to authenticate with Docker Hub using Amazon ECS. By following the steps in this section of the post, you will create: - A customer master key and an alias in AWS KMS to encrypt your secret - A secret in AWS Secrets Manager to store your Docker Hub username and password - An ECS task execution role to give your task permission to decrypt and retrieve your secret - An ECS cluster and VPC resources using the Amazon ECS CLI - An Amazon ECS service running one instance of a task on your cluster using the AWS Fargate launch type Prerequisites: For this solution, you should have the following prerequisites: - An AWS account - The AWS CLI - The Amazon ECS CLI - A Docker Hub account with a private repository Push an image to a private Docker Hub repository (optional): If you want to follow the specific configurations of this post, you can pull the official Docker build for NGINX, tag the image with the name of your private repository, and push it to your Docker Hub account. Replace the <USER_NAME> variable with your Docker Hub username, the <REPO_NAME> variable with the name of your private repository, and the <TAG_NAME> variable with the tag you used. docker pull nginx docker tag nginx:latest <USER_NAME>/<REPO_NAME>:<TAG_NAME> docker push <USER_NAME>/<REPO_NAME>:<TAG_NAME> Otherwise, feel free to use the Docker image of your choice, but note that you may need to make some minor changes to the commands and configurations used in this post. Create an AWS KMS CMK and Alias: Start by creating a customer master key (CMK) and an alias in AWS KMS using the AWS CLI. This CMK will be leveraged by AWS Secrets Manager to perform envelope encryption on the unique data key it uses to encrypt your individual secrets. An alias acts as a display name for your CMK and is easier to remember than the key ID. An alias can also help simplify your applications. For example, if you use an alias in your code, you can change the underlying CMK that your code uses by associating the given alias with a different CMK. aws kms create-key --query KeyMetadata.Arn --output text The Amazon Resource Name (ARN) of the newly created key should be displayed as the output of the previous command. Replace the <CMK_ARN> variable with that ARN and the <CMK_ALIAS> variable with the alias you with to use: aws kms create-alias --alias-name alias/<CMK_ALIAS> --target-key-id <CMK_ARN> You will also need the ARN of the CMK when creating a trust policy document in an upcoming step. Creating a secret in AWS Secrets Manager: At this point you can proceed to create a secret in AWS Secrets Manager to securely store your Docker Hub username and password. Replace the <USER_NAME> variable with your Docker Hub username, the <PASSWORD> variable with your Docker Hub password, and <CMK_ALIAS> variable with the alias of your CMK from the previous step. We also recommend naming secrets in a hierarchical manner to make them easier to manage. Note that the secret name in the following command is prepended with a dev/ prefix; this stores your secret in a virtual dev folder: aws secretsmanager create-secret \ --name dev/DockerHubSecret \ --description "Docker Hub Secret" \ --kms-key-id alias/<CMK_ALIAS> \ --secret-string '{"username":"<USER_NAME>","password":"<PASSWORD>"}' The ARN of the secret should be displayed as the output of the previous command. You will need to reference this ARN when creating a trust policy document in an upcoming step. Create a task execution role in IAM: First you will need to create a trust policy document to specify the principal that can assume the role, which in this case is an ECS task: cat << EOF > ecs-trust-policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "ecs-tasks.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } EOF Next, create a permission policy document that allows the ECS task to decrypt and retrieve the secret created in AWS Secrets Manager. Replace the <SECRET_ARN> and <CMK_ARN> variables with the ARNs of the secret and CMK created in previous steps: cat << EOF > ecs-secret-permission.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "kms:Decrypt", "secretsmanager:GetSecretValue" ], "Resource": [ "<SECRET_ARN>", "<CMK_ARN>" ] } ] } EOF You can now create the ECS task execution role using the AWS CLI. Note that you are referencing the trust policy document created in a previous step. Modify the directory path as needed to properly locate the file: aws iam create-role \ --role-name ecsTaskExecutionRole \ --assume-role-policy-document To add foundational permissions to other AWS service resources that are required to run Amazon ECS tasks, attach the AWS managed ECS task execution role policy to the newly created role: aws iam attach-role-policy \ --role-name ecsTaskExecutionRole \ --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy Finally, add an inline permission policy allowing your task to retrieve your Docker Hub username and password from AWS Secrets Manager. Note that you are referencing the permission policy document created in a previous step. Modify the directory path as needed to properly locate the file: aws iam put-role-policy \ --role-name ecsTaskExecutionRole \ --policy-name ECS-SecretsManager-Permission \ --policy-document Configure the ECS CLI (optional): The Amazon ECS Command Line Interface (ESC CLI) provides high-level commands that simplify creating an Amazon ECS cluster and the AWS resources required to set it up. After installing the ECS CLI, you can optionally configure your AWS credentials in a named ECS profile using the ecs-cli configure profile command. Profiles are stored in the ~/.ecs/credentials file. ecs-cli configure profile \ --access-key <AWS_ACCESS_KEY_ID> \ --secret-key <AWS_SECRET_ACCESS_KEY> \ —profile-name <PROFILE_NAME> You can also specify which profile to use by default with the ecs-cli configure profile default command. If you don’t configure an ECS profile or set environment variables, the default AWS profile stored in the ~/.aws/credentials file will be used. You can additionally configure the ECS cluster name, the default launch type, and the AWS Region to use with the ECS CLI with the ecs-cli configure command. The <LAUNCH_TYPE> variable can be set to either FARGATE or EC2. ecs-cli configure \ --cluster <CLUSTER_NAME> \ --default-launch-type <LAUNCH_TYPE> \ --config-name <CONFIG_NAME> \ --region <AWS_REGION> These values can also be defined or overridden using the command flags specified in the following steps. Create an Amazon ECS cluster: Create an Amazon ECS cluster using the ecs-cli up command, specifying the cluster name you wish to use, the AWS Region to use ( us-east-1 for example), and FARGATE as the launch type: ecs-cli up \ --cluster <CLUSTER_NAME> \ --region us-east-1 \ --launch-type FARGATE \ By using the FARGATE launch type, you are enlisting AWS Fargate to manage compute resources on your behalf so that you don’t need to provision your own EC2 container instances. By default, the ECS CLI will also launch an AWS CloudFormation stack to create a new VPC with an attached Internet Gateway, 2 public subnets, and a security group. You can also provide your own resources using flag options with the above command. Configure the Security Group: Once the ECS cluster has been successfully created, you should see the VPC and subnet IDs displayed in the terminal. Next, retrieve a JSON description of the newly created security group and make note of the security group ID or GroupId. Replace the <VPC_ID> variable with the ID of the newly created VPC. aws ec2 describe-security-groups \ --filters Name=vpc-id,Values=<VPC_ID> \ --region us-east-1 Add an inbound rule to the security group allowing HTTP traffic from any IPv4 address. Replace the <SG_ID> variable with the GroupId retrieved in the previous step. This inbound rule will enable you to validate that the NGINX server is running in your task and that the private image has been successfully pulled from Docker Hub. aws ec2 authorize-security-group-ingress \ --group-id <SG_ID> \ --protocol tcp \ --port 80 \ --cidr 0.0.0.0/0 \ --region us-east-1 Create an Amazon ECS service: An Amazon ECS service enables you to run and maintain multiple instances of a task definition simultaneously. The ECS CLI allows you to create a service using a Docker compose file. Create the following docker-compose.yml file, which defines a web container that exposes port 80 for inbound traffic to the web server. To reference the NGINX image previously pushed to your private Docker Hub repository, replace the <USER_NAME> variable with your Docker Hub username, the <REPO_NAME> variable with the name of your private repository, and the <TAG_NAME> variable with the tag you used. cat << EOF > docker-compose.yml version: "3" services: web: image: <USER_NAME>/<REPO_NAME>:<TAG_NAME> ports: - 80:80 EOF You will also need to create the following ecs-params.yml file to specify additional parameters for your service specific to Amazon ECS. Note that the services field bellow corresponds to the services field in the Docker Compose file above, matching the name of the container to run. When the ECS CLI creates a task definition from the compose file, the fields of the web service will be merged into the ECS container definition, including the container image it will use and the Docker Hub repository credentials it will need to access it. Replace the <SECRET_ARN> variable with the ARN of the AWS Secrets Manager secret you created earlier. Replace the <SUB_1_ID>, <SUB_2_ID>, and <SG_ID> variables with the IDs of the 2 public subnets and the security group that were created with the ECS cluster. cat << EOF > ecs-params.yml version: 1 task_definition: task_execution_role: ecsTaskExecutionRole ecs_network_mode: awsvpc task_size: mem_limit: 0.5GB cpu_limit: 256 services: web: repository_credentials: credentials_parameter: "<SECRET_ARN>" run_params: network_configuration: awsvpc_configuration: subnets: - "<SUB_1_ID>" - "<SUB_2_ID>" security_groups: - "<SG_ID>" assign_public_ip: ENABLED EOF Next, create the ECS service from your compose file using the ecs-cli compose service up command. This command will look for your docker-compose.yml and ecs-params.yml in the current directory. Replace the <CLUSTER_NAME> variable with the name of your ECS cluster and the <PROJECT_NAME> variable with the desired name of your ECS service. ecs-cli compose \ --project-name <PROJECT_NAME> \ --cluster <CLUSTER_NAME> \ service up \ --launch-type FARGATE You can now view the web container that is running in the service with ecs-cli compose service ps command. ecs-cli compose \ --project-name <PROJECT_NAME> \ --cluster <CLUSTER_NAME> \ service ps By navigating to the IP address listed on port 80 you should be able view the default NGINX welcome page, validating that your task was able to successfully pull the container image from your private Docker Hub repository using your credentials for authentication. Cleanup: Update the desired count of the service to 0and then delete the service using the ecs-cli compose service down command: ecs-cli compose \ --project-name <PROJECT_NAME> \ --cluster <CLUSTER_NAME> \ service down Delete the AWS CloudFormation stack that was created by ecs-cli up and the associated resources using the ecs-cli down command: ecs-cli down --cluster <CLUSTER_NAME> Docker Hub Authentication with Amazon EKS Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that enables you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. You can store your Docker Hub username and password as a Kubernetes secret stored in etcd, the highly available key value store used for all cluster data, and leverage integration with AWS Key Management Service (AWS KMS) to perform envelope encryption on that Secret with your own Customer Master Key (CMK). When Secrets are stored using the Kubernetes Secrets API, they are encrypted with a Kubernetes-generated data encryption key (DEK), which is then further encrypted using the CMK. You can then create a service account that references the secret and associate that service account with the pods you launch as part of a deployment, enabling the kubelet node agent to pull the private image from Docker Hub on behalf of the pods. Solution Overview: The diagram below is a high-level illustration of the solution covered in this post to authenticate with Docker Hub using Amazon EKS. By following the steps in this section of the post, you will create: - An Amazon EKS cluster with a managed node group of worker nodes - A Docker Registry secret that is encrypted and stored in etcd - A service account that serves as an identity for processes running in your pods and references the ImagePullSecret - A deployment that declaratively specifies a ReplicaSet of pods to which the service account is associated. - A LoadBalancer service that exposes the underlying pods behind the DNS endpoint of an Elastic Load Balancer Prerequisites: In addition to the prerequisites outlined in the previous section, you will also need: - The eksctl command line interface tool for creating your EKS cluster - The kubectl command line interface tool for creating and managing Kubernetes objects within your EKS cluster For the purposes of this solution, you can continue use the official Docker build for NGINX that was pushed to your private repository in the previous section. Otherwise, feel free to use the Docker image of your choice, but be aware that you may need to make some minor changes to the commands and configurations used in this post. You will also need a customer master key (CMK) with an associated alias in AWS KMS to perform envelope encryption on your Kubernetes secret. You can continue to use the CMK created in the previous section or create a new one. Create an Amazon EKS cluster: To get started, create a configuration file to use with eksctl, the official CLI for Amazon EKS. This configuration file specifies details about the Kubernetes cluster you want to create in Amazon EKS, as distinct from the default parameters that eksctl will use otherwise. Note that, in addition to specifying the cluster name and region ( us-east-1), the file also specifies a managed node group, which automates the provisioning and lifecycle management of the Amazon EC2 instances that will act as your cluster’s worker nodes. These managed nodes will be provisioned as part of an Amazon EC2 Auto Scaling group that is managed for you by Amazon EKS. The ARN of the CMK you created in AWS KMS is also referenced and will be used to encrypt the data encryption keys (DEK) generated by the Kubernetes API server in the EKS control plane. cat << EOF > eks-dev-cluster.yaml --- apiVersion: eksctl.io/v1alpha5 kind: ClusterConfig metadata: name: eks-dev region: us-east-1 managedNodeGroups: - name: eks-dev-nodegroup desiredCapacity: 2 # KMS CMK for the EKS cluster to use when encrypting your Kubernetes secrets secretsEncryption: keyARN: <CMK_ARN> EOF You can retrieve the ARN of the CMK ( CMK_ARN) by specifying the <CMK_ALIAS> in the following command: aws kms describe-key --key-id alias/<CMK_ALIAS> | grep Arn Next, use the eksctl create cluster command to initiate the creation of your Kubernetes cluster in Amazon EKS according to the specifications in the configuration file: eksctl create cluster -f eks-dev-cluster.yaml This command will launch an AWS CloudFormation stack under the hood to create a fully managed EKS control plane, a dedicated VPC, and two Amazon EC2 worker nodes using the official Amazon EKS AMI. Create a new namespace: It’s generally considered best practice to deploy your applications into namespaces other than kube-system or default to better manage the interaction between your pods, so create a dev namespace in your cluster using the Kubernetes command-line tool, kubectl. kubectl create ns dev Create a Docker Registry secret: Now, create a Docker Registry secret, replacing the <USER_NAME>, <PASSWORD>, and kubectl create secret docker-registry docker-secret \ --docker-server="" \ --docker-username="<USER_NAME>" \ --docker-password="<PASSWORD>" \ --docker-email="<EMAIL>" \ --namespace="dev" When you create this secret the Kubernetes API server in the EKS control plane generates a Data Encryption Key (DEK) locally and uses it to encrypt the plaintext payload in the secret. The Kubernetes API server then calls AWS KMS to encrypt the DEK with the CMK referenced in your cluster configuration file above and stores the DEK-encrypted secret in etcd. When a pod wants to use the secret, the API server reads the encrypted secret from etcd and decrypts the secret with the DEK. Use the following command to verify that your secret was created. kubectl get secrets docker-secret --namespace=dev Create a service account: Next, create a service account in the same dev namespace to provide an identity for processes that will run in your pods.. Verify the creation of the service account using the following command. kubectl get sa dev-sa --namespace=dev Create a deployment: Now, create a configuration file that specifies the details of a deployment, which will create three replicated pods, each running a container built from the NGINX image stored in your private Docker Hub repository. Note that the service account created above is also referenced as part of the pod template specification. For the container image, replace the <USER_NAME> variable with your Docker Hub username, the <REPO_NAME> variable with the name of your private repository, and the <TAG_NAME> variable with the tag you used. The image pull policy is set to Always in order to force the kubelet to pull the image from Docker Hub each time it launches a new container rather than using a locally cached copy, requiring authentication with the Docker Registry secret created earlier. cat <<EOF > nginx-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment namespace: dev labels: app: nginx spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: serviceAccountName: dev-sa containers: - name: nginx image: <USER_NAME>/<REPO_NAME>:<TAG_NAME> imagePullPolicy: Always ports: - containerPort: 80 EOF Apply the configuration file and create the deployment in your EKS cluster with the following command. kubectl apply -f nginx-deployment.yaml Create a LoadBalancer service: Finally, provision an external LoadBalancer type service that exposes the pods of your deployment. kubectl expose deployment nginx-deployment \ --namespace=dev \ --type=LoadBalancer \ --name=nginx-service Get the DNS endpoint of the Elastic Load Balancer associated with your service. kubectl get service/nginx-service --namespace=dev Using your browser, navigate to the DNS endpoint specified in the EXTERNAL-IP output field. Verify that you can view the default NGINX welcome page and that the pods in your deployment were able to successfully pull the container image from your Private Docker Hub repository using your credentials for authentication. Cleanup: Delete your service and the associated Elastic Load Balancer. kubectl delete service nginx-service --namespace=dev Use eksctl delete cluster command to delete your EKS cluster. eksctl delete cluster eks-dev Summary: In this post, you created two clusters using both Amazon ECS and Amazon EKS, and configured them to pull a container image from a private Docker Hub repository. Integrations with AWS Key Management Service enable you to easily implement envelope encryption for your Docker Hub credentials. By authenticating with Docker Hub, you can avoid the newly introduced rate limits for container image pulls when using your Pro or Team plan, and private repositories help you maintain access control standards for sensitive container images.
https://aws.amazon.com/blogs/containers/authenticating-with-docker-hub-for-aws-container-services/
CC-MAIN-2022-40
en
refinedweb
Terence Parr wrote: > Hi. sorry for such delay. A simple example: public class Contractor { private String name; private String description; private Set projects; //objects of type Project // constructor, getters, setters ommited } public class Project { private String name; private String description; private Contractor contractor; private Set tasks; //objects of type Task } public class Task { private Calendar start; private Calendar finish; private String todo; } With any mature O/R tool you can do something like: Contractor c = ormSession.load( Contractor.class, new Long( 1 ) ); passing only this object into view allows the view to access any descendant: var model = { contractor: c } cocoon.sendPage( "contractorsProjects", model ); same for: cocoon.sendPage( "contractorsProjectsWithTasks", model ); The model is lazy loaded meaning tasks will never load if you do not reference them in your view. Without any change to your controller you can change your view from displaying project list to project -> task list. Now imagine you want : * some properties pretty printed (project's description could be pretty printed). Moreover: the user in his/hers preferences chooses if he/she wants to pretty print projects' descriptions. * in some views (only a part of them) some dates rendered red if date is in past. 1. I do not feel like introducing these kinds of methods: Project: public Node getPrettyPrintedDescription(); Task: public boolean hasStartDateExpired(); public Node getStartDateRed(); public boolean hasFinishDateExpired(); public Node getFinishDateRed(); public Node getPrettyPrintedTodo(); 2. I also do not feel like implementing some DTOs. DTOs is created with some specific dataset in mind. So in this case DTOs with contractor data and projects data or another DTO with contractor, projects and tasks. Few more classes to implement and hell with view changes. If you are able to plug rendering logic into the view itself you call your rendering tag/function on the property that you specify. The model is not having this knowledge and that is the cause of the problem. Is my use case any clearer now? my regards -- Leszek Gawron lgawron@mobilebox.pl Project Manager MobileBox sp. z o.o. +48 (61) 855 06 67 mobile: +48 (501) 720 812 fax: +48 (61) 853 29 65
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200411.mbox/%3C4197AED3.5030006@mobilebox.pl%3E
CC-MAIN-2017-26
en
refinedweb
Could there be a way to restrict who can actually use the air drop in game? i think it's possible but who want you restrict ? Working nicely, thx :) Does this still work? I see that last post was July 12. @barackuse for the most part its worked for me, i don't get a cut key or altitude selection screen like i should, it just sets it at 500 m and drops me at the location i want like it should, just no altitude selection or cut-key but that could be my fault nice idea :) ran into trouble seems once I started adding lines to my description.ext the Altitude box no longer appears. The only way I could set Altitude was inside of atm_airdrop.sqf file. soon as I add virtual ammo box to description.ext starts to conflict, Altitude box disappears #include "VAS\menu.hpp" any idea on a fix for this? Hello I'm new to adding mods/addons editing maps..etc I'm having problems adding =ATM= Airdrop to my server..someone help!!! I did exactly what the instructions say but as soon as I save the map/mission it kicks me out..with this line "Include file C:\Users\FlyingT\Documents\Arma 3 - Other Profiles\FlyingTomato\missions\co70_invade_annex_FT.Altis\ATM_airdrop\diaglog.hpp not found." HELP ME!! :) also i noticed that in the package i also got the fallowing... =ATM=.paa with no instructions on what to do with it maybe thats the problem idk :) anyone plz help with more specific instructions for this noob..sweg ty. This script has a conflict with ACE version 3.2.1. Upon jumping and deploying the parachute as soon as you reach an altitude near the ground say within 200m the parachute you had on is cut and the reserve parachute is deployed which is the Non-steerable parachute that comes with ACE, the parachute cutting continues until you hit the ground and die. I did a test with the ACE mod disabled and the script worked normally. @pokertour Issues with altitude selection not working or cut away (options still show just not working)and removing backpacks (using VAS 2.9 & =BTC= Revive) on a modified invade and Annex 2.0 server! Update: Fixed my issues by moving the ATM folder into the root of the mission (i like to point mine to a Script folder and have all scripts in there) We just recently placed this mod on our server but have had complaints that Flares don't actually hook onto the person while parachuting day or night. We personally can get around without it but many people complained the feature is broken even though there is now Halo Jumping which was requested constantly Total comments : 12, displayed on page: 12 #include "ATM_airdrop\dialog.hpp" class CfgSounds { sounds[] = {Vent,Vent2,Para}; class Vent {name="Vent";sound[]={ATM_airdrop\data\Vent.ogg,db-11,1.0};titles[] = {};}; class Vent2 {name="Vent2";sound[]={ATM_airdrop\data\Vent2.ogg,db-11,1.0};titles[] = {};}; class Para {name="Para";sound[]={ATM_airdrop\data\parachute.ogg,db-11,1.0};titles[] = {};}; }; this addAction["<t color='#ff9900'>HALO jump</t>", "ATM_airdrop\atm_airdrop!
http://www.armaholic.com/page.php?id=21307
CC-MAIN-2017-26
en
refinedweb
The objective of this post is to explain how to parse JSON data using the ArduinoJson library. Introduction In this post, we will create a simple program to parse a JSON string simulating data from a sensor and print it to the serial port. We assume that the ESP8266 libraries for the Arduino IDE were previously installed. You can check how to do it here. In order to avoid having to manually decode the string into usable values, we will use the ArduinoJson library, which provides easy to use classes and methods to parse JSON. This very useful library allows both encoding and decoding of JSON, is very efficient and works on the ESP8266. It can be obtained via library manager of the Arduino IDE, as shown in figure 1. Figure 1 – Installation via Arduino IDE library manager. Setup First of all, we will include the library that implements the parsing functionality. #include <ArduinoJson.h> Since this library has some tricks to avoid problems while using it, this post will just show how to parse a locally created string, and thus we will not use WiFi functions. So, we will just start a Serial connection in the setup function. void setup() { Serial.begin(115200); Serial.println(); //Clear some garbage that may be printed to the serial console } Main loop On our main loop function, we will declare our JSON message, that will be parsed. The \ characters are used to escape the double quotes on the string, since JSON names require double quotes [1]. char JSONMessage[] ="{\"SensorType\":\"Temperature\", \"Value\": 10}"; This simple structure consists on 2 name/value pairs, corresponding to a sensor type and a value for that sensor. For the sake of readability, the structure is shown bellow without escaping characters. { "SensorType" : "Temperature", "Value" : 10 } Important: The JSON parser modifies the string [2] and thus its content can’t be reused. That’s the reason why. Check here some definitions about variable scopes. Then, we need to declare an object of class StaticJsonBuffer. It will correspond to a preallocated memory pool to store the object tree and its size is specified in a template parameter (the value between <> bellow), in bytes [3]. StaticJsonBuffer<300> JSONBuffer; In this case, we declared a size of 300 bytes, which is more than enough for the string we want to parse. The author of the library specifies here 2 approaches on how to determine the buffer size. Personally, I prefer to declare a buffer that has enough size for the expected message payload and check for errors in the parsing step. The library also supports dynamic memory allocation but that approach is discouraged [4]. Here we can check the differences between static and dynamic JsonBuffer. Next, we call the parseObject method on the StaticJsonBuffer object, passing the JSON string as argument. This method will return a reference to an object of class JsonObject [5]. You can check here the difference between a pointer and a reference. JsonObject& parsed= JSONBuffer.parseObject(JSONMessage); To check if the JSON was successfully parsed, we can call the success method on the JsonObject instance [6]. if (!parser.success()) { Serial.println("Parsing failed"); return; } After that, we can use the subscript operator to obtain the parsed values by their names [7]. Check here some information about the subscript operator. In other words, this means that we use square brackets and the the names of the parameters to obtain their values, as shown bellow. const char * sensorType = parsed["SensorType"]; int value = parsed["Value"];. void loop() { Serial.println("—————— -"); char JSONMessage[] = " {\"SensorType\": \"Temperature\", \"Value\": 10}"; //Original message Serial.print("Initial string value: ");.println(sensorType); Serial.println(value); Serial.print("Final string value: "); for (int i = 0; i < 31; i++) { //Print the modified string, after parsing Serial.print(JSONMessage[i]); } Serial.println(); delay(5000); } Figure 2 illustrates the result printed to the serial console. Figure 2 – Output of the program in the Arduino IDE serial console. As seen by the previous explanation, this library has some particularities that we need to take in consideration when using it, to avoid unexpected problems. Fortunately, the github page of this library is very well documented, and thus here is a section describing how to avoid pitfalls. Also, I strongly encourage you to check the example programs that come with the installation of the library, which are very good for understanding how it works. Final notes This post shows how easily is to parse JSON in the ESP8266. Naturally, this allows to change data in a well known structured format that can be easily interpreted by other applications, without the need for implementing a specific protocol. Since microcontrollers and IoT devices have many resource constraints, JSON poses much less overhead than, for example, XML, and thus is a very good choice. Nevertheless, we need to keep in mind that for some intensive applications, even JSON can pose an overhead that is not acceptable, and thus we may need to go to byte oriented protocols. But, for simple applications, such as reading data from a sensor and sending it to a remote server or receiving a set of configurations, JSON is a very good alternative. References [1] [2]- the-string-isnt-read-only [3] [4] [5] [6] [7] Technical details - ESP8266 libraries: v2.3.0. - ArduinoJson library: v5.1.1. Pingback: ESP8266: Parse JSON Arrays | techtutorialsx Pingback: ESP8266: Encoding JSON messages | techtutorialsx how to make json parser to control rgb led in esp8266 from android? LikeLiked by 1 person Basically, you need to make your ESP8266 listen to incoming HTTP requests. To do so, check the ESP8266WebServer implementation in the github page of the libraries: You can then parse the content of the http request received in the ESP and use its values to control the RGB LED. From the ESP8266 perspective, it won’t matter if the request was sent from Android or a web browser. I just made a post on how to set a simple HTTP webserver on the ESP8266. It may help you creating the application you mentioned: Hope it helps. thanx antepher I will take a look LikeLiked by 1 person Pingback: ESP32: Parsing JSON | techtutorialsx Pingback: ESP32: Creating JSON message | techtutorialsx Pingback: ESP32: Sending JSON messages over MQTT | techtutorialsx
https://techtutorialsx.com/2016/07/30/esp8266-parsing-json/
CC-MAIN-2017-26
en
refinedweb
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. Working with fields.selection values Hi, How I can read or return the value chosen from a selection field? I need the user to select values from a list, and use the selected value to complete the information in other fields in the form. Thanks. Some code used to test, with error: .... 'seq_choosed': fields.selection([(11,'Product End'),(12,'Product Base')],'Sequence to use '), ... lot_sequence = self.pool.get('ir.sequence').get_id(cr, uid, seq_choosed.value, context={} ) Your question does not clearly explain what you want to do nor what you tried. You say you have an error but you do not say what is the error message (and trace). Your code example is not complete enough to understand properly. Still, here are a few pointers: - How to obtain the value chosen by the user depends on where you do it: in an on_changemethod, in a fields.function, when overriding one of the base ORM methods, etc. - Have a look at the OpenERP Technical memento, it contains examples for these various use cases. - The selection field in your example has 2 possible hardcoded integer values: 11 or 12. This is rather unusual but supported. Most selection fields use string values though. In this case when you read()or browse()a record from the model in which this field is defined, you will get the value as an int (11 or 12) or Falseif the field was left empty (if it's not required). Here's a random example based on the little information you provided: Imagine you put an on_change on your field in the XML view: <field name="seq_choosed" on_change="on_change_seq_choosed(seq_choosed)"/> <field name="seq_num"/> Then you could do something like the following: def on_change_seq_choosed(self, cr, uid, ids, seq_choosed, context=None): # on_change returns a dict result = { 'value': {} } # selected value is explicitly passed in seq_choosed parameter in XML def if seq_choosed: seq_num = self.pool.get('ir.sequence').get_id(cr, uid, seq_choosed, context=context) result['value']['seq_num'] = seq_num return result That's not a very useful example, but if you want better answers you'll have to be a lot more specific in your question ;-) Thanks a lot for your answer, and all the corrections, my English is no very good. I going to try this, and looking more about the on_change method. @Oliver Dony, for instance I want to print the value (Product Base) with key (12), how to do? If I print seq_choosed, it prints me the key of selection field. I need to print value of key chosen About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/working-with-fields-selection-values-1861
CC-MAIN-2017-26
en
refinedweb
Products and Services Downloads Store Support Education Partners About Oracle Technology Network Name: gm110360 Date: 12/05/2001 java version "1.4.0-rc" Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.0-rc-b88) Java HotSpot(TM) Client VM (build 1.4.0-rc-b88, mixed mode) In Windows, users can setup the minimal interval between consecutive mouse clickes for those two clicks to be recognized as double click. Also in Windows, it's permissible to move mouse position slightly to some amount between two clicks constituting a double click. As far as I know, there's no clear indication in Java API documentation about how a programmer benefits from the platform specific double click detection. There should be a clear definition or a facility to capture this platform (maybe) platform specific events. I'm using mouse pressed / released instead of mouse clicked to detect double clicks since the current AWT implementation seems to convert Windows double click messages to mouse pressed / released events with click count of 2. (Review ID: 136785) ====================================================================== Name: gm110360 Date: 12/05/2001 java version "1.4.0-rc" Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.0-rc-b88) Java HotSpot(TM) Client VM (build 1.4.0-rc-b88, mixed mode) Launch SwingSet2 and choose tree tab. When we double click on a node with children, it toggles between expanded and collapse states. In Windows, if we click more than twice without moving the mouse points, then the node keeps toggled. But in Swing, the toggling happens only once. This is caused by the difference in the recognization of double clicks in AWT and Windows. Windows detects double clicks done with slight mouse movement between two subsequent clicks, but Swing don't. Even though I report this as a bug specific to JTree, this discrepancy in double click detection is sometimes very annoying in most cases. So I usually utilizes a undocumented behavior that may be specific to the current AWT implementation, that is, I'm detecting double clicks by capturing mouse released or pressed events instead of mouse clicked, since the current AWT implementation takes Windows double click messages and transforms them to mouse released and pressed events with click count of 2. (Review ID: 136782) ====================================================================== EVALUATION The getClickCount() method in MouseEvent is more flexible than a double-click event would be, because it allows detection of double, triple, or any tuple click. As the code below demonstrates, it is possible to get the behavior you expect using mouseClicked() by checking that the click count is divisible by 2: // Test that checking for an event number of clicks in click count will // correctly toggle a Component import java.awt.*; import java.awt.event.*; public class BClickCountTest extends Frame implements MouseListener { public BClickCountTest () { addMouseListener(this); setSize(400, 400); } public static void main(String[] args) { BClickCountTest t = new BClickCountTest(); t.show(); } public void toggle() { if (getBackground() == Color.red) { setBackground(Color.green); } else { setBackground(Color.red); } } public void mouseClicked (MouseEvent e){ System.out.print("clickcount = " + e.getClickCount()); if (e.getClickCount() % 2 == 0) { System.out.println(", TOGGLE"); toggle(); } else { System.out.println(""); } } public void mousePressed(MouseEvent e){} public void mouseReleased(MouseEvent e){} public void mouseEntered(MouseEvent e){} public void mouseExited(MouseEvent e){} } When the mouse is double-(or quadruple-, or sextuple-, or octuple-, etc)clicked, the Frame's background color is toggled between red and green. This behaves in a similar fashion to the windows Explorer, in that each even-numbered click causes a toggle. You don't need to use mousePressed()/mouseReleased() for this. The reason that JTree doesn't behave this way is that BasicTreeUI checks if the number of clicks in the MouseEvent is equal to the desired number of clicks, not modulo: protected boolean isToggleEvent(MouseEvent event) { if(!SwingUtilities.isLeftMouseButton(event)) { return false; } int clickCount = tree.getToggleClickCount(); if(clickCount <= 0) { return false; } --> return (event.getClickCount() == clickCount); } You should be able to install your own TreeUI that uses your own isToggleEvent() which responds to every even-numbered click. However, the Windows Look & Feel should also behave this way. I'm going to pass this to Swing to fix that aspect. ###@###.### 2001-12-07 Interestingly Windows explorer does not exhibit the behavior that a click count % 2 is equivalent to a toggle, where as the tree in regedit does. I've provided a workaround that will provide windows behavior. ###@###.### 2005-1-03 18:39:19 GMT Windows doesn't appear to offer up a click count, only single and double click events. What Java generates as a quarduple click is equivalent to two double clicks on windows. So, tree should be using % here (at least for windows). The change will offer a new ui property, "Tree.useEqualsForToggle", that if flase will use %. The default is true, so that == is used. ###@###.### 2005-1-06 17:37:42 GMT After talking with Shannon we decided to make the change for all look and feels and remove the property. ###@###.### 2005-1-06 19:22:10 GMT WORK AROUND Name: gm110360 Date: 12/05/2001 Use mouse pressed / released instead of mouse clicked to detect double clicks. This works at least on Windows. ====================================================================== You can also install your own TreeUI that responds to all even-numbered clicks. ###@###.### 2001-12-07 A simpler approach is set the toggle click count to a negative number, say -1, and install a MouseListener that handles the actual toggling. Here's a rough cut at it: tree.addMouseListener(new MouseAdapter() { public void mouseClicked(MouseEvent e) { if (e.getModifiers() == InputEvent.BUTTON1_MASK && e.getClickCount() > 0 && e.getClickCount() % 2 == 0) { int row = tree.getRowForLocation(e.getX(), e.getY()); if (row != -1) { if (tree.isExpanded(row)) { tree.collapseRow(row); } else { tree.expandRow(row); } } } } }); ###@###.### 2005-1-03 18:39:19 GMT SUGGESTED FIX ------- BasicTreeUI.java ------- *** /tmp/sccs.3eaOUs Fri Dec 7 16:33:06 2001 --- BasicTreeUI.java Fri Dec 7 16:33:02 2001 *************** *** 2141,2147 **** if(clickCount <= 0) { return false; } ! return (event.getClickCount() == clickCount); } /** --- 2141,2147 ---- if(clickCount <= 0) { return false; } ! return (event.getClickCount() % clickCount == 0); } /**
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4548788
CC-MAIN-2017-26
en
refinedweb
- Form-Based Authentication - Example: Form-Based Authentication - BASIC Authentication - Example: BASIC Authentication - Configuring Tomcat to Use SSL 7.2 Example: Form-Based Authentication In this section I'll work through a small Web site for a fictional company called hot-dot-com.com. I'll start by showing the home page, then list the web.xml file, summarize the various protection mechanisms, show the password file, present the login and login-failure pages, and give the code for each of the protected resources. The Home Page Listing 7.7 shows the top-level home page for the Web application. The application is registered with a URL prefix of /hotdotcom so the home page can be accessed with the URL as shown in Figure 73. If you've forgotten how to assign URL prefixes to Web applications, review Section 4.1 (Registering Web Applications). Figure 73 Home page for hot-dot-com.com. Now, the main home page has no security protections and consequently does not absolutely require an entry in web.xml. However, many users expect URLs that list a directory but no file to invoke the default file from that directory. So, I put a welcome-file-list entry in web.xml (see Listing 7.8 in the next section) to ensure that would invoke index.jsp. Listing 7.7 index.jsp (Top-level home page) <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <TITLE>hot-dot-com.com!</TITLE> <LINK REL=STYLESHEET </HEAD> <BODY> <TABLE BORDER=5 <TR><TH CLASS="TITLE">hot-dot-com.com!</TABLE> <P> <H3>Welcome to the ultimate dot-com company!</H3> Please select one of the following: <UL> <LI><A HREF="investing/">Investing</A>. Guaranteed growth for your hard-earned dollars! <LI><A HREF="business/">Business Model</A>. New economy strategy! <LI><A HREF="history/">History</A>. Fascinating company history. </UL> </BODY> </HTML> The Deployment Descriptor Listing 7.8 shows the complete deployment descriptor used with the hotdotcom Web application. Recall that the order of the subelements within the web-app element of web.xml is not arbitraryyou must use the standard ordering. For details, see Section 5.2 (The Order of Elements within the Deployment Descriptor). The hotdotcom deployment descriptor specifies several things: URLs that give a directory but no filename result in the server first trying to use index.jsp and next trying index.html. If neither file is available, the result is server specific (e.g., a directory listing). URLs that use the default servlet mapping (i.e., servlet/ServletName) are redirected to the main home page. Requests to are redirected to. Requests directly to require no redirection. Similarly, requests to are redirected to. See Section 7.5 for information on setting up Tomcat to use SSL. URLs in the investing directory can be accessed only by users in the registered-useror administratorroles. The delete-account.jsp page in the admin directory can be accessed only by users in the administratorrole. Requests for restricted resources by unauthenticated users are redirected to the login.jsp page in the admin directory. Users who are authenticated successfully get sent to the page they tried to access originally. Users who fail authentication are sent to the login-error.jsp page in the admin directory. Listing 7.8 WEB-INF/web.xml(Complete version for hot-dot-com.com) <?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.2//EN" ""> <web-app> <!-- Give name to FinalizePurchaseServlet. This servlet will later be mapped to the URL /ssl/FinalizePurchase (by means of servlet-mapping and url-pattern). Then, that URL will be designated as one requiring SSL (by means of security-constraint and transport-guarantee). --> <servlet> <servlet-name> FinalizePurchaseServlet </servlet-name> <servlet-class> hotdotcom.FinalizePurchaseServlet </servlet-class> </servlet> <!-- A servlet that redirects users to the home page. --> <servlet> <servlet-name>Redirector</servlet-name> <servlet-class>hotdotcom.RedirectorServlet</servlet-class> </servlet> <!-- Associate previously named servlet with custom URL. --> <servlet-mapping> <servlet-name> FinalizePurchaseServlet </servlet-name> <url-pattern> /ssl/FinalizePurchase </url-pattern> </servlet-mapping> <!-- Turn off invoker. Send requests to index.jsp. --> <servlet-mapping> <servlet-name>Redirector</servlet-name> <url-pattern>/servlet/*</url-pattern> </servlet-mapping> <!-- If URL gives a directory but no filename, try index.jsp first and index.html second. If neither is found, the result is server-specific (e.g., a directory listing). --> <welcome-file-list> <welcome-file>index.jsp</welcome-file> <welcome-file>index.html</welcome-file> </welcome-file-list> <!-- Protect everything within the "investing" directory. --> <security-constraint> <web-resource-collection> <web-resource-name>Investing</web-resource-name> <url-pattern>/investing/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>registered-user</role-name> <role-name>administrator</role-name> </auth-constraint> </security-constraint> <!-- URLs of the form require SSL and are thus redirected to. --> <security-constraint> <web-resource-collection> <web-resource-name>Purchase</web-resource-name> <url-pattern>/ssl/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>registered-user</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <!-- Only users in the administrator role can access the delete-account.jsp page within the admin directory. --> <security-constraint> <web-resource-collection> <web-resource-name>Account Deletion</web-resource-name> <url-pattern>/admin/delete-account.jsp</url-pattern> </web-resource-collection> <auth-constraint> <role-name>administrator</role-name> </auth-constraint> </security-constraint> <!-- Tell the server to use form-based authentication. --> <login-config> <auth-method>FORM</auth-method> <form-login-config> <form-login-page>/admin/login.jsp</form-login-page> <form-error-page>/admin/login-error.jsp</form-error-page> </form-login-config> </login-config> </web-app> The Password File With form-based authentication, the server (container) performs a lot of the work for you. That's good. However, shifting so much work to the server means that there is a server-specific component: the assignment of passwords and roles to individual users (see Section 7.1). Listing 7.9 shows the password file used by Tomcat for this Web application. It defines four users: john (in the registered-user role), jane (also in the registered-user role), juan (in the administrator role), and juana (in the registered-user and administrator roles). Listing 7.9 install_dir/conf/tomcat-users.xml (First four users) <?xml version="1.0" encoding="ISO-8859-1"?> <tomcat-users> <user name="john" password="nhoj" roles="registered-user" /> <user name="jane" password="enaj" roles="registered-user" /> <user name="juan" password="nauj" roles="administrator" /> <user name="juana" password="anauj" roles="administrator,registered-user" /> </tomcat-users> The Login and Login-Failure Pages This Web application uses form-based authentication. Attempts by not-yet-authenticated users to access any password-protected resource will be sent to the login.jsp page in the admin directory. This page, shown in Listing 7.10, collects the username in a field named j_username and the password in a field named j_password. The results are sent by POST to a resource called j_security_check. Successful login attempts are redirected to the page that was originally requested. Failed attempts are redirected to the login-error.jsp page in the admin directory (Listing 7.11). Listing 7.10 admin/login.jsp <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <TITLE>Log In</TITLE> <LINK REL=STYLESHEET </HEAD> <BODY> <TABLE BORDER=5 <TR><TH CLASS="TITLE">Log In</TABLE> <P> <H3>Sorry, you must log in before accessing this resource.</H3> <FORM ACTION="j_security_check" METHOD="POST"> <TABLE> <TR><TD>User name: <INPUT TYPE="TEXT" NAME="j_username"> <TR><TD>Password: <INPUT TYPE="PASSWORD" NAME="j_password"> <TR><TH><INPUT TYPE="SUBMIT" VALUE="Log In"> </TABLE> </FORM> </BODY> </HTML> Listing 7.11 admin/login-error.jsp <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <TITLE>Begone!</TITLE> <LINK REL=STYLESHEET </HEAD> <BODY> <TABLE BORDER=5 <TR><TH CLASS="TITLE">Begone!</TABLE> <H3>Begone, ye unauthorized peon.</H3> </BODY> </HTML> The investing Directory The web.xml file for the hotdotcom Web application (Listing 7.8) specifies that all URLs that begin with should be password protected, accessible only to users in the registered-user role. So, the first attempt by any user to access the home page of the investing directory (Listing 7.12) results in the login form shown earlier in Listing 7.10. Figure 74 Home page for hot-dot-com.com.shows the initial result, Figure 75 shows the result of an unsuccessful login attempt, and Figure 76 shows the investing home pagethe result of a successful login. Once authenticated, a user can browse other pages and return to a protected page without reauthentication. The system uses some variation of session tracking to remember which users have previously been authenticated. Listing 7.12 investing/index.html <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <TITLE>Investing</TITLE> <LINK REL=STYLESHEET </HEAD> <BODY> <TABLE BORDER=5 <TR><TH CLASS="TITLE">Investing</TABLE> <H3><I>hot-dot-com.com</I> welcomes the discriminating investor! </H3> Please choose one of the following: <UL> <LI><A HREF="../ssl/buy-stock.jsp">Buy stock</A>. Astronomic growth rates! <LI><A HREF="account-status.jsp">Check account status</A>. See how much you've already earned! </UL> </BODY> </HTML> Figure 74 Users who are not yet authenticated get redirected to the login page when they attempt to access the investing page. Figure 75 Failed login attempts result in the login-error.jsp page. Internet Explorer users have to turn off "friendly" HTTP error messages (under Tools, Internet Options, Advanced) to see the real error page instead of a default error page. Figure 76 Successful login attempts result in redirection back to the originally requested page. The ssl Directory The stock purchase page (Listings 7.13 and 7.14) submits data to the purchase finalization servlet (Listing 7.15) which, in turn, dispatches to the confirmation page (Listing 7.16). Note that the purchase finalization servlet is not really in the ssl directory; it is in WEB-INF/classes/hotdotcom. However, the deployment descriptor (Listing 7.8) uses servlet-mapping to assign a URL that makes the servlet appear (to the client) to be in the ssl directory. This mapping serves two purposes. First, it lets the HTML form of Listing 7.13 use a simple relative URL to refer to the servlet. This is convenient because absolute URLs require modification every time your hostname or URL prefix changes. However, if you use this approach, it is important that both the original form and the servlet it talks to are accessed with SSL. If the original form used a relative URL for the ACTION and was accessed with a normal HTTP connection, the browser would first submit the data by HTTP and then get redirected to HTTPS. Too late: an attacker with access to the network traffic could have obtained the data from the initial HTTP request. On the other hand, if the ACTION of a form is an absolute URL that uses https, it is not necessary for the original form to be accessed with SSL. Second, using servlet-mapping in this way guarantees that SSL will be used to access the servlet, even if the user tries to bypass the HTML form and access the serv-let URL directly. This guarantee is in effect because the transport-guarantee element (with a value of CONFIDENTIAL) applies to the pattern /ssl/*. Figure 77 through 79 show the results. Listing 7.13 ssl/buy-stock.jsp <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <TITLE>Purchase</TITLE> <LINK REL=STYLESHEET </HEAD> <BODY> <TABLE BORDER=5 <TR><TH CLASS="TITLE">Purchase</TABLE> <P> <H3><I>hot-dot-com.com</I> congratulates you on a wise investment!</H3> <jsp:useBean <UL> <LI>Current stock value: <jsp:getProperty <LI>Predicted value in one year: <jsp:getProperty </UL> <FORM ACTION="FinalizePurchase" METHOD="POST"> <DL> <DT>Number of shares: <DD><INPUT TYPE="RADIO" NAME="numShares" VALUE="1000"> 1000 <DD><INPUT TYPE="RADIO" NAME="numShares" VALUE="10000"> 10000 <DD><INPUT TYPE="RADIO" NAME="numShares" VALUE="100000" CHECKED> 100000 </DL> Full name: <INPUT TYPE="TEXT" NAME="fullName"><BR> Credit card number: <INPUT TYPE="TEXT" NAME="cardNum"><P> <CENTER><INPUT TYPE="SUBMIT" VALUE="Confirm Purchase"></CENTER> </FORM> </BODY> </HTML> Listing 7.14 (Bean used by buy-stock.jsp) package hotdotcom; public class StockInfo { public String getCurrentValue() { return("$2.00"); } public String getFutureValue() { return("$200.00"); } } Listing 7.15 WEB-INF/classes/hotdotcom/FinalizePurchaseServlet.java package hotdotcom; import java.io.*; import javax.servlet.*; import javax.servlet.http.*; /** Servlet that reads credit card information, * performs a stock purchase, and displays confirmation page. */ public class FinalizePurchaseServlet extends HttpServlet { /** Use doPost for non-SSL access to prevent * credit card number from showing up in URL. */ public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { String fullName = request.getParameter("fullName"); String cardNum = request.getParameter("cardNum"); confirmPurchase(fullName, cardNum); String destination = "/investing/sucker.jsp"; RequestDispatcher dispatcher = getServletContext().getRequestDispatcher(destination); dispatcher.forward(request, response); } /** doGet calls doPost. Servlets that are * redirected to through SSL must have doGet. */ public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { doPost(request, response); } private void confirmPurchase(String fullName, String cardNum) { // Details removed to protect the guilty. } } Listing 7.16 (Dispatched to from FinalizePurchaseServlet.java) <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <TITLE>Thanks!</TITLE> <LINK REL=STYLESHEET </HEAD> <BODY> <TABLE BORDER=5 <TR><TH CLASS="TITLE">Thanks!</TABLE> <H3><I>hot-dot-com.com</I> thanks you for your purchase.</H3> You'll be thanking yourself soon! </BODY> </HTML> Figure 77 Warning when user first accesses FinalizePurchaseServlet when Tomcat is using a self-signed certificate. Self-signed certificates result in warnings and are for test purposes only. See Section 7.5 for details on creating them for use with Tomcat and for information on suppressing warnings for future requests. Figure 78 The stock purchase page must be accessed with SSL. Since the form's ACTION uses a simple relative URL, the initial form submission uses the same protocol as the request for the form itself. If you were concerned about overloading your SSL server (HTTPS connections are much slower than HTTP connections), you could access the form with a non-SSL connection and then supply an absolute URL specifying https for the form's ACTION. This approach, although slightly more efficient, is significantly harder to maintain. Figure 79 To protect the credit card number in transit, you must use SSL to access the FinalizePurchase servlet. Although FinalizePurchaseServlet dispatches to sucker.jsp, no web.xml entry is needed for that JSP page. Access restrictions apply to the client's URL, not to the behind-the-scenes file locations. Listing 7.17 investing/account-status.jsp <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <TITLE>Account Status</TITLE> <LINK REL=STYLESHEET </HEAD> <BODY> <TABLE BORDER=5 <TR><TH CLASS="TITLE">Account Status</TABLE> <P> <H3>Your stock is basically worthless now.</H3> But, hey, that makes this a buying opportunity. Why don't you <A HREF="../ssl/buy-stock.jsp">buy some more</A>? </BODY> </HTML> Figure 710 Selecting the Account Status link on the investing home page does not result in reauthentication, even if the user has accessed other pages since being authenticated. The system uses a variation of session tracking to remember which users have already been authenticated. The admin Directory URLs in the admin directory are not uniformly protected as are URLs in the investing directory. I already discussed the login and login-failure pages (Listings 7.10 and 7.11, Figure 74 and 75). This just leaves the Delete Account page (Listing 7.18). This page has been designated as accessible only to users in the administrator role. So, when users that are only in the registered-user role attempt to access the page, they are denied permission (see Figure 711). Note that the permission-denied page of Figure 711 is generated automatically by the server and applies to authenticated users whose roles do not match any of the required onesit is not the same as the login error page that applies to users who cannot be authenticated. A user in the administrator role can access the page without difficulty (Figure 712). Listing 7.18 admin/delete-account.jsp <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <TITLE>Delete Account</TITLE> <LINK REL=STYLESHEET </HEAD> <BODY> <TABLE BORDER=5 <TR><TH CLASS="TITLE">Delete Account</TABLE> <P> <FORM ACTION="confirm-deletion.jsp"> Username: <INPUT TYPE="TEXT" NAME="userName"><BR> <CENTER><INPUT TYPE="SUBMIT" VALUE="Confirm Deletion"></CENTER> </FORM> </BODY> </HTML> Figure 711 When John and Jane attempt to access the Delete Account page, they are denied (even though they are authenticated). That's because they belong to the registered-user role and the web.xml file stipulates that only users in the administrator role should be able to access this page. Figure 712 Once authenticated, Juan or Juana (in the administrator role) can access the Delete Account page. The Redirector Servlet Web applications that have protected servlets should always disable the invoker serv-let so that users cannot bypass security by using ServletName when the access restrictions are assigned to a custom servlet URL. In the hotdotcom application, I used the servlet and servlet-mapping elements to register the RedirectorServlet with requests to anything. This servlet, shown in Listing 7.19, simply redirects all such requests to the application's home page. Listing 7.19 WEB-INF/classes/hotdotcom/RedirectorServlet.java package hotdotcom; import java.io.*; import javax.servlet.*; import javax.servlet.http.*; /** Servlet that simply redirects users to the * Web application home page. Registered with the * default servlet URL to prevent access to servlets * through URLs that have no security settings. */ public class RedirectorServlet extends HttpServlet { public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { response.sendRedirect(request.getContextPath()); } public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { doGet(request, response); } } Unprotected Pages The fact that some pages in a Web application have access restrictions does not imply that all pages in the application need such restrictions. Resources that have no access restrictions need no special handling regarding security. There are two points to keep in mind, however. First, if you use default pages such as index.jsp or index.html, you should have an explicit welcome-file-list entry in web.xml. Without a welcome-file-list entry, servers are not required to use those files as the default file when a user supplies a URL that gives only a directory. See Section 5.7 (Specifying Welcome Pages) for details on the welcome-file-list element. Second, you should use relative URLs to refer to images or style sheets so that your pages don't need modification if the Web application's URL prefix changes. For more information, see Section 4.5 (Handling Relative URLs in Web Applications). Listings 7.20 and 7.21 (Figure 713 and 714) give two examples. Figure 713 The hotdotcom business model. Figure 714 The distinguished hotdotcom heritage. Listing 7.20 business/index.html <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <TITLE>Business Model</TITLE> <LINK REL=STYLESHEET </HEAD> <BODY> <TABLE BORDER=5 <TR><TH CLASS="TITLE">Business Model</TABLE> <P> <H3>Who needs a business model?</H3> Hey, this is the new economy. We don't need a real business model, do we? <P> OK, ok, if you insist: <OL> <LI>Start a dot-com. <LI>Have an IPO. <LI>Get a bunch of suckers to work for peanuts plus stock options. <LI>Retire. </OL> Isn't that what many other dot-coms did? </BODY> </HTML> Listing 7.21 history/index.html <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <TITLE>History</TITLE> <LINK REL=STYLESHEET </HEAD> <BODY> <TABLE BORDER=5 <TR><TH CLASS="TITLE">History</TABLE> <P> <H3>None yet...</H3> </BODY> </HTML>
http://www.informit.com/articles/article.aspx?p=26139&seqNum=3
CC-MAIN-2017-26
en
refinedweb
Difference between revisions of "RAP/RWT Cluster" Revision as of 09:54, 26 July. Transparent Session Failover (i.e. are directly or indirectly stored in a session attribute) must be serializable. This will mainly affect Display, Widgetand derived classes and Resource Limitations - Images that were created with one of the Graphics#getImage()methods (curently) cannot be serialized (bug 352929). -. - A further consequence of the missing Display#sleep()is of course that entry points must not contain the usual event loop (see below for an example).. The means to configure RWT to use this life cycle will change with the ongoing work on bug 347883 As this life cycle does not support Display#sleep(), the event loop must be removed from the entry point. Make your entry point look like the one below. public class SimpleEntryPoint implements IEntryPoint { public int createUI() { Display display = new Display(); Shell shell = new Shell( display ); shell.setBounds( 10, 10, 850, 600 );. Schedule We plan to ship the first version with milestone 1 of RAP 1.5 which will be available end of August 2011. The exact date depends on the Juno/Simultaneous Release Plan.
http://wiki.eclipse.org/index.php?title=RAP/RWT_Cluster&diff=prev&oldid=262721
CC-MAIN-2020-24
en
refinedweb
how to use sage in PyCharm on CoCalc How can I use Sage library in PyCharm IDE like on CoCalc? asked 2017-10-24 04:15:47 -0500 updated 2017-10-29 20:00:38 -0500 How can I use Sage library in PyCharm IDE like on CoCalc? Is there anything more needed? (I do not know anything about CoCalc.) I did it, when I run with ./sage -sh it's okay then I wrote pycharm & or pycharm.app &, I got Applications$ bash: PyCharm.app: command not found, I'm sure they are in same path Asked: 2017-10-24 04:15:47 -0500 Seen: 435 times Last updated: Oct 29 '17 using sage library in C or C++? Using multiple lines of pari/gp code in a Sage notebook Adapt the nauty_directg function which parent class for C-finite sequences? import CSV in python using jupyter notebook on cocalc cloud How to import 3rd party modules into SAGE? MixedIntegerLinearProgram strange behavior Using Psycopg2 and other libraries in sage programming of looping to print selected value of m On Linear Programming with a double sum
https://ask.sagemath.org/question/39262/how-to-use-sage-in-pycharm-on-cocalc/?sort=oldest
CC-MAIN-2020-24
en
refinedweb
Hi, We have recently upgraded to IDEA7 from IDEA5.1, and I am in the process of getting the HQL Console working in it. We have our hibernate configuration in our spring applicationContext.xml, all hibernate is annotated, so there's no *.hbm.xml files in the project. We wrote our own AnnotationSessionFactoryBean so we didnt have to keep adding the hibernate entities to the spring config file every time we created a new one, so our config looks like this... com.xy.z.database.domain org.hibernate.dialect.OracleDialect false where as traditionally it would have to look like this... com.xy.z.database.domain.MyFirstClass com.xy.z.database.domain.MySecondClass ... com.xy.z.database.domain.MyNthClass org.hibernate.dialect.OracleDialect false ]]> Now it appears if I use the first config (our preferred way) none of the classes show under the sessionFactory in the Java EE Structure view. If I add the annotatedPackage (which is for package level annotations), then the classes appear but in the HQL Console it says "Query yeilds no results" when I do "from com.xy.z.database.domain.MySecondClass". So I've been trying with the second config and that appears to work, but I'd prefer not to have to define every single entity in my applicationContext.xml file, as people on the team will forget and it makes the config files look massive (ok i could split it out) Is this the only way to achieve this?? My second problem with the HQL Console is that all our Hibernate Entities are annotated at field level, not method level... @Entity public class MyFirstClass { @Id @Column(name="ID") private Long id; public void setId(Long id) { this.id = id; } public Long getId() { return id; } } and when I run it I get the following.... "from com.xy.z.database.domain.MyFirstClass" java.lang.NoSuchFieldException: id at java.lang.Class.getField(Class.java:1520) 1. com.xy.z.database.domain.MyFirstClass@1d17f2df I've found if change the property to public rather than private (which I obviously don't want to do!) it works, or if I move the annotations to the method level rather than field level it also works. This is a lot of rework as the coding standards at our company say we have to annotate at field level (I also think it makes the file easier to read). This is a bug in my opinion, even if the database field is annotated at field level it should probably still use the getter or use reflection's setAccessible(true) on the field to read it. Again is there a way around this without spending a few days changing our entities and our coding standards? Cheers Andy. Edited by: kaylanx on Sep 24, 2008 1:14 PM Hi,
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206297789-Hibernate-HQL-Console-field-level-annotations-and-annotatedClasses
CC-MAIN-2020-24
en
refinedweb
import "k8s.io/kubernetes/pkg/util/netsh" Package netsh provides an interface and implementations for running Windows netsh commands. type Interface interface { // EnsurePortProxyRule checks if the specified redirect exists, if not creates it EnsurePortProxyRule(args []string) (bool, error) // DeletePortProxyRule deletes the specified portproxy rule. If the rule did not exist, return error. DeletePortProxyRule(args []string) error // EnsureIPAddress checks if the specified IP Address is added to vEthernet (HNSTransparent) interface, if not, add it. If the address existed, return true. EnsureIPAddress(args []string, ip net.IP) (bool, error) // DeleteIPAddress checks if the specified IP address is present and, if so, deletes it. DeleteIPAddress(args []string) error // Restore runs `netsh exec` to restore portproxy or addresses using a file. // TODO Check if this is required, most likely not Restore(args []string) error // GetInterfaceToAddIP returns the interface name where Service IP needs to be added // IP Address needs to be added for netsh portproxy to redirect traffic // Reads Environment variable INTERFACE_TO_ADD_SERVICE_IP, if it is not defined then "vEthernet (HNSTransparent)" is returned GetInterfaceToAddIP() string } Interface is an injectable interface for running netsh commands. Implementations must be goroutine-safe. New returns a new Interface which will exec netsh. Package netsh imports 8 packages (graph) and is imported by 7 packages. Updated 2019-07-26. Refresh now. Tools for package owners.
https://godoc.org/k8s.io/kubernetes/pkg/util/netsh
CC-MAIN-2019-39
en
refinedweb
12690/how-to-solve-bind-cannot-assign-requested-address-error I am running a Hyperledger Fabric My start-order.sh file to start the order looks like this: ORDERER_GENERAL_LOGLEVEL=info \ ORDERER_GENERAL_LISTENADDRESS=orderer0 \ ORDERER_GENERAL_GENESISMETHOD=file \ ORDERER_GENERAL_GENESISFILE=/root/bcnetwork/conf/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/genesis.block \ ORDERER_GENERAL_LOCALMSPID=OrdererOrg0MSP \ ORDERER_GENERAL_LOCALMSPDIR=/root/bcnetwork/conf/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/msp \ ORDERER_GENERAL_TLS_ENABLED=false \ ORDERER_GENERAL_TLS_PRIVATEKEY=/root/bcnetwork/conf/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/tls/server.key \ ORDERER_GENERAL_TLS_CERTIFICATE=/root/bcnetwork/conf/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/tls/server.crt \ ORDERER_GENERAL_TLS_ROOTCAS=[/root/bcnetwork/conf/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/tls/ca.crt,/root/bcnetwork/conf/crypto-config/peerOrganizations/org0/peers/peer0.org0/tls/ca.crt,/root/bcnetwork/conf/crypto-config/peerOrganizations/org1/peers/peer2.org1/tls/ca.crt] \ CONFIGTX_ORDERER_BATCHTIMEOUT=1s \ CONFIGTX_ORDERER_ORDERERTYPE=kafka \ CONFIGTX_ORDERER_KAFKA_BROKERS=[kafka-zookeeper:9092] \ orderer When I run this, I get the following error: 2018-02-19 12:53:31.597 UTC [orderer/main] main -> INFO 001 Starting orderer: Version: 1.0.2 Go version: go1.9 OS/Arch: linux/amd64 2018-02-19 12:53:31.602 UTC [orderer/main] initializeGrpcServer -> CRIT 002 Failed to listen: listen tcp XX.XXX.XXX.XX:7050: bind: cannot assign requested address How to solve this? You must mention in your host.conf file for the orderer to resolve to any address. Add the following to your host.conf and it should work. -e ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 -e ORDERER_GENERAL_LISTENPORT=7050 To know more on how to setup Hyperledger Fabric on multiple hosts, visit: Try restarting docker. Maybe you are getting this error because the address is already being used by a background process. $ docker stop $ docker start You might have kept docker running and up before initializing the fabric. Bring the containers down and start again. To bring the containers down, use the following command: $ docker rm $(docker ps -qa) It seems like there is no required ...READ MORE I migrated the project to a different ...READ MORE Try to install chaincode with a name ...READ MORE It seems like this error is caused ...READ MORE Summary: Both should provide similar reliability of ...READ MORE This will solve your problem import org.apache.commons.codec.binary.Hex; Transaction txn ...READ MORE To read and add data you can ...READ MORE OCI is open container initiative.. The error ...READ MORE It seems like the problem is caused ...READ MORE OR
https://www.edureka.co/community/12690/how-to-solve-bind-cannot-assign-requested-address-error
CC-MAIN-2019-39
en
refinedweb
angel_rethink 1.1.0 rethink #. 1.1.0 # - Moved to package:rethinkdb_driver - Fixed references to old hooked event names. Use this package as a library 1. Depend on it Add this to your package's pubspec.yaml file: dependencies: angel_rethink: ^1.1.0 2. Install it You can install packages from the command line: with pub: $ pub get Alternatively, your editor might support pub get. Check the docs for your editor to learn more. 3. Import it Now in your Dart code, you can use: import 'package:angel_rethink/angel_rethink.dart'; The package version is not analyzed, because it does not support Dart 2. Until this is resolved, the package will receive a health and maintenance score of 0. Analysis issues and suggestions Fix dependencies in pubspec.yaml. Running pub upgrade failed with the following output: ERR: The current Dart SDK version is 2.5.0. Because angel_rethink depends on rethinkdb_driver >=0.3.0 which requires SDK version <2.0.0, version solving failed. Health suggestions Format lib/angel_rethink.dart. Run dartfmt to format lib/angel_rethink.dart. Maintenance issues and suggestions Fix platform conflicts. (-20 points) Error(s) prevent platform classification: Fix dependencies in pubspec.yaml. Make sure dartdoc successfully runs on your package's source files. (-10 points) Dependencies were not resolved. Package is getting outdated. (-27.67 points) The package was last published 66 weeks ago. Maintain an example. (-10 points) Create a short demo in the example/ directory to show how to use this package. Common filename patterns include main.dart, example.dart, and angel_rethink.
https://pub.dev/packages/angel_rethink
CC-MAIN-2019-39
en
refinedweb
![if !(IE 9)]> <![endif]> The. Let me remind you what ReactOS is. This is a free and open-source operating system based on the Windows NT architecture principles. The system was developed from scratch and therefore is not based on Linux and doesn't have anything in common with the UNIX architecture. The main aim of the ReactOS project is to create a Windows binary-compatible operating system that would allow users to execute Windows-compatible applications and drivers as if they were executed in Windows itself. We analyzed this project once some time ago. The results of that check were described in the post "PVS-Studio: analyzing ReactOS's code". After re-checking the project, we have found a lot of new bugs and suspicious code fragments. This fact proves very well that static code analysis should be performed regularly, not occasionally! Doing it that way will help you to significantly reduce the number of errors at the stage of coding already, which means that errors detected will take much less time to eliminate. Note that the article describes far not all the fragments worth considering. ReactOS has become a big boy now: the solution includes 803 projects. For those, the PVS-Studio analyzer has generated a good deal of general warnings: It was natural that I didn't find enough courage just to sit down and study all these warnings in detail. So, I'll only point at the most suspicious fragments that caught my glance. There certainly must be other warnings that should be examined as attentively; and there are also diagnostics related to 64-bit errors and micro-optimizations which I didn't examine at all. The PVS-Studio demo version will be insufficient to examine all the 4887 warnings. However, we are friendly to open-source projects: if the ReactOS developers ask us, we'll give them our tool for free for a while. PVS-Studio is good at detecting various misprints. We may call it its "hobbyhorse". This is a very useful function, as misprints inevitably exist in any project. Let's see what ReactOS has to show us in this field. NTSTATUS NTAPI CreateCdRomDeviceObject(....) { .... cddata->XAFlags &= ~XA_USE_6_BYTE; cddata->XAFlags = XA_USE_READ_CD | XA_USE_10_BYTE; .... } V519 The 'cddata->XAFlags' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 1290, 1291. cdrom.c 1291 The assignment operation overwrites the previous value of the XAFlags term. The following text should be most likely written instead: cddata->XAFlags |= XA_USE_READ_CD | XA_USE_10_BYTE;. But of course I can't be quite sure, as I don't know the logic of this code. void util_blit_pixels_writemask(....) { .... if ((src_tex == dst_surface->texture && dst_surface->u.tex.level == src_level && dst_surface->u.tex.first_layer == srcZ0) || (src_tex->target != PIPE_TEXTURE_2D && src_tex->target != PIPE_TEXTURE_2D && src_tex->target != PIPE_TEXTURE_RECT)) .... } V501 There are identical sub-expressions 'src_tex->target != PIPE_TEXTURE_2D' to the left and to the right of the '&&' operator. u_blit.c 421 The check "src_tex->target != PIPE_TEXTURE_2D" is executed twice. It's another constant that the 'target' term must be compared to for the second time. Otherwise this comparison is unnecessary. Here's another error of this kind: static boolean is_legal_int_format_combo( const struct util_format_description *src, const struct util_format_description *dst ) { .... for (i = 0; i < nr; i++) { /* The signs must match. */ if (src->channel[i].type != src->channel[i].type) { return FALSE; } .... } V501 There are identical sub-expressions 'src->channel[i].type' to the left and to the right of the '!=' operator. translate_generic.c 776 The correct check seems to be as follows: src->channel[i].type != dst->channel[i].type. And one more similar error: static GpStatus draw_poly(....) { .... if((i + 2 >= count) || !(types[i + 1] & PathPointTypeBezier) || !(types[i + 1] & PathPointTypeBezier)) { ERR("Bad bezier points\n"); goto end; } .... } V501 There are identical sub-expressions '!(types[i + 1] & PathPointTypeBezier)' to the left and to the right of the '||' operator. graphics.c 1912 One more: static inline BOOL is_unc_path(const WCHAR *str) { return (str[0] == '\\' && str[0] == '\\'); } V501 There are identical sub-expressions to the left and to the right of the '&&' operator: str[0] == '\\' && str[0] == '\\' uri.c 273 By the way, this particular bug remains unfixed since the previous check. I didn't describe it in the previous article, although it is included into my error samples base. Don't remember why I didn't mention it - perhaps I was concerned about not making the article too large. The developers must have never run PVS-Studio on their project, and the bug has successfully survived inside the code for at least a couple of years. One more: VOID NTAPI UniAtaReadLunConfig(....) { if(!LunExt->IdentifyData.SectorsPerTrack || !LunExt->IdentifyData.NumberOfCylinders || !LunExt->IdentifyData.SectorsPerTrack) .... } V501 There are identical sub-expressions '!LunExt->IdentifyData.SectorsPerTrack' to the left and to the right of the '||' operator. id_init.cpp 1528 The error is quite obvious, I believe. Don't know how to fix it. Be patient - I have some other twin bugs to show you. And I can't help it... You see, these are very typical software bugs. ir_visitor_status ir_validate::visit_leave(ir_loop *ir) { if (ir->counter != NULL) { if ((ir->from == NULL) || (ir->from == NULL) || (ir->increment == NULL)) { .... } V501 There are identical sub-expressions to the left and to the right of the '||' operator: (ir->from == 0) || (ir->from == 0) ir_validate.cpp 123 One of the "ir->from == 0" comparisons must be replaced with "ir->to == NULL". The same error, caused through the copy-paste technology, can be found here: V501 There are identical sub-expressions to the left and to the right of the '||' operator: (ir->from != 0) || (ir->from != 0) ir_validate.cpp 139 We have finally got to another class of misprints - the unnecessary semicolon ';' that spoils everything. int BlockEnvToEnvironA(void) { .... for (envptr--; envptr >= _environ; envptr--); free(*envptr); .... } V529 Odd semicolon ';' after 'for' operator. environ.c 67 Note the ';' character after the 'for' operator. It results in the free() function being called only once, which leads to memory leaks. It also causes releasing a memory area that wasn't intended to be released. This is how the incorrect code works in its present state: free(envptr >= _environ ? _environ[-1] : envptr); The same semicolons can be found here: static HRESULT WINAPI JScriptSafety_SetInterfaceSafetyOptions( ...., DWORD dwEnabledOptions) { .... This->safeopt = dwEnabledOptions & dwEnabledOptions; return S_OK; } V501 There are identical sub-expressions to the left and to the right of the '&' operator: dwEnabledOptions & dwEnabledOptions jscript.c 905 One of the operands seems to have an incorrectly defined name in the expression. Here's a misprint that causes the size of a rectangle to be calculated incorrectly. GpStatus WINGDIPAPI GdipGetRegionBoundsI(....) { .... status = GdipGetRegionBounds(region, graphics, &rectf); if (status == Ok){ rect->X = gdip_round(rectf.X); rect->Y = gdip_round(rectf.X); rect->Width = gdip_round(rectf.Width); rect->Height = gdip_round(rectf.Height); } return status; } V656 Variables 'rect->X', 'rect->Y' are initialized through the call to the same function. It's probably an error or un-optimized code. Consider inspecting the 'gdip_round(rectf.X)' expression. Check lines: 718, 719. region.c 719 I'm almost sure that the following code must be written here: "rect->Y = gdip_round(rectf.Y);". If it's not so, there should be some comment on this. The following is a code fragment where a variable is assigned to itself: DWORD WINAPI DdGetDriverInfo(LPDDHAL_GETDRIVERINFODATA pData) { .... pUserColorControl->dwFlags = pUserColorControl->dwFlags; .... } V570 The 'pUserColorControl->dwFlags' variable is assigned to itself. gdientry.c 1029 The assignment is meaningless. The expression must be incomplete, or something is messed up. The same error here: V570 The 'Irp->IoStatus.Information' variable is assigned to itself. hidclass.c 461 If you have a C/C++ application, you have troubles with pointers. This is the price we have to pay for the language's efficiency. However, C++ and especially C++11 offer a number of ways to avoid handling wild pointers. But that is a subject to be discussed individually. Let's see what can be found in ReactOS regarding this kind of bugs. static void acpi_bus_notify (....) { struct acpi_device *device = NULL; .... switch (type) { .... case ACPI_NOTIFY_EJECT_REQUEST: DPRINT1("Received EJECT REQUEST " "notification for device [%s]\n", device->pnp.bus_id); break; .... } } V522 Dereferencing of the null pointer 'device' might take place. bus.c 762 If the "case ACPI_NOTIFY_EJECT_REQUEST:" branch is chosen in the 'switch' operator, the 'device' pointer will still be equal to zero at the moment. Dereferencing it in the "device->pnp.bus_id" expression will have unpleasant consequences. In the same bad way the 'device' variable is used in some other fragments: Here's another code fragment where a variable remains equal to zero by the moment it must be used: ir_texture *ir_reader::read_texture(s_expression *expr) { s_symbol *tag = NULL; .... } else if (MATCH(expr, other_pattern)) { op = ir_texture::get_opcode(tag->value()); if (op == -1) return NULL; } .... } V522 Dereferencing of the null pointer 'tag' might take place. ir_reader.cpp 904 At the moment of calling the value() function, the 'tag' variable will still equal zero. That's no good. There are some other similar null pointer dereferencing bugs found in ReactOS: BOOL GetEventCategory(....) { .... if (lpMsgBuf) { .... } else { wcscpy(CategoryName, (LPCWSTR)lpMsgBuf); } .... } V575 The null pointer is passed into 'wcscpy' function. Inspect the second argument. eventvwr.c 270 The wcscpy() function is called only if the 'lpMsgBuf' variable equals zero. This variable is passed as an argument into the 'wcscpy' function. It's hooliganism to pass zero into the 'wcscpy' function. Here, another hooligan is torturing a cat the strstr() function: VOID WinLdrSetupEms(IN PCHAR BootOptions) { PCHAR RedirectPort; .... if (RedirectPort) { .... } else { RedirectPort = strstr(RedirectPort, "usebiossettings"); .... } V575 The null pointer is passed into 'strstr' function. Inspect the first argument. headless.c 263 The _wcsicmp() function has suffered for company as well: DWORD ParseReasonCode(LPCWSTR code) { LPWSTR tmpPrefix = NULL; .... for (reasonptr = shutdownReason ; reasonptr->prefix ; reasonptr++) { if ((majorCode == reasonptr->major) && (minorCode == reasonptr->minor) && (_wcsicmp(tmpPrefix, reasonptr->prefix) != 0)) { return reasonptr->flag; } } .... } V575 The null pointer is passed into '_wcsicmp' function. Inspect the first argument. misc.c 150 By the time the _wcsicmp() function must be called, the pointer tmpPrefix is still a null pointer. There are very many code fragments where the pointer is first dereferenced and only then is checked for being a null pointer. It's not always an error. Perhaps the pointer simply cannot be a null pointer, and the check is just unnecessary. But such code usually appears due to inattention and is incorrect. It works only until the unhappy pointer suddenly becomes a null pointer through a coincidence. I will cite only one simple example here: static BOOL LookupSidInformation(....) { .... DomainName = &PolicyAccountDomainInfo->DomainName; SidNameUse = (PolicyAccountDomainInfo != NULL ? SidTypeGroup : SidTypeUser); .... } V595 The 'PolicyAccountDomainInfo' pointer was utilized before it was verified against nullptr. Check lines: 254, 257. sidcache.c 254 Look, the 'PolicyAccountDomainInfo' pointer is dereferenced first. And then it is suddenly checked for being a null pointer. Such a code is usually created as a result of swift refactoring. Variables are starting to be used when there are not checked yet. The reason why I'm describing only one error of this kind is that they all look much alike. And also because they are AWFULLY NUMEROUS. I'm not interested in examining and describing each individual case. Moreover, it's impossible to include them all into the article - it would then be a reference book instead. That's why I'll just cite the diagnostic messages for you: Macros are bad - of that I'm still dead sure. You should use regular functions wherever possible. Someone felt too lazy to make a full-fledged function stat64_to_stat() in ReactOS and contented himself/herself with creating a shit-macro. This is what it looks like: #define stat64_to_stat(buf64, buf) \ buf->st_dev = (buf64)->st_dev; \ buf->st_ino = (buf64)->st_ino; \ buf->st_mode = (buf64)->st_mode; \ buf->st_nlink = (buf64)->st_nlink; \ buf->st_uid = (buf64)->st_uid; \ buf->st_gid = (buf64)->st_gid; \ buf->st_rdev = (buf64)->st_rdev; \ buf->st_size = (_off_t)(buf64)->st_size; \ buf->st_atime = (time_t)(buf64)->st_atime; \ buf->st_mtime = (time_t)(buf64)->st_mtime; \ buf->st_ctime = (time_t)(buf64)->st_ctime; \ Let's see how this macro is used in the _tstat function: int CDECL _tstat(const _TCHAR* path, struct _stat * buf) { int ret; struct __stat64 buf64; ret = _tstat64(path, &buf64); if (!ret) stat64_to_stat(&buf64, buf); return ret; } Do you think the 'stat64_to_stat' macro is executed if the 'ret' variable equals zero? It is absolutely not. The macro is expanded into a set of separate lines. That's why only the "buf->st_dev = (buf64)->st_dev;" line refers to the 'if' operator, while all the other lines will be executed all the time! There are other fragments that employ this incorrect macro: Here's an issue when an always true condition might cause an infinite loop. #define DISKREADBUFFER_SIZE HEX(10000) typedef unsigned short USHORT, *PUSHORT; static VOID DetectBiosDisks(....) { USHORT i; .... Changed = FALSE; for (i = 0; ! Changed && i < DISKREADBUFFER_SIZE; i++) { Changed = ((PUCHAR)DISKREADBUFFER)[i] != 0xcd; } .... } V547 Expression 'i < 0x10000' is always true. The value range of unsigned short type: [0, 65535]. xboxhw.c 358 The loop is meant to search through the DISKREADBUFFER array for a byte whose value doesn't equal '0xCD'. If such a byte doesn't exist, the 'Changed' variable always has the FALSE value. In this case, the "i < DISKREADBUFFER_SIZE" expression is the truncation condition. As this expression is always true, the program will start iterating an infinite loop. The error is this: the 'i' variable has the 'unsigned short' type. It can take values within the range from 0 to 65535. These values are always below '0x10000'. A typical error I often see in many projects is the assumption that SOCKET is a signed variable. It's not so. To be more exact, it depends on the library implementation. typedef UINT_PTR SOCKET; #define ADNS_SOCKET SOCKET struct adns__state { .... ADNS_SOCKET udpsocket, tcpsocket; .... }; static int init_finish(adns_state ads) { .... if (ads->udpsocket<0) { r= errno; goto x_free; } .... } V547 Expression 'ads->udpsocket < 0' is always false. Unsigned type value is never < 0. setup.c 539 The 'udpsocket' variable is unsigned, which means that the 'ads->udpsocket < 0' condition is always false. To figure out where the error has occurred we need to use the SOCKET_ERROR constant. Similar socket handling errors can be found here: Incorrect checks may lead to buffer overflows and, consequently, to undefined behavior. Here's a sample where the exception handler fails. BOOL PrepareService(LPCTSTR ServiceName) { DWORD LeftOfBuffer = sizeof(ServiceKeyBuffer) / sizeof(ServiceKeyBuffer[0]); .... LeftOfBuffer -= _tcslen(SERVICE_KEY); .... LeftOfBuffer -= _tcslen(ServiceName); .... LeftOfBuffer -= _tcslen(PARAMETERS_KEY); .... if (LeftOfBuffer < 0) { DPRINT1("Buffer overflow for service name: '%s'\n", ServiceName); return FALSE; } .... } V547 Expression 'LeftOfBuffer < 0' is always false. Unsigned type value is never < 0. svchost.c 51 The 'LeftOfBuffer' variable should most likely be a signed one. It often happens that unsigned variables cause function return values to be checked incorrectly. Here's such a code: static INT FASTCALL MenuButtonUp(MTRACKER *Mt, HMENU PtMenu, UINT Flags) { UINT Id; .... Id = NtUserMenuItemFromPoint(....); .... if (0 <= Id && MenuGetRosMenuItemInfo(MenuInfo.Self, Id, &ItemInfo) && MenuInfo.FocusedItem == Id) .... } V547 Expression '0 <= Id' is always true. Unsigned type value is always >= 0. menu.c 2663 The NtUserMenuItemFromPoint() function can return the negative value (-1). The error occurs because of the 'Id' variable being unsigned. That results in the '0 <= Id' check being meaningless. A function parameter is checked incorrectly in the following code fragment. typedef unsigned int GLuint; const GLubyte *_mesa_get_enabled_extension( struct gl_context *ctx, GLuint index) { const GLboolean *base; size_t n; const struct extension *i; if (index < 0) return NULL; .... } V547 Expression 'index < 0' is always false. Unsigned type value is never < 0. extensions.c 936 It's not interesting to discuss V547 warnings any further, so let me just cite the remaining fragments I've noticed: You must not shift negative numbers - even if the code that has these shifts seems to work successfully for a long time. It is incorrect. It leads to undefined or unspecified behavior. The issue may reveal itself when you start using another platform or another compiler or change optimization switches. I discussed negative number shifts in detail in the article "Wade not in unknown waters. Part three". This is an incorrect code sample: static INLINE int wrap(short f, int shift) { .... if (f < (-16 << shift)) .... } V610 Undefined behavior. Check the shift operator '<<. The left operand '-16' is negative. vl_mpeg12_bitstream.c 653 No one knows what the (-16 << shift) expression is equal to. Other similar fragile code samples can be found in the following fragments: Let's have a look at several samples demonstrating incorrect ways of using variadic functions to print variable values. UINT64 Size; static HRESULT STDMETHODCALLTYPE CBindStatusCallback_OnProgress(....) { .... _tprintf(_T("Length: %ull\n"), This->Size); .... } V576 Incorrect format. Consider checking the second actual argument of the 'wprintf' function. The argument is expected to be not greater than 32-bit. dwnl.c 228 You should write "%llu" instead of "%ull" to print a 64-bit variable. Using "%u" is one more incorrect way to print the pointer value. There exists the "%p" specifier for this purpose. However, the programmer must have made a misprint in the code below, and it is "%s" that should have been written there. BOOL CALLBACK EnumPickIconResourceProc( HMODULE hModule, LPCWSTR lpszType, LPWSTR lpszName, LONG_PTR lParam) { .... swprintf(szName, L"%u", lpszName); .... } V576 Incorrect format. Consider checking the third actual argument of the 'swprintf' function. To print the value of pointer the '%p' should be used. dialogs.cpp 66 The errors when Unicode and non-Unicode strings are used together are very frequent. For example, if you need to print a UNICODE character in the fprintf() function, you should use '%C', not '%c'. Here's an incorrect code sample with that error: int WINAPI WinMain(....) { LPWSTR *argvW = NULL; .... fprintf(stderr, "Unknown option \"%c\" in Repair mode\n", argvW[i][j]); .... } V576 Incorrect format. Consider checking the third actual argument of the 'fprintf' function. The char type argument is expected. msiexec.c 655 The same bugs can be found in the following fragments: I've found several errors related to operation priorities confusion. static HRESULT BindStatusCallback_create(....) { HRESULT hr; .... if ((hr = SafeArrayGetUBound(sa, 1, &size) != S_OK)) { SafeArrayUnaccessData(sa); return hr; } .... } V593 Consider reviewing the expression of the 'A = B != C' kind. The expression is calculated as following: 'A = (B != C)'. httprequest.c 692 According to operation priorities in C/C++, the "SafeArrayGetUBound(sa, 1, &size) != S_OK" comparison is executed in the first place, while it is only then that assignment is performed. However, the condition will work well. The incorrect thing is that the 'hr' variable will store 0 or 1 instead of the status. The function will therefore return an incorrect status. Here is another very similar error: static void symt_fill_sym_info(....) { .... if (sym->tag != SymTagPublicSymbol || !(dbghelp_options & SYMOPT_UNDNAME) || (sym_info->NameLen = UnDecorateSymbolName(name, sym_info->Name, sym_info->MaxNameLen, UNDNAME_NAME_ONLY) == 0)) .... } V593 Consider reviewing the expression of the 'A = B == C' kind. The expression is calculated as following: 'A = (B == C)'. symbol.c 801 The code is difficult to read. But if you look close, you'll notice that the UnDecorateSymbolName() function's return result is compared to zero first, then the comparison result is put into the 'sym_info->NameLen' variable. FF_T_WCHAR FileName[FF_MAX_FILENAME]; FF_T_UINT32 FF_FindEntryInDir(....) { .... FF_T_WCHAR *lastPtr = pDirent->FileName + sizeof(pDirent->FileName); .... lastPtr[-1] = '\0'; .... } V594 The pointer steps out of array's bounds. ff_dir.c 260 The programmer intended 'lastPtr' to point at a memory cell after that last character in the string. That won't happen though. The string consists of WCHAR characters. It means that it's the buffer size that is added, not the number of characters. And that value is twice larger than necessary. When writing the null character, the array index out of bounds error with all its implications will occur. This is what the fixed code looks like: FF_T_WCHAR *lastPtr = pDirent->FileName + sizeof(pDirent->FileName) / sizeof(pDirent->FileName[0]); The strncat() function is pretty dangerous regarding this class of bugs. The reason is that it's not the total buffer size that the last argument should specify, but how many more characters can be put into it. Because of misunderstanding this thing, programmers write unsafe code: void shell(int argc, const char *argv[]) { char CmdLine[MAX_PATH]; .... strcpy( CmdLine, ShellCmd ); if (argc > 1) { strncat(CmdLine, " /C", MAX_PATH); } for (i=1; i<argc; i++) { strncat(CmdLine, " ", MAX_PATH); strncat(CmdLine, argv[i], MAX_PATH); } .... } V645 The 'strncat' function call could lead to the 'CmdLine' buffer overflow. The bounds should not contain the size of the buffer, but a number of characters it can hold. cmds.c 1314 V645 The 'strncat' function call could lead to the 'CmdLine' buffer overflow. The bounds should not contain the size of the buffer, but a number of characters it can hold. cmds.c 1319 V645 The 'strncat' function call could lead to the 'CmdLine' buffer overflow. The bounds should not contain the size of the buffer, but a number of characters it can hold. cmds.c 1320 It cannot be guaranteed that no buffer overflow occurs. To learn more about this class of errors, see the documentation (V645 diagnostic). A similar trouble can be found here: V645 The 'wcsncat' function call could lead to the 'szFileName' buffer overflow. The bounds should not contain the size of the buffer, but a number of characters it can hold. logfile.c 50 Repetitions are related to conditions and can be of two types. Type one. The same operations are executed regardless of the condition. For example: void CardButton::DrawRect(HDC hdc, RECT *rect, bool fNormal) { .... if(fNormal) hOld = SelectObject(hdc, hhi); else hOld = SelectObject(hdc, hhi); .... } V523 The 'then' statement is equivalent to the 'else' statement. cardbutton.cpp 86 Another example: NTSTATUS NTAPI CPortPinWavePci::HandleKsStream(IN PIRP Irp) { .... if (m_Capture) m_Position.WriteOffset += Data; else m_Position.WriteOffset += Data; .... } V523 The 'then' statement is equivalent to the 'else' statement. pin_wavepci.cpp 562 One more repetition of a large code fragment can be found here: V523 The 'then' statement is equivalent to the 'else' statement. tab.c 1043 Type two. A condition is repeated. It appears that the second condition will never hold. For example: #define LOCALE_SSHORTDATE 31 #define LOCALE_SLONGDATE 32 MSVCRT__locale_t CDECL MSVCRT__create_locale(....) { .... if (time_data[i]== LOCALE_SSHORTDATE && !lcid[LC_TIME]) { size += ....; } else if(time_data[i]== LOCALE_SSHORTDATE && !lcid[LC_TIME]) { size += ....; } else { .... } V517 The use of 'if (A) {...} else if (A) {...}' pattern was detected. There is a probability of logical error presence. Check lines: 1193, 1195. locale.c 1193 I suppose that the second check should have been written in the following way: if (time_data[i]==LOCALE_SLONGDATE && !lcid[LC_TIME]) Other similar repeating checks can be found here: Now let's have a look at diverse bugs. typedef struct _UNICODE_STRING { USHORT Length; USHORT MaximumLength; PWSTR Buffer; } UNICODE_STRING, *PUNICODE_STRING; UNICODE_STRING DosDevices = RTL_CONSTANT_STRING(L"\\DosDevices\\"); NTSTATUS CreateNewDriveLetterName(....) { .... DriveLetter->Buffer[ sizeof(DosDevices.Buffer) / sizeof(WCHAR)] = (WCHAR)Letter; .... } V514 Dividing sizeof a pointer 'sizeof (DosDevices.Buffer)' by another value. There is a probability of logical error presence. mountmgr.c 164 It seems that the "sizeof(DosDevices.Buffer) / sizeof(WCHAR)" expression was intended to calculate the number of characters in a string. But 'DosDevices.Buffer' is just a pointer. As a result, the pointer size is divided by 'sizeof(WCHAR)'. Other similar errors can be found here: Here's another case of incorrect calculation of the number of characters in strings. In the following sample it's multiplication instead of division: VOID DisplayEvent(HWND hDlg) { WCHAR szEventType[MAX_PATH]; WCHAR szTime[MAX_PATH]; WCHAR szDate[MAX_PATH]; WCHAR szUser[MAX_PATH]; WCHAR szComputer[MAX_PATH]; .... ListView_GetItemText(...., sizeof(szEventType)*sizeof(WCHAR)); ListView_GetItemText(...., sizeof(szDate)*sizeof(WCHAR)); ListView_GetItemText(...., sizeof(szTime)*sizeof(WCHAR)); ListView_GetItemText(...., sizeof(szSource)*sizeof(WCHAR)); ListView_GetItemText(...., sizeof(szCategory)*sizeof(WCHAR)); ListView_GetItemText(...., sizeof(szEventID)*sizeof(WCHAR)); ListView_GetItemText(...., sizeof(szUser)*sizeof(WCHAR)); ListView_GetItemText(...., sizeof(szComputer)*sizeof(WCHAR)); .... } It results in the ListView_GetItemText() function assuming that the buffer size is larger than it actually is. It may potentially cause a buffer overflow. #define strcmpW(s1,s2) wcscmp((s1),(s2)) static HRESULT WINAPI IEnumDMO_fnNext(....) { .... if (Names[count]) strcmpW(Names[count], szValue); .... } V530 The return value of function 'wcscmp' is required to be utilized. dmoreg.c 621 HRESULT WINAPI INetCfgComponentControl_fnApplyRegistryChanges( INetCfgComponentControl * iface) { HKEY hKey; .... if (RegCreateKeyExW(hKey, L"SYSTEM\\CurrentControlSet....", ....) == ERROR_SUCCESS) .... } V614 Uninitialized pointer 'hKey' used. Consider checking the first actual argument of the 'RegCreateKeyExW' function. tcpipconf_notify.c 3138 While calling the RegCreateKeyExW() function, the 'hKey' variable is not initialized yet. HRESULT WINAPI CRecycleBin::CompareIDs(....) { .... return MAKE_HRESULT(SEVERITY_SUCCESS, 0, (unsigned short)memcmp(pidl1->mkid.abID, pidl2->mkid.abID, pidl1->mkid.cb)); } V642 Saving the 'memcmp' function result inside the 'unsigned short' type variable is inappropriate. The significant bits could be lost breaking the program's logic. recyclebin.cpp 542 This type of errors is very much unobvious. I suggest that you read the description of the V642 diagnostic to understand the point. To put it briefly, the trouble is that the memcmp() function doesn't necessarily return only values -1, 0, and 1. It may well return, for instance, number 0x100000. When casting this number to the "unsigned short" type, it will turn into 0. I've encountered several very strange loops. They don't have the 'continue' operator yet have the unconditional operator 'break'. It means that the loop bodies are executed only once. Here's an example of that kind. VOID NTAPI IKsPin_PinCentricWorker(IN PVOID Parameter) { .... do { DPRINT("IKsPin_PinCentricWorker calling " "Pin Process Routine\n"); Status = This->Pin.Descriptor->Dispatch->Process(&This->Pin); DPRINT("IKsPin_PinCentricWorker Status %lx, " "Offset %lu Length %lu\n", Status, This->LeadingEdgeStreamPointer.Offset, This->LeadingEdgeStreamPointer.Length); break; } while(This->IrpCount); } V612 An unconditional 'break' within a loop. pin.c 1839 Other similar strange loops: There are code fragments which are probably not bugs. They are simply very strange. For example: BOOLEAN NTAPI Ext2MakeNewDirectoryEntry(....) { .... MinLength = HeaderLength + NameLength; MinLength = (HeaderLength + NameLength + 3) & 0xfffffffc; .... } V519 The 'MinLength' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 948, 949. metadata.c 949 The 'MinLength' variable is assigned different values twice in a row. Perhaps it somehow helps in debugging - I don't know. I would consider this an error, but there are many fragments of that kind throughout the code. I won't mention them, as the post is already huge enough. I fail to make any wise conclusions. ReactOS is a rapidly growing and developing project. Hence it contains quite a lot of errors. As you can see from this article, static analysis can catch a good deal of them in a project like that. If one used it regularly, the benefit would be just invaluable. Follow us on Twitter to keep track of PVS-Studio's new interesting feats in its struggle against bugs. There we also post links to interesting articles on C/C++ programming and related subjects. ...
https://www.viva64.com/en/b/0192/
CC-MAIN-2019-39
en
refinedweb
How to Use the fgets() Function for Text Input in C Programming For a general-purpose text input function in the C programming language, one that reads beyond the first white space character, try the fgets() function. Here’s the format: #include <stdio.h> char * fgets(char *restrict s, int n, FILE *restrict stream); Frightening, no? That’s because fgets() is a file function, which reads text from a file, as in “file get string.” That’s how programmers talk after an all-nighter. Because the operating system considers standard input like a file, you can use fgets() to read text from the keyboard. Here’s a simplified version of the fgets() function as it applies to reading text input: #include <stdio.h> int main() { char name[10]; printf("Who are you? "); fgets(name,10,stdin); printf("Glad to meet you, %s.n",name); return(0); } Exercise 1: Type the source code from The fgets() Function Reads a String into a new project, ex0716. Compile and run. The fgets() function in Line 8 reads in text. The text goes into the name array, which is set to a maximum of ten characters in Line 5. The number 10 specifies that fgets() reads in only nine characters, one less than the number specified. Finally, stdin is specified as the “file” from which input is read. stdin is standard input. The char array must have one extra character reserved for the at the end of a string. Its size must equal the size of input you need — plus one. Here’s how the program runs: Who are you? Danny Gookin Glad to meet you, Danny Goo. Only the first nine characters of the text typed in the first line are displayed. Why only nine? Because of the string’s terminating character — the NULL, or . The room for this character is defined when the name array is created in Line 5. If fgets() were to read in ten characters instead of nine, the array would overflow, and the program could malfunction. Exercise 2: Change the array size in the source code from The fgets() Function Reads a String to a constant value. Set the constant to allow only three characters input. The fgets() function reads text from standard input, not from the keyboard directly. The value returned by fgets() is the string that was input.
https://www.dummies.com/programming/c/how-to-use-the-fgets-function-for-text-input-in-c-programming/
CC-MAIN-2019-39
en
refinedweb
The Oracle JVM turned out to the be clear winner in this speed test. See the full details here: ... marks.html Code: Select all wiringPiSetup() thanks for this post I missthanks for this post I missh. ... Hi Robert,Hi Robert Code: Select all,Trouch. I was just wondering where was the overhead to explain the difference between oracle jvm and native c code starting from the groundI was just wondering where was the overhead to explain the difference between oracle jvm and native c code starting from the groundsavage? Thanks for that link! I had not seen that article.Thanks for that link! I had not seen that article.trouch wrote:Thanks for your post and blog article, it's a great complement to ... pio-speed/ at this line:at this line:java.io.IOException: No matching device found at sun.nio.ch.FileChannelImpl.map0(Native Method) at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:835) Code: Select all memoryMappedFile.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 8); Code: Select all"); }; Code: Select all Code: Select all Code: Select all Just got a RPi oscilloscope as well, looking forward to posting my benchmarks!Just got a RPi oscilloscope as well, looking forward to posting my benchmarks Code: Select all fdMem = open("/dev/gpiomem", O_RDWR | O_SYNC | O_CLOEXEC); gpioAddr = (uint32_t *)mmap(NULL, BLOCK_SIZE, PROT_READ|PROT_WRITE, MAP_SHARED, fdMem, 0); return (*env)->NewDirectByteBuffer(env, gpioAddr, BLOCK_SIZE); Code: Select all public static native ByteBuffer initialise(); public static void main(String[] args) { ByteBuffer gpioReg = initialise(); } Code: Select all int i; for (i=0; i<4; i++) { printf("gpioAddr[0]=0x%x\n", gpioAddr[0]); } Code: Select all for (int i=0; i<4; i++) { System.out.format("gpioReg[%d]=0x%x%n", i, gpioReg.getInt(i*SIZE_OF_INT)); }
https://www.raspberrypi.org/forums/viewtopic.php?f=81&t=29026&p=255521
CC-MAIN-2019-39
en
refinedweb
#include <sys/types.h> #include <sys/module.h> #include <sys/socket.h> #include <sys/socketvar.h>(), passing a pointer to a struct accept_filter, allocated with malloc(9). The fields of struct accept_filter are as follows: argument field to load and unload themselves. This function can be used in the moduledata_t struct for the DECLARE_MODULE(9) macro. The accept filter concept was pioneered by David Filo at Yahoo! and refined to be a loadable module system by Alfred Perlstein. Please direct any comments about this manual page service to Ben Bullock. Privacy policy.
https://nxmnpg.lemoda.net/9/accept_filter
CC-MAIN-2019-39
en
refinedweb
/* * Copyright .sunshine.data; import android.annotation.TargetApi; import android.content.ContentProvider; import android.content.ContentValues; import android.content.UriMatcher; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.net.Uri; import android.support.annotation.NonNull; import com.example.android.sunshine.utilities.SunshineDateUtils; /** * This class serves as the ContentProvider for all of Sunshine's data. This class allows us to * bulkInsert data, query data, and delete data. * <p> * Although ContentProvider implementation requires the implementation of additional methods to * perform single inserts, updates, and the ability to get the type of the data from a URI. * However, here, they are not implemented for the sake of brevity and simplicity. If you would * like, you may implement them on your own. However, we are not going to be teaching how to do * so in this course. */ public class WeatherProvider extends ContentProvider { /* * These constant will be used to match URIs with the data they are looking for. We will take * advantage of the UriMatcher class to make that matching MUCH easier than doing something * ourselves, such as using regular expressions. */ public static final int CODE_WEATHER = 100; public static final int CODE_WEATHER_WITH_DATE = 101; /* * The URI Matcher used by this content provider. The leading "s" in this variable name * signifies that this UriMatcher is a static member variable of WeatherProvider and is a * common convention in Android programming. */ private static final UriMatcher sUriMatcher = buildUriMatcher(); private WeatherDbHelper mOpenHelper; /** * Creates the UriMatcher that will match each URI to the CODE_WEATHER and * CODE_WEATHER_WITH_DATE constants defined above. * <p> * It's possible you might be thinking, "Why create a UriMatcher when you can use regular * expressions instead? After all, we really just need to match some patterns, and we can * use regular expressions to do that right?" Because you're not crazy, that's why. * <p> * UriMatcher does all the hard work for you. You just have to tell it which code to match * with which URI, and it does the rest automagically. Remember, the best programmers try * to never reinvent the wheel. If there is a solution for a problem that exists and has * been tested and proven, you should almost always use it unless there is a compelling * reason not to. * * @return A UriMatcher that correctly matches the constants for CODE_WEATHER and CODE_WEATHER_WITH_DATE */ public static UriMatcher buildUriMatcher() { /* * All paths added to the UriMatcher have a corresponding code to return when a match is * found. The code passed into the constructor of UriMatcher here represents the code to * return for the root URI. It's common to use NO_MATCH as the code for this case. */ final UriMatcher matcher = new UriMatcher(UriMatcher.NO_MATCH); final String authority = WeatherContract.CONTENT_AUTHORITY; /* * For each type of URI you want to add, create a corresponding code. Preferably, these are * constant fields in your class so that you can use them throughout the class and you no * they aren't going to change. In Sunshine, we use CODE_WEATHER or CODE_WEATHER_WITH_DATE. */ /* This URI is content://com.example.android.sunshine/weather/ */ matcher.addURI(authority, WeatherContract.PATH_WEATHER, CODE_WEATHER); /* * This URI would look something like content://com.example.android.sunshine/weather/1472214172 * The "/#" signifies to the UriMatcher that if PATH_WEATHER is followed by ANY number, * that it should return the CODE_WEATHER_WITH_DATE code */ matcher.addURI(authority, WeatherContract.PATH_WEATHER + "/#", CODE_WEATHER_WITH_DATE); return matcher; } /** * In onCreate, we initialize our content provider on startup. This method is called for all * registered content providers on the application main thread at application launch time. * It must not perform lengthy operations, or application startup will be delayed. * * Nontrivial initialization (such as opening, upgrading, and scanning * databases) should be deferred until the content provider is used (via {@link #query}, * {@link #bulkInsert(Uri, ContentValues[])}, etc). * * Deferred initialization keeps application startup fast, avoids unnecessary work if the * provider turns out not to be needed, and stops database errors (such as a full disk) from * halting application launch. * * @return true if the provider was successfully loaded, false otherwise */ @Override public boolean onCreate() { /* * As noted in the comment above, onCreate is run on the main thread, so performing any * lengthy operations will cause lag in your app. Since WeatherDbHelper's constructor is * very lightweight, we are safe to perform that initialization here. */ mOpenHelper = new WeatherDbHelper(getContext()); return true; } /** * Handles requests to insert a set of new rows. In Sunshine, we are only going to be * inserting multiple rows of data at a time from a weather forecast. There is no use case * for inserting a single row of data into our ContentProvider, and so we are only going to * implement bulkInsert. In a normal ContentProvider's implementation, you will probably want * to provide proper functionality for the insert method as well. * * @param uri The content:// URI of the insertion request. * @param values An array of sets of column_name/value pairs to add to the database. * This must not be {@code null}. * * @return The number of values that were inserted. */ @Override public int bulkInsert(@NonNull Uri uri, @NonNull ContentValues[] values) { final SQLiteDatabase db = mOpenHelper.getWritableDatabase(); switch (sUriMatcher.match(uri)) { case CODE_WEATHER: db.beginTransaction(); int rowsInserted = 0; try { for (ContentValues value : values) { long weatherDate = value.getAsLong(WeatherContract.WeatherEntry.COLUMN_DATE); if (!SunshineDateUtils.isDateNormalized(weatherDate)) { throw new IllegalArgumentException("Date must be normalized to insert"); } long _id = db.insert(WeatherContract.WeatherEntry.TABLE_NAME, null, value); if (_id != -1) { rowsInserted++; } } db.setTransactionSuccessful(); } finally { db.endTransaction(); } if (rowsInserted > 0) { getContext().getContentResolver().notifyChange(uri, null); } return rowsInserted; default: return super.bulkInsert(uri, values); } } /** * Handles query requests from clients. We will use this method in Sunshine to query for all * of our weather data as well as to query for the weather on a particular day. * * @param uri The URI to query * @param projection The list of columns to put into the cursor. If null, all columns are * included. * @param selection A selection criteria to apply when filtering rows. If null, then all * rows are included. * @param selectionArgs You may include ?s in selection, which will be replaced by * the values from selectionArgs, in order that they appear in the * selection. * @param sortOrder How the rows in the cursor should be sorted. * @return A Cursor containing the results of the query. In our implementation, */ @Override public Cursor query(@NonNull Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder) { Cursor cursor; /* * Here's the switch statement that, given a URI, will determine what kind of request is * being made and query the database accordingly. */ switch (sUriMatcher.match(uri)) { /* * When sUriMatcher's match method is called with a URI that looks something like this * * content://com.example.android.sunshine/weather/1472214172 * * sUriMatcher's match method will return the code that indicates to us that we need * to return the weather for a particular date. The date in this code is encoded in * milliseconds and is at the very end of the URI (1472214172) and can be accessed * programmatically using Uri's getLastPathSegment method. * * In this case, we want to return a cursor that contains one row of weather data for * a particular date. */ case CODE_WEATHER_WITH_DATE: { /* * In order to determine the date associated with this URI, we look at the last * path segment. In the comment above, the last path segment is 1472214172 and * represents the number of seconds since the epoch, or UTC time. */ String normalizedUtcDateString = uri.getLastPathSegment(); /* * The query method accepts a string array of arguments, as there may be more * than one "?" in the selection statement. Even though in our case, we only have * one "?", we have to create a string array that only contains one element * because this method signature accepts a string array. */ String[] selectionArguments = new String[]{normalizedUtcDateString}; cursor = mOpenHelper.getReadableDatabase().query( /* Table we are going to query */ WeatherContract.WeatherEntry.TABLE_NAME, /* * A projection designates the columns we want returned in our Cursor. * Passing null will return all columns of data within the Cursor. * However, if you don't need all the data from the table, it's best * practice to limit the columns returned in the Cursor with a projection. */ projection, /* * The URI that matches CODE_WEATHER_WITH_DATE contains a date at the end * of it. We extract that date and use it with these next two lines to * specify the row of weather we want returned in the cursor. We use a * question mark here and then designate selectionArguments as the next * argument for performance reasons. Whatever Strings are contained * within the selectionArguments array will be inserted into the * selection statement by SQLite under the hood. */ WeatherContract.WeatherEntry.COLUMN_DATE + " = ? ", selectionArguments, null, null, sortOrder); break; } /* * When sUriMatcher's match method is called with a URI that looks EXACTLY like this * * content://com.example.android.sunshine/weather/ * * sUriMatcher's match method will return the code that indicates to us that we need * to return all of the weather in our weather table. * * In this case, we want to return a cursor that contains every row of weather data * in our weather table. */ case CODE_WEATHER: { cursor = mOpenHelper.getReadableDatabase().query( WeatherContract.WeatherEntry.TABLE_NAME, projection, selection, selectionArgs, null, null, sortOrder); break; } default: throw new UnsupportedOperationException("Unknown uri: " + uri); } cursor.setNotificationUri(getContext().getContentResolver(), uri); return cursor; } /** * Deletes data at a given URI with optional arguments for more fine tuned deletions. * * @param uri The full URI to query * @param selection An optional restriction to apply to rows when deleting. * @param selectionArgs Used in conjunction with the selection statement * @return The number of rows deleted */ @Override public int delete(@NonNull Uri uri, String selection, String[] selectionArgs) { /* Users of the delete method will expect the number of rows deleted to be returned. */ int numRowsDeleted; /* * If we pass null as the selection to SQLiteDatabase#delete, our entire table will be * deleted. However, if we do pass null and delete all of the rows in the table, we won't * know how many rows were deleted. According to the documentation for SQLiteDatabase, * passing "1" for the selection will delete all rows and return the number of rows * deleted, which is what the caller of this method expects. */ if (null == selection) selection = "1"; switch (sUriMatcher.match(uri)) { case CODE_WEATHER: numRowsDeleted = mOpenHelper.getWritableDatabase().delete( WeatherContract.WeatherEntry.TABLE_NAME, selection, selectionArgs); break; default: throw new UnsupportedOperationException("Unknown uri: " + uri); } /* If we actually deleted any rows, notify that a change has occurred to this URI */ if (numRowsDeleted != 0) { getContext().getContentResolver().notifyChange(uri, null); } return numRowsDeleted; } /** * In Sunshine, we aren't going to do anything with this method. However, we are required to * override it as WeatherProvider extends ContentProvider and getType is an abstract method in * ContentProvider. Normally, this method handles requests for the MIME type of the data at the * given URI. For example, if your app provided images at a particular URI, then you would * return an image URI from this method. * * @param uri the URI to query. * @return nothing in Sunshine, but normally a MIME type string, or null if there is no type. */ @Override public String getType(@NonNull Uri uri) { throw new RuntimeException("We are not implementing getType in Sunshine."); } /** * In Sunshine, we aren't going to do anything with this method. However, we are required to * override it as WeatherProvider extends ContentProvider and insert is an abstract method in * ContentProvider. Rather than the single insert method, we are only going to implement * {@link WeatherProvider#bulkInsert}. * * @param uri The URI of the insertion request. This must not be null. * @param values A set of column_name/value pairs to add to the database. * This must not be null * @return nothing in Sunshine, but normally the URI for the newly inserted item. */ @Override public Uri insert(@NonNull Uri uri, ContentValues values) { throw new RuntimeException( "We are not implementing insert in Sunshine. Use bulkInsert instead"); } @Override public int update(@NonNull Uri uri, ContentValues values, String selection, String[] selectionArgs) { throw new RuntimeException("We are not implementing update in Sunshine"); } /** * You do not need to call this method. This is a method specifically to assist the testing * framework in running smoothly. You can read more at: * */ @Override @TargetApi(11) public void shutdown() { mOpenHelper.close(); super.shutdown(); } }
https://www.programcreek.com/java-api-examples/?code=gmontoya2483/GoUbiquitous/GoUbiquitous-master/app/src/main/java/com/example/android/sunshine/data/WeatherProvider.java
CC-MAIN-2019-39
en
refinedweb
We will create a new PushButton class to represent a pushbutton connected to our board that can use either a pull-up or a pull-down resistor. The following lines show the code for the new PushButton class that works with the mraa library. The code file for the sample is iot_python_chapter_05_01.py. import mraa import time from datetime import date class PushButton: def __init__(self, pin, pull_up=True): self.pin = pin self.pull_up = pull_up self.gpio = mraa.Gpio(pin) self.gpio.dir(mraa.DIR_IN) @property def is_pressed(self): push_button_status = self.gpio.read() if self.pull_up: # Pull-up resistor connected return push_button_status == 0 else: # Pull-down resistor connected return ... No credit card required
https://www.oreilly.com/library/view/internet-of-things/9781785881381/ch05s03.html
CC-MAIN-2019-39
en
refinedweb
trying to save an image with its name intact php trying to save an image with its name intact php - You can use basename() to determine the last part (= filename) of a path. $ image_name See the docs over at .php. Stop saving image as images/name.jpg of image - PHP - Hi i have the upload script but it keeps uploading the image name with i have tried to take off the image but then it doesnt upload the file int the images folder how because we have to keep the image ratio intact or it will be deformed define rename - Manual - Attempting to call rename() with such a destination filesystem will cause an " Operation .. to change evry file name with extention html into php in the directory dir . And as noted, your 'old' directory will remain on the server totally intact, which can be very confusing. .. I was using rename() 'batch-move' a bunch of images. php - Saving an image in MYSQL vs saving to a folder - I would keep them out of the database, store them in the OS's file system and just store a relative path in a database table. This will allow you to How to Export Basic HTML, Part 2: Images - InDesign CS2 or CS3, basic formatting intact, using a little XML Tags trickery. If in doubt, try both methods and show them to your web team; see Instead, all we've got is this, the Image section of the Export XML dialog box: . This is the folder name that InDesign CS2 automatically creates on export Why Are Pictures not Showing in Email? - Plain Text email is, as the name implies, plain text and nothing more. . If not, and if the email you're looking at is trying to fetch images remotely, that could easily be the cause.- eBay saved search notices WITH PHOTOS intact…. so the problem The Essential Guide to Dreamweaver CS4 with CSS, Ajax, and PHP - Image formats! They also saved an extra 13.8% using MozJPEG .. -type f - name '*.jpg' -exec jpeg-recompress {} {} \;. and trim those On Mac, try the Quick Look plugin for WebP (qlImageSize). To override this behavior and deliver a transformed image with its metadata intact, add the keep_iptc flag. Automating image optimization | Web Fundamentals - The alpha channel data is left intact just deactivated. .. As an example, to add contrast to an image with offsets, try this command: .. The value can be the name of a PNG chunk-type such as bKGD , a comma-separated list . It will preserve the opacity mask of a layer and add it back to the layer when the image is saved. replace name in php Replace all occurrences of the search string with the - If search and replace are arrays, then str_replace() takes a value from each array .. We want to replace the name "ERICA" with "JON" and the word "AMERICA" PHP - The str_replace() is a built-in function in PHP and is used to replace all the the search string or array of search strings by replacement string or array of replacement How to get the function name from within that function using JavaScript ? PHP - The rename() function in PHP is an inbuilt function which is used to rename a file or directory. It makes an attempt to change an old name of a file or directory PHP script that will find a replace %{variable_name}% - So that [NAME] in for instance a mass email is replaced by the contents of $ Reference:. PHP substr_replace() Function - substr_replace(string,replacement,start,length) A positive number - Start replacing at the specified position in the string; Negative number - Start replacing at 8 examples of PHP str_replace function to replace strings - The str_replace function of PHP is used to replace a string with the replacement string. .. <label>To be Replaced with: </label><select name="replacestr">. MySQL REPLACE() function - MySQL REPLACE() replaces all the occurrances of a substring within a <meta name="description" content="example-replace-function - php Replacing a built-in PHP function when testing a component – Rob - Replacing a built-in PHP function when testing a component. Recently I first before finding one with the same name in the global namespace. Database Search and Replace Script in PHP - Search Replace DB is a powerful tool for developers, allowing them to run a search received by email, and install it to a secret folder with an obfuscated name. PHP: Replace an array key. - This is a tutorial on how to replace an array key in PHP. Now, let's say that you want to replace the key user_name with a new key called name. Firstly, you will how to move images in php move_uploaded_file - Manual - 2. convert image to standard formate (in this case jpg) and scale .. move_uploaded_file() [function.move-uploaded-file]: Unable to move '/tmp/ somefilename' to PHP - Move a file into a different folder on the server - If you want to move the file in new path with keep original file name. use this: $ source_file = 'foo/image.jpg'; $destination_path = 'bar/'; php and moving images to new folders - You will need to be sure that your web server has write access to the uploads directory where you want them to reside. Then you can just do: Uploading Files with PHP - Here we'll cover the basic upload process using PHP limiting by filetype article move on to additional things like image resizing and cropping. Php: How To Display Only Images From A Directory Using Php [ with - Well organized and easy to understand Web building tutorials with lots of examples of how to use HTML, CSS, JavaScript, SQL, PHP, Python, Bootstrap, Java 51: Upload Files and Images to Website in PHP - to use HTML, CSS, JavaScript, SQL, PHP, Python, Bootstrap, Java and XML. To center an image, set left and right margin to auto and make it into a block HTML Images - <title>Placing Text Over an Image in CSS</title> top: 40%; /* Adjust this value to move the positioned div up and down */; text-align: center;; width: 60%; /* Set How To Center an Image - How To Show Image From A Folder Using Php Source Code:. blogspot.com Randomly Display Some Images From A Set of Images: PHP - Upload Files and Images to Website in PHP | PHP Tutorial | Learn PHP Programming How to Position Text Over an Image Using CSS -- php/ Using how to move image file in php PHP - Move a file into a different folder on the server - If you want to move the file in new path with keep original file name. use this: $ source_file = 'foo/image.jpg'; $destination_path = 'bar/'; Moves an uploaded file to a new location - further validation/sanitation of the filename may be appropriate $name = basename($_FILES["pictures"]["name"][$key]); move_uploaded_file($tmp_name Move a file with PHP. - This is a tutorial on how to move files using PHP. To do this, we will be using PHP's rename function, which essentially renames a given file or directory. For this Move file to other directory - PHP - OK, I am trying to move an image file from one directory to another. I am using the code below $rename1 = "SpecialPictures/Advertisers/CampBC.png"; Move image to another folder and write path to database - when i upload a file to my folder called images a script is run called thumb save and creates a thumbnail of that original file but places it in the Php : How To Move Uploaded File To Another Directory [ with - if (rename($dir.'/'.$file,$dirNew.'/'.$file)). {. echo " Files Copyed Successfully";. echo ": $dirNew/$file";. //if files you are moving are images you can print it from. PHP move all files and folders from one directory to a new directory - PHP move_uploaded_file() Function. ❮ Complete PHP Filesystem Reference. Definition and Usage. The move_uploaded_file() function moves an uploaded file PHP move_uploaded_file() Function - You should check for file upload errors before trying to move the file. See PHP: Error Messages Explained - Manual[^]. [Solved] Image unable to move to temporary files using - I was looking for a quick code example on recursively moving files with PHP and I only saw code snippets using PHP 5. I like using the new and PHP 5: Recursively move or copy files - Php How To Move Uploaded File To Another Folder Source Code: .blogspot. com/2015/10 copy image from one folder to another in php How to copy an image from one folder to another using php - You should write your code same as below : <?php $imagePath = "/var/www/ projectName/Images/somepic.jpg"; $newPath = "/test/Uploads/"; Copy image file from one folder to another - PHP - What is a simple method to copy an image in one folder on a sever to another folder on the same server. also how to delete a image file on a how to copy a file from one folder to another with php? - how to copy a file from one folder to another with php? $file = '/usr/home/guest/example.txt'; $newfile = '/usr/home/guest/example.txt.bak'; if (! copy($file, $newfile)) { echo "failed to copy $file\n"; }else{ echo "copied $file into $newfile\n"; copy - Manual - If you try to copy a file to itself - e.g. if the target directory is just a symlink to the Copying large files under Windows 8.1, from one NTFS filesystem to another NTFS . if ( preg_match('@^image/(\w+)@ims', $arrHeader[2], $arrTokens) > 0 ) { php - To copy an image from one folder to another, you'll have to read from the image file that you would like to copy, then create another file PHP copy() Function - PHP copy() Function. ❮ Complete PHP Filesystem Reference. Definition and Usage. The copy() function copies a file. This function returns TRUE on success Copying images from one directory to another - I want to go into a number of directories, get all the .jpgs, and copy them to another set of corresponding directories This is my pathetic attempt How to Copy files from one folder to another folder using PHP - To copy files from source folder to destination php provide a copy function. This function takes two argument first is source path (source dir) and PHP - The copy() function in PHP is an inbuilt function which is used to make a copy of a specified file. $dest: It is used to specify the path to the destination file. how to copy a file from one folder to another with php? - You asked how to copy -- so either digitok's or ub3r's method would work. digitok's method uses an inbuilt function in PHP, while ub3r's method
http://www.brokencontrollers.com/article/10152202.shtml
CC-MAIN-2019-39
en
refinedweb
Asked by: Svcutil.exe issues: generates twice the same elements or generates codes that do not compile Question Hello, I ran into trouble with svcutil tool. I'm starting from a hand-written WSDL file and want to generate service code out of it. The WSDL file contains references to external schemas (that are given as parameter to svcutil.exe as we are generating from local files). The first problem occurs when I use the /ser:Auto switch (no /ser switch at all). Some of the classes generated (as partial) are defined multiple times with same fields. The generated code resulting contains errors. The second issue occurs when I use the /ser:XmlSerializer switch to force svcutil.exe to use the same serializer for all the elements found. Again, the generated code contains errors. I found another thread reporting similar error: The hand-written WSDL files was working fine until I added soap:fault elements in it. After some test, I found what was the pattern. It seems that when an element is used both in soap:header/soap:body AND in soap:fault in a message, svcutil.exe don't understand that these are the same XML elements. This, however, can be forced by using the /ser:XmlSerializer switch. However, as stated before, when forcing the XmlSerializer, the generated code contains errors at the annotation level of the operation that can launch the fault. The error in the code is that the typeof paramter of the FaultContractAttribute annotation refers to the XML file namespace, not the .NET replaced one (specified using the /n switch). On top of this, the FaultMessage don't exist in the generated code, so manually modifying the namespace in the generated code is not sufficent (event if that would not be good, as modification of generated code is not good practice I guess...). Some background on the project: The aim is to build interoperable code, hence the hand written WSDL file. Services will be implemented both with WCF and Axis2. This is why we would like to solve this not by rearranging the WSDL file, as it must be BP 1.1 compliant and as clean as possilble. PS: I originally posted this in another part of the forum: I don't know why but in this part of the forum, the code looks ugly and the editor complains of my original post beeing too long, so I skiped the code parts... All replies From the post in the other forum, it looks like your wsdl have some problems. // CODEGEN: Generating message contract since the operation GetQuote is neither RPC nor document wrapped. 1) Put your headers in a separate message part. 2) Rename the "body" message part to "parameters". 3) Remove references to the message part in the soap:body binding. (Each of your message should have only one part. Hence, the soap:body binding will use that one.) An First, thanks for helping! Doing this generates a code that is a bit different, but the fault annotation is still wrong. This is what I end up with:Code Snippet // CODEGEN: Generating message contract since the wrapper name (Name) of message GetQuoteRequest does not match the default value (GetQuote) [System.ServiceModel.OperationContractAttribute(Action="urn:GetQuote", ReplyAction="*")] [System.ServiceModel.FaultContractAttribute(typeof(), Action="urn:GetQuote", Name="FaultMessage")] [System.ServiceModel.XmlSerializerFormatAttribute()] WsdlToCode.Generated.GetQuoteResponse GetQuote(WsdlToCode.Generated.GetQuoteRequest request); The comment is different, but I can't manage to see what's wrong.
https://social.msdn.microsoft.com/Forums/vstudio/en-US/abfa7b04-8a7f-4785-bdde-3da26e9f2d30/svcutilexe-issues-generates-twice-the-same-elements-or-generates-codes-that-do-not-compile?forum=wcf
CC-MAIN-2019-39
en
refinedweb
Java concurrency (multi-threading). This article describes how to do concurrent programming with Java. It covers the concepts of parallel programming, immutability, threads, the executor framework (thread pools), futures, callables CompletableFuture and the fork-join framework. 1. Concurrency 1.1. What is concurrency? Concurrency is the ability to run several programs or several parts of a program in parallel. If a time consuming task can be performed asynchronously or in parallel, this improves the throughput and the interactivity of the program. A modern computer has several CPU’s or several cores within one CPU. The ability to leverage these multi-cores can be the key for a successful high-volume application. 1.2. Process vs. threads. A Java application runs by default in one process. Within a Java application you work with several threads to achieve parallel processing or asynchronous behavior. 2. Improvements and issues with concurrency 2.1. Limits of concurrency gains Within a Java application you work with several threads to achieve parallel processing or asynchronous behavior. Concurrency promises to perform certain task faster as these tasks can be divided into subtasks and these subtasks can be executed in parallel. Of course the runtime is limited by parts of the task which can be performed in parallel. The theoretical possible performance gain can be calculated by the following rule which is referred to as Amdahl’s Law. If F is the percentage of the program which can not run in parallel and N is the number of processes, then the maximum performance gain is 1 / (F+ ((1-F)/N)). 2.2. Concurrency issues Threads have their own call stack, but can also access shared data. Therefore you have two basic problems, visibility and access problems. A visibility problem occurs if thread A reads shared data which is later changed by thread B and thread A is unaware of this change. An access problem can occur if several threads access and change the same shared data at the same time. Visibility and access problem can lead to: Liveness failure: The program does not react anymore due to problems in the concurrent access of data, e.g. deadlocks. Safety failure: The program creates incorrect data.. You can use the synchronized keyword for the definition of a method. This would ensure that only one thread can enter this method at the same time. Another thread which is calling this method would wait until the first thread leaves this method. public synchronized void critial() { // some thread critical stuff // here }. For example the following data structure will ensure that only one thread can access the inner block of the add() and next() methods. package de.vogella.pagerank.crawler; import java.util.ArrayList; import java.util.List; /** * Data structure for a web crawler. Keeps track of the visited sites and keeps * a list of sites which needs still to be crawled. * * @author Lars Vogel * */ public class CrawledSites { private List<String> crawledSites = new ArrayList<String>(); private List<String> linkedSites = new ArrayList<String>(); public void add(String site) { synchronized (this) { if (!crawledSites.contains(site)) { linkedSites.add(site); } } } /** * Get next site to crawl. Can return null (if nothing to crawl) */ public String next() { if (linkedSites.size() == 0) { return null; } synchronized (this) { // Need to check again if size has changed if (linkedSites.size() > 0) { String s = linkedSites.get(0); linkedSites.remove(0); crawledSites.add(s); return s; } return null; } } } 3.3. Volatile If a variable is declared with the volatile keyword then it is guaranteed that any thread that reads the field will see the most recently written value. The volatile keyword will not perform any mutual exclusive lock on the variable. As of Java 5, write access to a volatile variable will also update non-volatile variables which were modified by the same thread. This can also be used to update values within a reference variable, e.g. for a volatile variable person. In this case you must use a temporary variable person and use the setter to initialize the variable and then assign the temporary variable to the final variable. This will then make the address changes of this variable and the values visible to other threads. 4. The Java memory model 4.1. Overview The Java memory model describes the communication between the memory of the threads and the main memory of the application. It defines the rules how changes in the memory done by threads are propagated to other threads. The Java memory model also defines the situations in which a thread re-freshes its own memory from the main memory. It also describes which operations are atomic and the ordering of the operations. are declared with the volatile keyword. Assume i is defined as int. The i++ (increment) operation it not an atomic operation in Java. This also applies for the other numeric types, e.g. long. The i++ operation first reads the value which is currently stored in i (atomic operations) and then it adds one to it (atomic operation). But between the read and the write the value of i might have changed. Since Java 1.5 the java language provides atomic variables, e.g. AtomicInteger or AtomicLong which provide methods like getAndDecrement(), getAndIncrement() and getAndSet() which are atomic. 4.3. Memory updates in synchronized code The Java memory model guarantees that each thread entering a synchronized block of code sees the effects of all previous modifications that were guarded by the same lock. 5. Immutability and defensive Copies 5.1. Immutability The simplest way to avoid problems with concurrency is to share only immutable data between threads. Immutable data is data which cannot be changed. To make a class immutable define the class and all its fields as final. Also ensure that no reference to fields escape during construction. Therefore any field must: be private have no setter method be copied in the constructor if it is a mutable object to avoid changes of this data from outside never be directly returned or otherwise exposed to a caller not change or if a change happens this change must not be visible outside An immutable class may have some mutable data which is used to manage its state but from the outside neither can change the data 5.2. Defensive Copies You must protect your classes from calling code. Assume that calling code will do its best to change your data in a way you didn’t expect it. While this is especially true in the case of immutable data, it is also true for non-immutable data which you don’t expect to be changed from outside your class. To protect your class against that, you should copy data which is the java.lang.Threads class. A Thread executes an object of type java.lang.Runnable. Runnable is an interface with defines the run() method. This method is called by the Thread object and contains the work which should be done. Therefore the Runnable is the task to perform. The Thread is the worker who is doing this task. The following demonstrates a task (Runnable) which counts the sum of a given range of numbers. Create a new Java project called de.vogella.concurrency.threads for the example code of this section. package de.vogella.concurrency.threads; /** *); } } The following example demonstrates the usage of the Thread the Thread class the direct usage of `Threads`. This package is described in the next section. 7. Threads pools with the Executor Framework Thread pools manage a pool of worker threads. The thread pools contain a work queue which holds tasks waiting to get executed. A thread pool can be described as a collection of Runnable objects (work queue) and a connection of running threads. These threads are constantly running and are checking the work query for new work. If there is new work to be done they execute this Runnable. The Thread class itself provides a method, e.g. execute(Runnable r) to add a new Runnable object to the work queue. The Executor framework provides example implementation of the java.util.concurrent.Executor interface, e.g. Executors.newFixedThreadPool(int n) which will create n worker threads. The ExecutorService adds life cycle methods to the Executor, which allows to shutdown the Executor and to wait for termination. Create again the Runnable. package de.vogella.concurrency.threadpools; /** *); } } Now you run your runnables with the executor framework. package de.vogella.concurrency.threadpools; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; public class Main { private static final int NTHREDS = 10; public static void main(String[] args) { ExecutorService executor = Executors.newFixedThreadPool(NTHREDS); for (int i = 0; i < 500; i++) { Runnable worker = new MyRunnable(10000000L + i); executor.execute(worker); } // This will make the executor accept no new threads // and finish all existing threads in the queue executor.shutdown(); // Wait until all threads are finish executor.awaitTermination(); System.out.println("Finished all threads"); } } In case the threads should return some value (result-bearing threads) then you can use the java.util.concurrent.Callable class. 8. CompletableFuture Any time consuming task should be preferable done asynchronously. Two basic approaches to asynchronous task handling are available to a Java application: application logic blocks until a task completes application logic is called once the task completes, this is called a nonblocking approach. CompletableFuture which extends the Future interface supports asynchronous calls. It thenApply can be used to define a callback which is executed once the CompletableFuture.supplyAsync finishes.; } } You can also start a CompletableFuture delayed as of Java 9. CompletableFuture<Integer> future = new CompletableFuture<>(); future.completeAsync(() -> { System.out.println("inside future: processing data..."); return 1; }, CompletableFuture.delayedExecutor(3, TimeUnit.SECONDS)) .thenAccept(result -> System.out.println("accept: " + result)); 9. Nonblocking algorithms Java 5.0 provides support for additional atomic operations. This allows to develop algorithms which are non-blocking algorithms, e.g. which do not require synchronization, but are based on low-level atomic hardware primitives such as compare-and-swap (CAS). A compare-and-swap operation checks if a variable has a certain value and if it has that value it will perform an operation. Non-blocking algorithms are typically faster than blocking algorithms, as the synchronization of threads appears on a much finer level (hardware). For example this creates a non-blocking counter which always increases. This example is contained in the project called de.vogella.concurrency.nonblocking.counter. package de.vogella.concurrency.nonblocking.counter; import java.util.concurrent.atomic.AtomicInteger; public class Counter { private AtomicInteger value = new AtomicInteger(); public int getValue(){ return value.get(); } public int increment(){ return value.incrementAndGet(); } // Alternative implementation as increment but just make the // implementation explicit public int incrementLongVersion(){ int oldValue = value.get(); while (!value.compareAndSet(oldValue, oldValue+1)){ oldValue = value.get(); } return oldValue+1; } } And a test. package de.vogella.concurrency.nonblocking.counter; import java.util.ArrayList; import java.util.HashSet; import java.util.List; import java.util.Set; import java.util.concurrent.Callable; import java.util.concurrent.ExecutionException; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.Future; public class Test { private static final int NTHREDS = 10; public static void main(String[] args) { final Counter counter = new Counter(); List> future : list) { try { set.add(future.get()); } catch (InterruptedException e) { e.printStackTrace(); } catch (ExecutionException e) { e.printStackTrace(); } } if (list.size()!=set.size()){ throw new RuntimeException("Double-entries!!!"); } } } The interesting part is how the incrementAndGet() method is implemented. It uses a CAS operation. public final int incrementAndGet() { for (;;) { int current = get(); int next = current + 1; if (compareAndSet(current, next)) return next; } } The JDK itself makes more and more use of non-blocking algorithms to increase performance for every developer. Developing correct non-blocking algorithms is not a trivial task. For more information on non-blocking algorithms, e.g. examples for a non-blocking Stack and non-blocking LinkedList, please see . 10. Fork-Join in Java 7 Java 7 introduced add jsr166y.jar to the classpath. Create first an algorithm package and then the following class. package algorithm; import java.util.Random; /** * * This class defines a long list of integers which defines the problem we will * later try to solve * */ public class Problem { private final int[] list = new int[2000000]; public Problem() { Random generator = new Random(19580427); for (int i = 0; i < list.length; i++) { list[i] = generator.nextInt(500000); } } public int[] getList() { return list; } } Define now the Solver class as shown in the following example coding. package algorithm; import java.util.Arrays; import jsr166y.forkjoin.RecursiveAction; public class Solver extends RecursiveAction { private int[] list; public long result; public Solver(int[] array) { this.list = array; } @Override protected void compute() { if (list.length == 1) { result = list[0]; } else { int midpoint = list.length / 2; int[] l1 = Arrays.copyOfRange(list, 0, midpoint); int[] l2 = Arrays.copyOfRange(list, midpoint, list.length); Solver s1 = new Solver(l1); Solver s2 = new Solver(l2); forkJoin(s1, s2); result = s1.result + s2.result; } } } Now define a small test class for testing it efficiently. package testing; import jsr166y.forkjoin.ForkJoinExecutor; import jsr166y.forkjoin.ForkJoinPool; import algorithm.Problem; import algorithm.Solver; public class Test { public static void main(String[] args) { Problem test = new Problem(); // check the number of available processors int nThreads = Runtime.getRuntime().availableProcessors(); System.out.println(nThreads); Solver mfj = new Solver(test.getList()); ForkJoinExecutor pool = new ForkJoinPool(nThreads); pool.invoke(mfj); long result = mfj.getResult(); System.out.println("Done. Result: " + result); long sum = 0; // check if the result was ok for (int i = 0; i < test.getList().length; i++) { sum += test.getList()[i]; } System.out.println("Done. Result: " + sum); } } 11. Deadlock A concurrent application has the risk of a deadlock. A set of processes are deadlocked if all processes are waiting for an event which another process in the same set has to cause. For example if thread A waits for a lock on object Z which thread B holds and thread B waits for a lock on object Y which is hold by process A, then these two processes are locked and cannot continue in their processing. This can be compared to a traffic jam, where cars(threads) require the access to a certain street(resource), which is currently blocked by another car(lock). 12. Links and Literature 12.1. Concurrency Resources
https://www.vogella.com/tutorials/JavaConcurrency/article.html
CC-MAIN-2021-21
en
refinedweb
Subject: Re: [boost] [UUID] PODness Revisited From: Vladimir Batov (batov_at_[hidden]) Date: 2008-12-25 23:08:21 Adam, Wow, that was one passionate reply. Was it something that I said? ;-) > While you may not see the "magic" in POD types, I can't fathom what > exactly > you have against them either. Well, I most certainly do not have anything against anything. It's nothing personal. If my emails came across as such, my humble apologies. > Do you find them more confusing or harder to > use? Do you find static initialization syntax aesthetically offensive? Is > it > your (no offense, but extremely misguided, IMO) lingering impression that > POD > types are a legacy of C that should be ignored whenever possible? A list > of > examples in favor of making UUID a POD type was presented, and you've > argued > against those examples without actually saying what you think the drawback > is. That's quite an emotionally charged list you compiled. It did not have to be such. I certainly do not find aggregates confusing or anything of that sort. In fact, I was very happy with them for about 10 years while coding in C before I switched to C++ in early 90ies. PODs've come from C and, therefore, they *are* a legacy of C and are not called Plain Old Data for nothing. In C++ though do find aggregates limiting. With regard to uuid it'd be no user-provided constructors, no guaranteed invariant, no private or protected non-static data members. And that is fundamental (my view of course) to C++ -- "it is important and fundamental to have constructors acquire resources and establish a simple invariant" (Stroustrup E.3.5). Then, "One of the most important aims of a design is to provide interfaces that can remain stable in the face of changes" (Stroustrup 23.4.3.5). PODs do restrict interfaces and are wide-open implementation-wise. That opens the door for mis-use, complicates long-term maintainability. So, unless PODs provide some killer feature in return (that cannot be achieved otherwise), I do not see the point of paying that price. >> That's what *I* see (caveat: I admit not knowing much about Boost.MPI and >> Boost.Interprocess requirements and expectations). > > Again, no offense intended, but I find it a bit discomfiting that the > person > arguing most vocally on this issue would make this admission. Just because > you > don't have personal knowledge of a use case where UUID being a POD type > would > be greatly beneficial doesn't mean such a use case doesn't exist. First, you are right about "most vocally". I too had that growing concern that there was somewhat too much of me lately on the list. Apologies. In my defence I might say I do not usually do that. My weak point is that once I get onto something, I tend to follow it through to completion (well, some might consider that to be a good thing). Point taken though, I'll try answering your email (hopefully to your satisfaction) and will turn it down. Secondly, I personally do not see anything wrong with the admission -- I use some libs extensively, some occasionally and do not use some at all. I suspect it is quite typical. Stating your knowledge IMO clears up a lot of possible and unnecessary confusion and many other emotions. Thirdly, I am not sure I said "such a use case doesn't exist", did I? If I did, I probably did not mean that. :-) What I am questioning though is the "greatly beneficial" part. I am glad to see that part is already obvious to you. I hope it's not a just hunch and you have hard data to back it up. >> 1. Boost.MPI efficiency does not seem to rely on PODness. Rather it seem >> to >> be due to serialization (or rather ability to bypass it). > > This isn't technically correct, I think; in MPI's case (though not > Interprocess'), the type must be serializable regardless, but the ideal > efficiency scenario comes from specializing both > boost::mpi::is_mpi_datatype > and boost::serialization::is_bitwise_serializable. Note that the > documentation > for these traits ([1] and [2], respectively) both specifically mention POD > types -- this is no coincidence. > > [1] > > [2] > Yes, my wording was somewhat crude. I presume you have a lot of practical experience with MPI and you can say with authority that PODness is a must for MPI's efficiency. Would you mind providing some experimental data that you observed? My knowledge of MPI is from reading docs (I probably should stop making these discomforting admissions). There I got that impression that serializable non-aggregate classes could be made efficient too. > ... I think you're missing the larger point. In > modern C++, types intentionally created as POD types are often (not > always) > done so to absolutely maximize the efficiency of copying that type. I do not understand "PODness to absolutely maximize the efficiency of copying" as I believe class NonAggregateFoo { ... int int_; }; is copied as efficiently as a raw 'int'. And NonAggregateFoo bunch[] can be memcopied as well PODFoo bunch[] (I am not advocating that but simply stating the fact). And I do not expect the respective template<class Archive> void serialize(Archive ar, unsigned int) { ar & int_; } to be that slow (with appropriately chosen Archive). Again, here you might well know more than I do. Tell me then. > The > existance of the is_pod type trait in boost.type_traits/TR1/C++0x > reinforces > this -- e.g. in many implementations, std::copy will use memcpy to perform > an > ideally efficient copy when is_fundamental<T>::value || is_pod<T>::value. > Additionally, a POD type's synthesized copy constructor is generally > merely a > memcpy. Understood. It does not make copying of non-aggregates inefficient though. Non-automatic 'yes', inefficient 'no'. > ... >> If it is a MPI implementation-specific restriction/limitation, I'd expect >> we'd look at addressing it in MPI rather than shaping other classes to >> match >> it. > > This is an unreasonable thought process, IMO. If a type has an good use > case > with another library (in this case, UUID with > Serialization/MPI/Interprocess), > it's up to the type to conform to the library in an ideal fashion, not the > other way around. E.g., lexical_cast and serialization don't go out of > their > way to work with every other type in Boost, but many types in boost have > serialization and lexical_cast support. Well, again my initial wording was somewhat crude. I still stand by its meaning though. A general-purpose library should be accommodating/considerate rather than imposing. And from what I read about MPI that's the approach taken there. As for lexical_cast, it is the same -- it imposes the requirement of op>>, op<<, the def. cnstr. However, instead of rejecting non-conformant classes, it leaves the door open and accommodates those via specialization and at least as efficiently. Boost.Serialization? Same. In fact, they *do* "go out of their way to work" with as many types as possible. I think I can talk about Boost.Serialization with a little bit of confidence (as I've been using it quite extensively). I know that the library tries so remarkably hard to keep everyone happy -- optimization? yes; no-default constructors? no problem; separate load/save logic? bring it on; intrusive/non-intrusive serialization? piece of cake... the list is long. >> 2. Scott, you correctly mention that most often we "don't want to send >> UUIDs >> by themselves". The thing is that chances of that bigger class being a >> POD >> are diminishing dramatically (if not already infinitely close to 0). > > This is extremely off base, and points back to your lack of knowledge > regarding MPI, I think. Uhm, what exactly is extremely off-base here? And what does MPI have to do with it? The bigger a class, the smaller the chance it can conform to the limitations of POD. I am currently "serving time" in the railway industry and dealing with Trains, TrackCircuits, Signals, Stations, (damn long list). All use uuids and are used in inter-process inter-machine communications. I cannot imagine those classes to be PODs. > When writing an app/library/algorithm intended for use > in a high-performance parallel context, one goes out of their way to use > POD > types extensively, for the sake of performance. Yes, the fact that MPI > works > with boost.serialization is nice, but when performance is critical, > memcpy'able types are key; First, I am under impression that non-aggregate non-virtual objects are as memcopyable (with usual caveats) as PODs are. Second, I feel boost.serialization still can be optimized for performance. See,. Plus binary archives (or your custom archives) can carry a very limited overhead. Still, I do not know much about MPI (Oops, I did it again! ;-)). > ... I think to > argue that a type such as UUID (which is a low-level, fundamental value > type, > and specifically *very* likely to be used in an inter-process context) > should > *not* automatically work in an ideal fashion in this scenario, one must > have > an *extremely* convincing argument, IMO. And so far, I haven't seen one > presented. ;-) As for inter-process context, then if it is on the same machine (in shared memory), then there is no that exclusive PODness quality that allows objects to be stored/accessed in shared memory -- non-aggregate non-virtual objects are as good for that as PODs. If that is over the network, then I suspect we have many more things to worry about efficiency- and data consistency/integrity-wise. Say, network latency, synchronization, node dropouts, (a long list). As for "an *extremely* convincing argument", then I somehow haven't seen one either so that I'd say "indeed, non-aggregates cannot do that, POD is the king". But I might not know something you do (gosh, it's turning into some "disturbing" confession now ;-)) but that's OK, right? >> 3. As for deployment of an object in shared memory, it does not have to >> be a >> POD either. > > Please take another look at the specific link Scott provided ([4]); > boost::interprocess::message_queue only copies raw bytes between > processes, so > for non-POD types generally that requires that an object be binary > serialized > before sending. However, for a POD type, binary serialization is a > completely > redundant process (read: a complete waste of CPU cycles); one can just > send > the bytes of the object directly, and as an added bonus, avoid becoming > dependant on the somewhat heavy serialization library altogether. Yes, I hear you. I just do not know how big deal that is. I can only argue this point with any conviction after I try optimized binary serialization vs. memcopy. If you tried, then I'd love to hear that. If you did not, then I am still unsure of *real* tangible benefits on PODness. > Again, the fact that this might be possible even if UUID were not a POD > type > is somewhat irrelevant, I disagree. It is relevant to me and surely many others working on higher abstraction levels. POD comes with conditions. I need to know if I want to pay that price. Therefore, I never buy into theoretical efficiency debates -- I write stuff, I profile the stuff, I fix the actual (not imagined) bottlenecks. > ... > I want to touch on a few other points as well, were UUID to be a POD type: > > 1. The default constructor behavior/existance debate would be put to rest. > ;-) Well, at the expence of initial invalid invariant state? I think, I'd rather agree to the nil-behavior of uuid. Again, "it is important and fundamental to have constructors acquire resources and establish a simple invariant" (Stroustrup E.3.5). > 2. The efficiency of lexical_cast would be better than *any* default > constructor behavior, regardless of which one was ultimately decided > upon. I think you are referring to the non-initialized instance in the default lexical_cast<uuid>(string). It might or might not be correct though -- writing to and reading from those streams might have real impact instead of initialization or no initialization. Not profiled that though. > ... > 4. Initializing a nil UUID would become more succinct. Contrast > 'uuid id(uuid::nil());' and 'uuid id = {};', or 'id(uuid::nil())' and > 'id()' in a constructor initialization list. Assuming any level of > familiarity with aggregates, the latter are much more concise, IMO. (And > C++0x will certainly introduce that familiarity if one doesn't have it > already.) Here comes Vladimir disagreeing again (and not because he is not familiar with or afraid of aggregates). It is because I feel that "uuid id = {0};" exposes too much implementation detail and assumes the user knows that the invalid uuid is all zeros. If, say, tomorrow the Standard changes the value of nil, all my code becomes invalid. It might not be the case with uuid. However, it is the principle/coding habit I am talking about. > 5. Static initialization has been greatly underrated so far in this > discussion. My first use case for a Boost UUID library would be to > replace > some homegrown COM/XPCOM encapsulation code. In dealing with COM/XPCOM, > it > is *extremely* common to have hardcoded UUIDs, and *many* of them. > Trivial > work though it may be, spending application/library startup time > initializing hundreds/thousands of UUIDs when they could be statically > initialized is senseless. I believe you'll be able to do that if we do class uuid { template<class Range> uuid(Range range); } Then you'll be able to feed your hard-coded initialization data to uuid. > 6. Regarding the potential for uninitialized state: I personally view UUID > as > a fundamental, borderline primitive type (others will almost certainly > disagree); uninitialized state is generally understood and accepted for > actual primitive types, so why should it be such a scary concept for > UUID? It's certainly not scary. It's just not in C++ spirit (see quotes at the top of the email) and everyone knows what primitive types are. I do not think people expect other types to behave that way. > 7. Lastly, to reiterate: this is C++. Every type, every library, every > algorithm should be written with performance and efficiency as primary > considerations. I do not think C++ was designed "with performance and efficiency as primary considerations". And I do not think applications "should be written with performance and efficiency as primary considerations". Don't get up in arms -- those considerations are important. I object to the "primary" part. I do not think I even need to debate this -- Knuth, Stroustrup and many others have done that. > ... There are demonstrable use cases where UUID can work more > efficiently as a POD type, Call me thick but I did not see those convincing use-cases showing PODs considerably more efficient than non-aggregates. Easier? Yes. *Seemingly* more efficient? Yes. How much more efficient? I dunno if that is palpably real. > but no convincing arguments have been presented > in favor of non-PODness. Oh, c'mon. How 'bout readng "The C++ Progr. Lang." and the "Evolution of C++" books? Discussions there do not revolve around aggregates. Best, V. Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2008/12/146482.php
CC-MAIN-2021-21
en
refinedweb
Buffer overflow When invoking the device.connect method of a MetaTracker instance, sometimes I get a *** buffer overflow detected ***: python terminated Reading some older posts, here is some extra information: - its an Ubuntu machine, 6gb ram - ram is mostly free - a lot of swap available, not being used by any program Here is btmon output for a simple run with this error. Its a straightforward call to MetaTracker(mac_address).connect: It happens intermitently with many MetaTracker devices. One interesting fact is that a guaranteed way to get the buffer overflow error is to run a program and try to connect to two devices. The first connect invocation to a MetaTracker instance works fine, the second one fails with the buffer overflow every time. Perhaps those are two different buffer overflows. Any suggestions? Regards If you do the same thing with our standard MetaBase app, does an error occur? Does your custom code work fine with a single device? @Laura , I did the following: Open the computer, run the provided scan_connect.py from examples: Which means device.connect works fine for the same device a few times if disconnection was succesful. That already helps isolating the problem. I noticed that I would get connection errors if disconnection was abrupt (i.e. due to an error if closes the program, the next connection typically fails). Next step, now I try the same process with two metatrackers on. Now, on to the scan_connect.py script from github: Now that I was able to connect to both I go for both terminals and try again, first one on FB The conclusion is that the script (from mbientlab itself) works most of the time with individual pieces, but seems to fail every time I try to connect to two devices (either from the same python process, from my original program, or in two different python processes running from the same directory, such as this example). I can not replicate connecting to two devices from the App since the app connects to one device at a time. Any suggestions on how to tackle the buffer overflow error? Is there any flag I can pass to the C library compilation or runtime that enables more debug information so we can better detect where the buffer overflow occurs and who ? If it helps, when trying to connect to two devices at the same time, the json file in the cache folder is created for the first one (connected successfuly) but the buffer overflow occurs before creating the cache folder json file for the second device. It definitely occurs during the device.connect()invocation in the script provided (debugged it to get there). More specifically, the error occurs on self.warble.connect_async(completed) There's no direct way to install a debug version of libwarble. You can checkout the warble C code, build the debug version, and replace the libwarble.so in the pywarble pyhton package. The debug .so will be in Warble/dist/debug/lib/{arch}/libwarble.so Could be a variety of things. Connections won't always be successful on the first attempt; your app needs to accommodate for that. Are you trying to connect to two devices simultaneously? Connection attempts should be done serially not in parallel. Thanks Eric, I will compile and run with the debug settings. Answering your question, the connections are done serialized. You can think of the following codes if it was just one script: I suspect it might be the usage of a poor USB dongle on Ubuntu. I will receive a new device from another Branch tomorrow and try again. I will even try with both devices specifying the HCI Mac address for each one to see if the problem is with the devices or with some shared memory codes inside the Bluetooth library itselfI will even try with both devices specifying the HCI Mac address for each one to see if the problem is with the devices or with some shared memory buffer inside the Bluetooth library itself. Those libraries are usually stable enough, so I believe it will be just a poor quality dongle issue. I'll get back to it this week Thanks @minousoso can you tell me what's is the firmware update? @Eric @Laura my suspicion was correct, first there is an issue with the. bluetooth dongle. Some bluetooth dongles do not support more than one paired connection at a time. So after connecting with one device, the second one always fail. I got a few different models and most of them are quite stable, if they connect up to 5 devices, they always connect up to 5 devices. The problem with the buffer overflow error is that many situations result in a buffer overflow and since its a C level error (not a python error), it kills the program. I was able to isolate a few buffer overflow situations: First if the bluetooth dongle is removed while trying to make a connection, the buffer overflow occurs and kills the program. Second, when the bluetooth pairing limit is reached, it fails with the buffer overflow message. But we can't know beforehand - by code - if it reached the limit or not, since this is not something the bluetooth hardware provides us via code. Third, some random connection issues also seem to end up with a buffer overflow, I have not isolated it yet. I don't believe its an issue on mbientlab's code, it is most probably an issue with the linux bluetooth support. Unfortunately it does make it unsafe to create a product supporting linux to connect to the devices if your product allows the end user to choose their own bluetooth dongles. The program might crash with no chance of recovery. When required I will run some experiments with two different dongles and mac addresses and see what happens. regards @guilhermesilveira, Amazing work! Please keep it up and do let us know the make and model of the dongles that performed better. We will be happy to let other users know and this is extremely useful for our community. Is there a way to fix the *** buffer overflow detected ***: python terminated? Have issues while connecting to multiple sensors metawear sensors? example of the code used: from future import print_function import sys from mbientlab.metawear import MetaWear, libmetawear from mbientlab.metawear.cbindings import * from time import sleep from threading import Event def reset(MAC): if name == 'main': reset("F4:XX:XX:XX:XX:23") sleep(1.0) reset("D5:XX:XX:XX:XX:34") sleep(1.0) reset("D8:XX:XX:XX:XX:B4") sleep(1.0) reset("DF:XX:XX:XX:XX:AA") error 1592506866.952153: Error on line: 296 (src/blestatemachine.cc): Operation now in progress *** buffer overflow detected ***: python terminated In my case I: If it hangs only with more than one, you might be having the same problem that I do: the chip your usb dongle uses might not support more than N connections at once. In that case you can do as I did, buy a better chip Mine currently supports 5 connections for my needs, but it does not handle 6. It is said that other apis are stabler than the python due to the underlying libraries, I did not test any other library. regards The Javascript APIs are better for apps where you have many sensors because the bluetooth libraries are more reliable. As @guilhermesilveira mentioned, the Python libraries we use from third party vendors are very rudimentary and don't support multiple sensors and dongles as well. Please do make sure that your code is handling the multiple connections correctly. It should also handle failures in case one of your sensors isn't reset properly, your code should automatically retry. Thanks @guilhermesilveira. What usb dongle are you using? Thanks for the update @AKR Hello, I got a solution to make more than 5 sensors(10 sensors) to work at the same time. You can try to use multiple thread programming to reach this goal. @Xiaorui Nice! I actually use one thread to connect to all 5 sensors, then 1 thread per sensor to listen to their events. Do you create one thread per sensor before connecting, and inside each thread connect to them? And do you wait for the first one to be connected before launching the second thread or do you fire them all at once and request for connections simultaneously? (I don't know how lost the bluetooth device might get with 10 simultaneous requests) Hey, guilhermesilveira. I use 10 threads for ten sensors. It works. Then I tested more sensors(20 sensors) with 5 dongles on RaspberryPi / PC (Ubuntu1804) last night. Eventually, 16 sensors were connected successfully(another 4 were failed). ** Do you create one thread per sensor before connecting, and inside each thread connect to them?** Yes, one thread for one sensor. You can see the details in my code snippet. **And do you wait for the first one to be connected before launching the second thread or do you fire them all at once and request for connections simultaneously? ** Yes, I use sleep function to make sure the sensors will connect the dongle one by one. Fire them all at once can't work. Here is my hardware(5 dongles) Here is my code snippet, feel free to try it (you need to set your own dongle addresses~~~~ and sensor addresses). Hey, I don't know why the code format is so strange.....if you want I can send you the .py file.... usage: python stream_acc.py [mac1] [mac2] ... [mac(n)] from future import print_function from mbientlab.metawear import MetaWear, libmetawear, parse_value from mbientlab.warble import WarbleException from mbientlab.metawear.cbindings import * from time import sleep from threading import Event import platform import threading import time import pickle if sys.version_info[0] == 2: range = xrange class State: def init(self, device): self.device = device self.samples = 0 self.callback = FnVoid_VoidP_DataP(self.data_handler) self.data = [] def data_handler(self, ctx, data): if self.samples < 100: self.data.append([self.device.address, parse_value(data)]) exitFlag = 0 class myThread (threading.Thread): def init(self, metawarelist, hci_mac): threading.Thread.init(self) self.metawarelist = metawarelist self.hci_mac = hci_mac self.states = [] def run(self): for i in range(len(self.metawarelist)): d = MetaWear(self.metawarelist[i], hci_mac=self.hci_mac) connected = False while not connected: try: d.connect() connected = True except WarbleException: connected = False print("\33[31mTrying again to connect to \33[0m" + d.address) sleep(5.0) sensor_0 = "DB:" sensor_1 = "FE:" sensor_2 = "E6" sensor_3 = "D5" sensor_4 = "D3:" sensor_5 = "F5:" sensor_6 = "DC:" sensor_7 = "F4:C" sensor_8 = "EB:" sensor_9 = "E" sensor_10 = "DC:" sensor_11 = "DD:7" sensor_12 = "D8:" sensor_13 = "CF" sensor_14 = "EC" sensor_15 = "DF:" sensor_16 = "CAF2" sensor_17 = "E6:4" sensor_18 = "C7A:AC" sensor_19 = "E1:D55" sensors = [sensor_0, sensor_1, sensor_2, sensor_3, sensor_4, sensor_5, sensor_6, sensor_7, sensor_8, sensor_9,\ sensor_10, sensor_11, sensor_12, sensor_13, sensor_14, sensor_15, sensor_16, sensor_17, sensor_18, sensor_19] dongle_0 = "008" dongle_1 = "00:15:2" dongle_2 = "00::BE" dongle_3 = "00:15:86" dongle_4 = "00:1588" dongles = [dongle_0, dongle_1, dongle_2, dongle_3, dongle_4] thread_0 = myThread([sensor_0], dongle_0) thread_1 = myThread([sensor_1], dongle_0) thread_2 = myThread([sensor_2], dongle_0) thread_3 = myThread([sensor_3], dongle_0) thread_4 = myThread([sensor_4], dongle_1) thread_5 = myThread([sensor_5], dongle_1) thread_6 = myThread([sensor_6], dongle_1) thread_7 = myThread([sensor_7], dongle_1) thread_8 = myThread([sensor_8], dongle_2) thread_9 = myThread([sensor_9], dongle_2) thread_10 = myThread([sensor_10], dongle_2) thread_11 = myThread([sensor_11], dongle_2) thread_12 = myThread([sensor_12], dongle_3) thread_13 = myThread([sensor_13], dongle_3) thread_14 = myThread([sensor_14], dongle_3) thread_15 = myThread([sensor_15], dongle_3) thread_16 = myThread([sensor_16], dongle_4) thread_17 = myThread([sensor_17], dongle_4) thread_18 = myThread([sensor_18], dongle_4) thread_19 = myThread([sensor_19], dongle_4) sleep_delay = 5.0 thread_0.start() sleep(sleep_delay) thread_1.start() sleep(sleep_delay) thread_2.start() sleep(sleep_delay) thread_3.start() sleep(sleep_delay) sleep_delay = 5.0 thread_4.start() sleep(sleep_delay) thread_5.start() sleep(sleep_delay) thread_6.start() sleep(sleep_delay) thread_7.start() sleep(sleep_delay) sleep_delay = 5.0 thread_8.start() sleep(sleep_delay) thread_9.start() sleep(sleep_delay) thread_10.start() sleep(sleep_delay) thread_11.start() sleep(sleep_delay) thread_12.start() sleep(sleep_delay) thread_13.start() sleep(sleep_delay) thread_14.start() sleep(sleep_delay) thread_15.start() sleep(sleep_delay) thread_16.start() sleep(sleep_delay) thread_17.start() sleep(sleep_delay) thread_18.start() sleep(sleep_delay) thread_19.start() sleep(sleep_delay) thread_0.join() thread_1.join() thread_2.join() thread_3.join() thread_4.join() thread_5.join() thread_6.join() thread_7.join() thread_8.join() thread_9.join() thread_10.join() thread_11.join() thread_12.join() thread_13.join() thread_14.join() thread_15.join() thread_16.join() thread_17.join() thread_18.join() thread_19.join() Can you fix the formating or send code as an attachment?
https://mbientlab.com/community/discussion/comment/10561
CC-MAIN-2021-21
en
refinedweb
ncl_nerro - Man Page Referenced by a user to obtain the current value of the internal error flag of SETER. Synopsis NERR=NERRO(NERRF) C-Binding Synopsis #include <ncarg/ncargC.h> c_nerro(int *nerrf) Description The FORTRAN expression "NERRO(NERRF)" has the value of the internal error flag of SETER. If its value is non-zero, this indicates that a prior recoverable error occurred and has not yet been cleared. The argument NERRF is given the same value as the function reference; this is useful in some situations. The argument of NERRO is as follows: - NERRF (an output variable of type INTEGER) - Receives the same value that is returned as the value of the function itself. C-Binding Description The C-binding argument descriptions are the same as the FORTRAN argument descriptions. Examples Use the ncargex command to see the following relevant examples: tseter, arex02. Access To use nerro or c_nerro, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order. See Also Online: entsr, eprin, errof, error_handling, fdum, icfell, icloem, retsr, semess, seter, ncarg_cbind University Corporation for Atmospheric Research The use of this Software is governed by a License Agreement.
https://www.mankier.com/3/ncl_nerro
CC-MAIN-2021-21
en
refinedweb
Created on 2018-12-03 16:02 by vstinner, last changed 2019-06-27 07:04 by vstinner. This issue is now closed. Currently, platform.libc_ver() opens Python binary file (ex: /usr/bin/python3) and looks for a string like "GLIBC-2.28". Maybe gnu_get_libc_version() should be exposed in Python to get the version of the running glibc version? And use it if available, or fall back on parsing the binary file (as done currenetly) otherwise. Example: $ cat x.c #include <gnu/libc-version.h> #include <stdlib.h> #include <stdio.h> int main(int argc, char *argv[]) { printf("GNU libc version: %s\n", gnu_get_libc_version()); printf("GNU libc release: %s\n", gnu_get_libc_release()); exit(EXIT_SUCCESS); } $ ./x GNU libc version: 2.28 GNU libc release: stable I'm not sure if it's possible that Python is compiled with glibc but run with a different libc implementation? -- Alternative: run a program to get the libc version which *might* be different than the libc version of Python if the libc is upgraded in the meanwhile (unlikely, but is technically possible on a server running for days): $ ldd --version ldd (GNU libc) 2.28 ... $ /lib64/libc.so.6 GNU C Library (GNU libc) stable release version 2.28. ... $ rpm -q glibc glibc-2.28-17.fc29.x86_64 ... etc. -- See also discussions on platform.libc_ver() performance: You can use confstr to get (running) glibc version: >>> os.confstr('CS_GNU_LIBC_VERSION') 'glibc 2.28' > >>> os.confstr('CS_GNU_LIBC_VERSION') > 'glibc 2.28' That's cool because it doesn't require to write new C code ;-) Currently libc_ver() can be used for other executable. See issue26544 for discussion about libc_ver(). > Currently libc_ver() can be used for other executable. See issue26544 for discussion about libc_ver(). Oh, my PR had a bug: it ignores executable. Fixed: it now only uses os.confstr() if the executable argument is not set. New changeset 476b113ed8531b9fbb0bd023a05eb3af21996600 by Victor Stinner in branch 'master': bpo-35389: platform.libc_ver() uses os.confstr() (GH-10891) > Quick benchmark on Fedora 29: > python3 -m perf command ./python -S -c 'import platform; platform.libc_ver()' > 94.9 ms +- 4.3 ms -> 33.2 ms +- 1.4 ms: 2.86x faster (-65%) Oops, my benchmark in the commit message was wrong, it includes the startup time... Correct benchmark says 44,538x faster, it's *WAY* better! [regex] 56.1 ms +- 1.9 ms -> [confstr] 1.26 us +- 0.04 us: 44537.88x faster (-100%) Nice. I never liked the "parse the executable approach", but there wasn't anything better available at the time. > Nice. I never liked the "parse the executable approach", but there wasn't anything better available at the time. Aha. Well, it's not perfect but it works and was fast enough (since libc_ver() is never used in performance critical code) :-) I'm now curious and looked at the history of this feature. "man confstr" says: > _CS_GNU_LIBC_VERSION (GNU C library only; since glibc 2.3.2) glibc 2.3.2 has been released in March 2003, so it's fine, we should get this constant in most "modern" Linux (using glibc) in 2018 :-) man gnu_get_libc_version says: > These functions first appeared in glibc in version 2.1. glibc 2.1 has been released in Feb 1999. Using this function might provide even better compatibility but I'm not sure that it's worth it to use it. As I wrote, I prefer to not write a new function C, if os.confstr() can already be used in pure Python! Sadly, these functions (confstr(_CS_GNU_LIBC_VERSION) / gnu_get_libc_version()) are specific to glibc. Sorry, I'm not interested to support other libc, I mostly care about Fedora, sorry :-) New changeset 848acf7249b5669d73d70a7cb6e5ab60689cf825 by Victor Stinner in branch 'master': bpo-35389: test.pythoninfo logs platform.libc_ver (GH-10951) New changeset a719c8f4bd3cc5b1d98700c15c4a818f4d5617a4 by Victor Stinner in branch 'master': bpo-35389: platform.platform() calls libc_ver() without executable (GH-14418)
https://bugs.python.org/issue35389
CC-MAIN-2021-21
en
refinedweb
MCP79410RK (community library) Summary Particle driver for MCP79410 RTC Example Build Testing Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully. Library Read Me This content is provided by the library maintainer and has not been validated or approved. MCP79410RK Particle library for MCP79410 real-time clock (RTC) chip with I2C interface You can find the full browsable API docs here. Hardware [image unavailable] This is a sample board that includes an MCP79410 in an Adafruit FeatherWing form-factor. The documentation and Eagle CAD files for this project can be found here. The MCP79410 is a tiny 8-pin chip: [image unavailable] With a simple application circuit: [image unavailable] You only need some pull-up resistors, a 32.768 kHz crystal, and a few capacitors. It connects by I2C and uses addresses 0x6f (registers and SRAM) and 0x57 (EEPROM). The MFP pull-up The MFP (multi-function pin) is useful when waking up from SLEEP_MODE_DEEP based on time on Gen 3 devices (Argon, Boron, Xenon). You connect MFP to D8 for this purpose. However, you must be careful: Using D8 as a wake-up pin is active high, rising. The MFP is open-collector and requires a pull-up. However, in SLEEP_MODE_DEEP, an internal pull-down (about 13K) is applied to D8 so it doesn't float if not connected. Thus you must use a small-resistance pull-up or the signal won't go high enough because of the conflicting pull-up and pull-down. A 2.2K pull-up works fine for this purpose. Or you could use an actual inverter or transistor, if you prefer. If you use a typical 10K pull-up on MFP, in wake the signal will only reach 1.9V because of the 10K pull-up and the 13K internal pull-down, and that's too low of a voltage to register as high and end sleep mode. Common Patterns RTC Clock Synchronization By default, bi-directional clock synchronization is done. - During setup() if Time is not valid but the RTC is, then Time is set from RTC. This is useful because Time is not maintained in SLEEP_MODE_DEEP. - During loop(), after Time is synchronized with the cloud, the RTC is updated. This only happens once at boot. You must call the setup() and loop() methods, as shown in the following example. Using RTC to wake from SLEEP_MODE_DEEP Here's a simple program to wake from SLEEP_MODE_DEEP: #include "MCP79410RK.h" SerialLogHandler logHandler; MCP79410 rtc; void setup() { // Make sure you call rtc.setup() from setup! rtc.setup(); } void loop() { // Make sure you call rtc.loop() from loop! rtc.loop(); // Wait 20 seconds after boot to try sleep if (millis() > 20000) { if (rtc.setAlarm(10)) { Log.info("About to SLEEP_MODE_DEEP for 10 seconds"); System.sleep(SLEEP_MODE_DEEP); } else { Log.info("Failed to setAlarm, not sleeping"); delay(10000); } } } For Gen 2 devices, you might do something like: System.sleep(SLEEP_MODE_DEEP, 10); but using the RTC it would be: if (rtc.setAlarm(10)) { System.sleep(SLEEP_MODE_DEEP); } The reason for the error check around setAlarm is that the alarm can only be set after the RTC has been set to the correct time. Upon cold boot (no 3V3, no battery) the RTC won't be set and sleep cannot be used until the first clock synchronization. This is true even when delaying by seconds. You can also use rtc.isRTCValid() to determine if the RTC is believed to be correct. If isRTCValid() returns true, then setAlarm() will typically return true as well. This is handy if you want to preflight setAlarm() before turning off the network connection, for example. Using SRAM The MCP79410 contains 64 bytes of battery-backed SRAM. This is handy if you want to store data data. This can be written to quickly and does not wear out. The data is preserved by the backup battery (CR1220 in the design above) when there is no power on 3V3. The API works like the EEPROM API in Device OS: int a = 1234; rtc.sram().put(0, a); a = 0; rtc.sram().get(0, a); // a = 1234 again The first parameter is 0, and that is a byte offset. Make sure you leave enough room! The next data should be written at 4, leaving enough room to save the 4-byte (32-bit) integer. You can get and put simple data bytes (int, bool, uint32_t, etc.) and struct. Note that you cannot put a String variable or char *! You need to set aside enough bytes and copy the string to the bytes. Using EEPROM The MCP79410 contains 128 bytes of byte-writable EEPROM. This is slower to write to, but the data is preserved forever, even with no battery. The EEPROM can also wear out; it's rated for 1 million erase-write cycles for each byte. The API works like the EEPROM API in Device OS: int a = 1234; rtc.eeprom().put(0, a); a = 0; rtc.eeprom().get(0, a); // a = 1234 again Using the Protected EEPROM Block In addition to the 128 bytes of EEPROM, there's a special block of 8 additional bytes of EEPROM. This is harder to access and accidentally erase. This is often used for things like MAC addresses, however I think it would be perfect for storing board revision and capability information. The 7-board-id-set example shows how to store data in the protected EEPROM. The 8-board-id example shows how to read it: // This example shows how you'd read an 8-byte structure with information about your board (that has the RTC on it) // from protected block EEPROM. // // You'd run code like the 7-board-id-set example to set the values during manufacture. // // You'd add code like the following to your own firmware to read the structure and presumably do something with it other // than just print it to debug serial. #include "MCP79410RK.h" SYSTEM_THREAD(ENABLED); SerialLogHandler logHandler; MCP79410 rtc; typedef union { struct { uint16_t boardType; uint16_t boardVersion; uint32_t featureFlags; } data; uint8_t bytes[MCP79410::EEPROM_PROTECTED_BLOCK_SIZE]; // 8 bytes } BoardId; void setup() { rtc.setup(); // Wait for a USB serial connection for up to 10 seconds. This is just so you can see the Log.info // statement, you'd probably leave this out of real code waitFor(Serial.isConnected, 10000); // Read the BoardId structure from the protected block EEPROM BoardId boardId; rtc.eeprom().protectedBlockRead(boardId.bytes); Log.info("boardType=%04x boardVersion=%04x featureFlags=%08x", boardId.data.boardType, boardId.data.boardVersion, boardId.data.featureFlags); } void loop() { rtc.loop(); } Version History 0.0.4 (2020-03-10) - Fix compiler warning for ambiguous requestFrom with 1.5.0-rc.2. 0.0.3 (2019-05-05) - Fixed a bug in polarity=false where the bit would not get cleared once set Browse Library Files
https://docs.particle.io/cards/libraries/m/MCP79410RK/
CC-MAIN-2021-21
en
refinedweb
This article will be a collection of Java performance measurement pointer. It describes how memory works in general and how Java use the heap and the stack. The article describes how to set the available memory for Java. It discusses then how to get the runtime and the memory consumption of a Java application. 1. Performance factors Important influence factors to the performance of a Java program can be separated into two main parts: Memory Consumption of the Java program Total runtime of a program In case the program is to some case a program which interacts with others also the response time is a very important fact of the performance of a program. This article does not cover concurrency. If you want to read about concurrency / multithreading please see Concurrency / Multithreading in Java 2. Memory handling in Java operating system than 4 GB of memory. Of course with a 64-bit OS this 4GB limitation does not exist anymore. 2.2. Memory in Java Java manages the memory for use. New objects created and placed in the heap. Once your application have no reference anymore to an object stated earlier Java objects are created and stored in the heap. The programming language does not offer the possibility to let the programmer decide if an object should be generated in the stack. But in certain cases it would be desirable to allocate an object on the stack, as the memory allocation on the stack is cheaper than. 2.6. Memory leaks The garbage collector of the JVM releases Java objects from memory as long as no other object refers to this object. If other objects still hold references to these objects, then the garbage collector of the JVM cannot release them. 3. Garbage Collector The JVM automatically re-collects the memory which is not used any more. The memory for objects which are not referred any more will be automatically released by the garbage collector. To see that the garbage collector starts working add the command line argument "-verbose:gc" to your virtual machine. An in-depth article about the garbage collector can be found here: Tuning Garbage Collection with the 5.0 Java Virtual Machine 4. Memory settings for Java virtual machine The JVM runs with fixed available memory. Once this memory is exceeded you will receive "java.lang.OutOfMemoryError". The JVM tries to make an intelligent choice about the available memory at startup (see Java settings for details) but you can overwrite the default with the following settings. To turn performance you can use certain parameters in the JVM. If you start your Java program from the command line use for example the following setting: java -Xmx1024m YourProgram. In Eclipse your can use the VM arguments in the run configuration. 5. Memory Consumption and Runtime In general a operation is considered as extensive if this operation has a long runtime or a high memory consumption. 5.1. Memory Consumption The total used / free memory of an program can be obtained in the program via java.lang.Runtime.getRuntime(); The runtime has several method which relates to the memory. The following code example demonstrate its usage. package test; import java.util.ArrayList; import java.util.List; public class PerformanceTest { private static final long MEGABYTE = 1024L * 1024L; public static long bytesToMegabytes(long bytes) { return bytes / MEGABYTE; } public static void main(String[] args) { // I assume you will know how to create a object Person yourself... List<Person> list = new ArrayList<Person>(); for (int i = 0; i <= 100000; i++) { list.add(new Person("Jim", "Knopf")); } // Get the Java runtime Runtime runtime = Runtime.getRuntime(); // Run the garbage collector runtime.gc(); // Calculate the used memory long memory = runtime.totalMemory() - runtime.freeMemory(); System.out.println("Used memory is bytes: " + memory); System.out.println("Used memory is megabytes: " + bytesToMegabytes(memory)); } } 5.2. Runtime of a Java program Use System.currentTimeMillis() to get the start time and the end time and calculate the difference. package de.vogella.performance); } } 6. Lazy initialization In case a variable is very expensive to create then sometimes it is good to defer the creation of this variable until the variable is needed. This is called lazy initialization. In general lazy initialization should only be used if a analysis has proven that this is really a very expensive operations. This is based on the fact that lazy initialization makes it more difficult to read the code. I use the project "de.vogella.performance.lazyinitialization" for the examples in this chapter. And a have a own field defined. package de.vogella.performance.lazyinitialization; public class MyField { } 6.1. Concurrency - Overview The simplest way is to use a synchronized block. Because then field access is always synchronized in case on read access this variant is slow. package de.vogella.performance.lazyinitialization; public class SynchronizedTest { private MyField myField; public synchronized MyField getMyField() { if (myField == null) { myField = new MyField(); } return myField; } } 7. Just-in-time (JIT) compiler The. 8. Using VisualVM (jvisualvm) 8.1. What is VisualVM? jvisualvm is a tool to analyse the runtime behavior of your Java application. It allows you to trace a running Java program and see its the memory and CPU consumption. You can also use it to create a memory heap dump to analyze the objects in the heap. [Visualvm] is part of the jdk distribution (as of Update 7 for jdk1.6). To start visualvm just click on jvisualvm.exe in the bin directory of your jdk installation. If this the bin directory is part of your patch, you can also start it with the jvisualvm command. 8.2. Creating a heap dump with You can use VisualVM to take a heap dump of a local running application. This creates a temporary file, until you explicitly save it. If you do not save the file, the file is deleted when the application from which you took the heapdump terminates. 9. Load Test A load test tool is a tool which emulates user and system actions on an application to measure the reaction and behavior of this system. A load test tool is commonly used for a web application to measure its behavior. Popular tools for load testing are: Apache JMeter - See Eclipse TPTP testing tool - See Grinder - See 10. Links and Literature
https://www.vogella.com/tutorials/JavaPerformance/article.html
CC-MAIN-2021-21
en
refinedweb
My .. Category : math I am sure about that there aren’t any mistakes in my class. I can’t give all codes because the program is big. The program is about radical numbers. Such as converting √8 to 2√2. I am going to use this class in my main python file. Here are the codes: def kokluyazdir(self,number): i = 1 .. I .. The Problem Statement is: You’ve to display the digits of a number. Take as input "n", the number for which digits have to be displayed. Print the digits of the number line-wise. #include<iostream> #include<cmath> using namespace std; int main(){ int n; cin>>n; int nod = 0; int temp = n; while(temp != 0){ temp = .. I implemented a CS paper to solve for geodesics on meshes. When I disable all optimizations I get exactly what I expect. If I enable optimizations however I get either NAN’s, infinity, or any number of weird results. I have checked all the stages of the algorithm and the error seems to happen on the .. I am currently working on a project for my Data Structures Course that involves a Binary Search Tree built using a doubly linked list-type format (where each node has a left, right, and parent pointer). The class managing the tree also has a root pointer and a current pointer (like a cursor). One of the .. I need to raise a fixed-point number to the third and fifth power, but the standard pow method doesn’t work. what to do in this situation Source: Windows Que.. Hey bro I need help to find prime numbers in to 10 Fibonacci numbers I mean between (0-1-1-2-3-5-8-13-21 -34). I think I should make two functions first for 10 Fibonacci numbers and second for how to find prime numbers and actually I know how to write them but how can I combine them??? (I need .. I am working on finding out the solution for problem 22 for project Euler in C++. This is the problem: Using names.txt (right click and ‘Save Link/Target As…’), a 46K text file containing over five-thousand first names, begin by sorting it into alphabetical order. Then working out the alphabetical value for each name, multiply this .. Recent Comments
https://windowsquestions.com/category/math/
CC-MAIN-2021-21
en
refinedweb
Subject: Re: [boost] Switch to CMake -- Analysis From: Steven Watanabe (watanabesj_at_[hidden]) Date: 2017-07-29 03:30:13 AMDG On 07/21/2017 03:32 PM, Mateusz Loskot via Boost wrote: > On 21 July 2017 at 21:53, Artyom Beilis via Boost <boost_at_[hidden]> wrote: >> >> 1. Support of basic features given by any normal REAL WORLD build >> system barely exists (library search configuration options etc) >> 2. Documentation isn't good and it was this for years. >> 3. Knowledge exits only for few people. >> >> Do you want a proof? Please find me a tutorial how to search a 3rd >> part library or complete reference documentation for that. > > I'd like to second Artyom's comments here. > Boost.GIL may serve as another example - with I/O API depending on > number of external libraries to support raster formats. > > Back in 2010, during and long after the GIL review, I remember Christian Henning > (author and maintainer of GIL) and myself, we were having hard times over long > months trying to complete the Boost.Build setup for the library. > I don't remember if eventually Christian released GIL with user > friendly support for > third-party libraries. > I'd been trying to convince myself [1] to like Boost.Build Extensions > [2] for some time. > I distinctly remember helping Christian with the Boost.Build support and the modules are present: However, the actual test suite seems to be using those Boost.Build extensions you mentioned (In addition to hard-coding Christian's local paths). Usage: b2 -sZLIB_LIBRARY_PATH=XXX -sZLIB_INCLUDE=YYY -or- b2 -sZLIB_SOURCE=ZZZ -or- In user-config.jam: using zlib : 1.2.7 : <search>/a/path <include>/a/path ; -or- Do nothing and let zlib be found in the system path. Inside the Jamfile: import ac ; using zlib ; exe test : src.cpp : # If zlib is found... [ ac.check-library /zlib//zlib : # link to it and also include the sources that use it <library>/zlib//zlib <source>code-using-zlib.cpp ] ; In Christ, Steven Watanabe Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2017/07/237805.php
CC-MAIN-2021-21
en
refinedweb
table of contents other versions - buster 4.16-2 - buster-backports 5.04-1~bpo10+1 - testing 5.10-1 - unstable 5.10-1 NAME¶fputwc, putwc - write a wide character to a FILE stream SYNOPSIS¶ #include <stdio.h> #include <wchar.h> wint_t fputwc(wchar_t wc, FILE *stream); wint_t putwc(wchar_t wc, FILE *stream); DESCRIPTION¶¶The fputwc() function returns wc if no error occurred, or WEOF to indicate an error. In the event of an error, errno is set to indicate the cause. ERRORS¶Apart from the usual ones, there is - EILSEQ - Conversion of wc to the stream's encoding fails. ATTRIBUTES¶For an explanation of the terms used in this section, see attributes(7). CONFORMING TO¶POSIX.1-2001, POSIX.1-2008, C99. NOTES¶The.
https://manpages.debian.org/buster-backports/manpages-dev/fputwc.3.en.html
CC-MAIN-2021-21
en
refinedweb
Archives Windows Workflow Foundation Workflow is one of the new core capabilities (along with WPF aka Avalon and WCF aka Indigo) being added in the .NET Framework 3.0 release later this year. It provides an in-process workflow engine to process rules, a designer for VS 2005 to enable both developers and non-developers to define custom workflow processes graphically, and a new Workflow namespace to integrate these within code. The official site to learn more about Windows Workflow Foundation can be found here. XNA Express Beta Available for Free Download (build XBOX games in C#). Slides + Samples Posted from my TechEd LINQ Talk One). Building and using a LINQ for SQL Class Library with ASP.NET 2.0 Great New Atlas Videos Published (All Free) Joe Stagner has been busy at work publishing more Atlas videos on the website (click here for the full video listing). ASP.NET 2.0 Tips/Tricks TechEd Talk Posted Many thanks to everyone in New Zealand who attended my "ASP.NET 2.0: Tips and Tricks" talk this morning. Details on CSS Changes for IE7 The Internet Explorer team maintains a really good blog here: that I recommend subscribing to for useful information. IIS7, ASP.NET 2.0, Atlas and VS 2005 End to End Talk Many thanks to everyone in New Zealand who attended my "ASP.NET: End-to-End - Building a Complete Web Application Using ASP.NET 2.0, Visual Studio 2005, and IIS7 (Parts 1 and 2)" talk this afternoon. 10 Worst Presentation Moments While procrastinating from finishing up my TechEd talks in my hotel room here in NZ, I came across this really funny link of a Microsoft UK employee's "Ten Worst Presentation Moments" that had me laughing out loud. Tip/Trick: Creating Sub-Web Projects using the VS 2005 Web Application Project Option One. Free Windows Live Writer Application Earlier this week the Windows Live team released the new Windows Live Writer blog posting and management tool that you can download and use for free. It is a desktop application that provides a really nice editing environment for writing blog posts (spell checker, layout manager, offline editing support, etc). Heading Off to TechEd New Zealand and Australia Tomorrow... I’m about to take off for a whirlwind business trip the next 10 days – and email and blog comment responses will unfortunately be very slow while I'm away.: I'm presenting September 6th in Phoenix Arizona The Arizona .NET User's Group has one really big meeting every year, and I've been fortunate to have been invited to come out and speak again for this year's event. It will be held all day Wednesday September 6th at the Orpheum theater in Phoenix, Arizona. I’ll be on stage for a little over 3 hours total, and topics I'll be covering include ASP.NET 2.0 Tips and Tricks, IIS 7.0, Atlas, LINQ/LINQ for SQL and more.). My ASP.NET 2.0 Tips, Tricks, Recipes and Gotchas "Highlights Page" Several people have sent me email lately asking for a suggested short-list of my best/favorite past blog posts to read (I’ve done 200 posts over the last 12 months and apparently it takes too long to read them all <g>). ASP.
https://weblogs.asp.net/scottgu/archive/2006/8
CC-MAIN-2021-21
en
refinedweb
Tutorial How To Use Typescript with Create React App Introduction Create React App provides you with a set of essential packages and configurations to start building a React application. Version 2.0 introduced official TypeScript support. This allowed for JavaScript users to write with TypeScript conventions in the React frontend framework. TypeScript is a powerful tool that helps write safer, self-documenting code, allowing developers to catch bugs faster. In this article, you will set up a React app with TypeScript using Create React App. Prerequisites To follow along with this article, you will need: - Node.js installed locally, which you can do by following How to Install Node.js and Create a Local Development Environment. - Some familiarity with React. You can take a look at our How To Code in React.js series. - Some familiarity with TypeScript conventions. - A modern code editor that supports code hinting is recommended. Visual Studio Code provides this through IntelliSense. This tutorial was verified with Node v15.13.0, npm v7.8.0, react-scripts v4.0.3, react v17.0.2, and typescript v4.2.3. Starting a TypeScript Create React App First, open your terminal window and navigate to the directory you want to build your project in. Then, use create-react-app with the --template typescript flag: - npx create-react-app cra-typescript-example --template typescript Your terminal window will display an initial message: Creating a new React app in [..]/cra-typescript-example. Installing packages. This might take a couple of minutes. Installing react, react-dom, and react-scripts with cra-template-typescript... The --template typescript flag instructs the Create React App script to build using cra-template-typescript template. This will add the main TypeScript package. Note: In previous versions of Create React App, it was possible to use the --typescript flag, but this option has since been deprecated. Once installation is complete, you will have a new React application with TypeScript support. Navigate to your project directory and open it in your code editor. Examining the tsconfig.json File You may have noticed that your terminal window displayed the following message: We detected TypeScript in your project (src/App.test.tsx) and created a tsconfig.json file for you. Your tsconfig.json has been populated with default values. The tsconfig.json file is used to configure TypeScript projects, similar to how package.json is for JavaScript projects. The tsconfig.json generated by Create React App will resemble the following: { -jsx" }, "include": [ "src" ] } This configuration establishes several compilation rules and versions of ECMAScript to compile to. Examining the App.tsx File Now, let’s open the App.tsx file: import React from 'react'; import logo from './logo.svg'; import './App.css'; function App() {; If you have used Create React App before, you may have noticed that this is very similar to the App.js file that Create React App generates for non-TypeScript builds. You get the same base as the JavaScript projects, but TypeScript support has been built into the configuration. Next, let’s create a TypeScript component and explore the benefits it can provide. Creating a TypeScript Component Start by creating a functional component in the App.tsx file: function MyMessage({ message }) { return <div>My message is: {message}</div>; } This code will take a message value from the props. It will render a div with the text My message is: and the message value. Now let’s add some TypeScript to tell this function that its message parameter should be a string. If you’re familiar with TypeScript, you may think you should try to append message: string to message. However, what you have to do in this situation is define the types for all props as an object. There are a few ways you can accomplish this. Defining the types inline: function MyMessage({ message }: { message: string }) { return <div>My message is: {message}</div>; } Defining a props object: function MyMessage(props: { message: string }) { return <div>My message is: {props.message}</div>; } Using a separate interface: interface MyMessageProps { message: string; } function MyMessage(props: MyMessageProps) { return <div>My message is: {props.message}</div>; } You can create an interface and move that into a separate file so your types can live elsewhere. This may seem like a lot of writing, so let’s see what we gain from writing a bit more. We’ve told this component that it only accepts a string as the message parameter. Now let’s try using this inside our App component. Using TypeScript Components Let’s use this MyMessage component by adding it to the render logic. Start typing out the component: <MyMessage If your code editor supports code hinting, you will notice that the component’s signature will appear as you start to type out the component. This helpfully provides you with the expected values and types without having to navigate back to the component. This is especially useful when dealing with multiple components in separate files. Examining Prop Types Now, start typing out the props: <MyMessage messa As soon as you start typing message, you can see what that prop should be: This displays (JSX attribute) message: string. Examining Type Errors Try passing a numeric value for message instead of a string: <MyMessage message={10} /> If we add a number as a message, TypeScript will throw an error and help you to catch these typing bugs. React won’t even compile if there are type errors like this: This displays Type 'number' is not assignable to type 'string'. Conclusion In this tutorial, you set up a React app with TypeScript using Create React App. You can create types for all your components and props. You can benefit from code hinting with modern code editors. And you will be able to catch errors faster since TypeScript won’t even let the project compile with type errors. If you’d like to learn more about TypeScript, check out our TypeScript topic page for exercises and programming projects.
https://www.digitalocean.com/community/tutorials/using-create-react-app-v2-and-typescript
CC-MAIN-2021-21
en
refinedweb
Artificial Neural Networks Optimization using Genetic Algorithm with Python This tutorial explains the usage of the genetic algorithm for optimizing the network weights of an Artificial Neural Network for improved performance. Complete Python Implementation The Python implementation for such project has three Python files: - GA.py for implementing GA functions. - ANN.py for implementing ANN functions. - Third file for calling such functions through a number of generations. This is the main file of the project. Main Project File Implementation The third file is the main file because it connects all functions. It reads the features and the class labels files, filters features based on the standard deviation, creates the ANN architecture, generates the initial solutions, loops through a number of generations by calculating the fitness values for all solutions, selecting best parents, applying crossover and mutation, and finally creating the new population. Its implementation is given below. Such a file defines the GA parameters such as a number of solutions per population, number of selected parents, mutation percent, and number of generations. You can try different values for them. import numpy import GA import pickle import ANN import matplotlib.pyplot f = open("dataset_features.pkl", "rb") data_inputs2 = pickle.load(f) f.close() features_STDs = numpy.std(a=data_inputs2, axis=0) data_inputs = data_inputs2[:, features_STDs>50] f = open("outputs.pkl", "rb") data_outputs = pickle.load(f) f.close() #Genetic algorithm parameters: # Mating Pool Size (Number of Parents) # Population Size # Number of Generations # Mutation Percent sol_per_pop = 8 num_parents_mating = 4 num_generations = 1000 mutation_percent = 10 #Creating the initial population. initial_pop_weights = [] for curr_sol in numpy.arange(0, sol_per_pop): HL1_neurons = 150 input_HL1_weights = numpy.random.uniform(low=-0.1, high=0.1, size=(data_inputs.shape[1], HL1_neurons)) HL2_neurons = 60 HL1_HL2_weights = numpy.random.uniform(low=-0.1, high=0.1, size=(HL1_neurons, HL2_neurons)) output_neurons = 4 HL2_output_weights = numpy.random.uniform(low=-0.1, high=0.1, size=(HL2_neurons, output_neurons)) initial_pop_weights.append(numpy.array([input_HL1_weights, HL1_HL2_weights, HL2_output_weights])) pop_weights_mat = numpy.array(initial_pop_weights) pop_weights_vector = GA.mat_to_vector(pop_weights_mat) best_outputs = [] accuracies = numpy.empty(shape=(num_generations)) for generation in range(num_generations): print("Generation : ", generation) # converting the solutions from being vectors to matrices. pop_weights_mat = GA.vector_to_mat(pop_weights_vector, pop_weights_mat) # Measuring the fitness of each chromosome in the population. fitness = ANN.fitness(pop_weights_mat, data_inputs, data_outputs, activation="sigmoid") accuracies[generation] = fitness[0] print("Fitness") print(fitness) # Selecting the best parents in the population for mating. parents = GA.select_mating_pool(pop_weights_vector, fitness.copy(), num_parents_mating) print("Parents") print(parents) # Generating next generation using crossover. offspring_crossover = GA.crossover(parents, offspring_size=(pop_weights_vector.shape[0]-parents.shape[0], pop_weights_vector.shape[1])) print("Crossover") print(offspring_crossover) # Adding some variations to the offsrping using mutation. offspring_mutation = GA.mutation(offspring_crossover, mutation_percent=mutation_percent) print("Mutation") print(offspring_mutation) # Creating the new population based on the parents and offspring. pop_weights_vector[0:parents.shape[0], :] = parents pop_weights_vector[parents.shape[0]:, :] = offspring_mutation pop_weights_mat = GA.vector_to_mat(pop_weights_vector, pop_weights_mat) best_weights = pop_weights_mat [0, :] acc, predictions = ANN.predict_outputs(best_weights, data_inputs, data_outputs, activation="sigmoid") print("Accuracy of the best solution is : ", acc) matplotlib.pyplot.plot(accuracies, linewidth=5, color="black") matplotlib.pyplot.xlabel("Iteration", fontsize=20) matplotlib.pyplot.ylabel("Fitness", fontsize=20) matplotlib.pyplot.xticks(numpy.arange(0, num_generations+1, 100), fontsize=15) matplotlib.pyplot.yticks(numpy.arange(0, 101, 5), fontsize=15) f = open("weights_"+str(num_generations)+"_iterations_"+str(mutation_percent)+"%_mutation.pkl", "wb") pickle.dump(pop_weights_mat, f) f.close() Based on 1,000 generations, a plot is created at the end of this file using Matplotlib visualization library that shows how the accuracy changes across each generation. It is shown in the next figure. After 1,000 iterations, the accuracy is more than 97%. This is compared to 45% without using an optimization technique as in the previous tutorial. This is an evidence about why results might be bad not because there is something wrong in the model or the data but because no optimization technique is used. Of course, using different values for the parameters such as 10,000 generations might increase the accuracy. At the end of this file, it saves the parameters in matrix form to the disk for use later. GA.py Implementation The GA.py file implementation is in listed below. Note that the mutation() function accepts the mutation_percent parameter that defines the number of genes to change their values randomly. It is set to 10% in the main file. Such a file holds the 2 new functions mat_to_vector() and vector_to_mat(). import numpy import random # Converting each solution) # Converting each solution from vector to matrix.) def select_mating_pool(pop, fitness, num_parents): # Selecting the best individuals in the current generation as parents for producing the offspring of the next generation. parents = numpy.empty((num_parents, pop.shape[1])) for parent_num in range(num_parents): max_fitness_idx = numpy.where(fitness == numpy.max(fitness)) max_fitness_idx = max_fitness_idx[0][0] parents[parent_num, :] = pop[max_fitness_idx, :] fitness[max_fitness_idx] = -99999999999 return parents def crossover(parents, offspring_size): offspring = numpy.empty(offspring_size) # The point at which crossover takes place between two parents. Usually, it is at the center. crossover_point = numpy.uint8(offspring_size[1]/2) for k in range(offspring_size[0]): # Index of the first parent to mate. parent1_idx = k%parents.shape[0] # Index of the second parent to mate. parent2_idx = (k+1)%parents.shape[0] # The new offspring will have its first half of its genes taken from the first parent. offspring[k, 0:crossover_point] = parents[parent1_idx, 0:crossover_point] # The new offspring will have its second half of its genes taken from the second parent. offspring[k, crossover_point:] = parents[parent2_idx, crossover_point:] return offspring def mutation(offspring_crossover, mutation_percent): num_mutations = numpy.uint8((mutation_percent*offspring_crossover.shape[1])/100) mutation_indices = numpy.array(random.sample(range(0, offspring_crossover.shape[1]), num_mutations)) # Mutation changes a single gene in each offspring randomly. for idx in range(offspring_crossover.shape[0]): # The random value to be added to the gene. random_value = numpy.random.uniform(-1.0, 1.0, 1) offspring_crossover[idx, mutation_indices] = offspring_crossover[idx, mutation_indices] + random_value return offspring_crossover ANN.py Implementation Finally, the ANN.py is implemented according to the code listed below. It contains the implementation of the activation functions (sigmoid and ReLU) in addition to the fitness() and predict_outputs() functions to calculate the accuracy. import numpy def sigmoid(inpt): return 1.0 / (1.0 + numpy.exp(-1 * inpt)) def relu(inpt): result = inpt result[inpt < 0] = 0 return result For Contacting the Author - KDnuggets: - YouTube: - TowardsDataScience: - GitHub: Original. Reposted with permission.. Related: - Artificial Neural Network Implementation using NumPy and Image Classification - Genetic Algorithm Implementation in Python - Is Learning Rate Useful in Artificial Neural Networks?
https://www.kdnuggets.com/2019/03/artificial-neural-networks-optimization-genetic-algorithm-python.html/2
CC-MAIN-2021-21
en
refinedweb
This article covers the Tkinter Toplevel widget Tkinter works with a hierarchical system, where there is one root window from where all other widgets and windows expand from. Calling the Tk() function initializes the whole Tkinter application. Often while creating a GUI, you wish to have more than just one window. Instead of calling the Tk() function again (which is the incorrect way) you should use the Tkinter Toplevel widget instead. Differences Calling the Tk() function creates a whole Tkinter instance, while calling the Toplevel() function only creates a window under the root Tkinter instance. Destroying the Tk() function instance will destroy the whole GUI, whereas destroying the Toplevel() function only destroys that window and it’s child widgets, but not the whole program. Toplevel syntax window = Toplevel(options.....) Toplevel Options List of all relevant options available for the Toplevel widget. Toplevel Example This is a simple Toplevel function example simply to demonstrate how it works. from tkinter import * root = Tk() window = Toplevel() root.mainloop() This isn’t a very practical approach though, so we’ll discuss a more real life scenario in the next example. Toplevel Example 2 In this example we’ll show you another way calling a new window. In most software, you start off with one window and can spawn multiple windows such as a “Settings Window”. This is in contrast to the previous example where we started directly with 2 windows. The code below creates a button, that when clicked calls a function that creates a new Toplevel window with a widget in it. You might find this approach more suitable for your GUI. from tkinter import * def NewWindow(): window = Toplevel() window.geometry('150x150') newlabel = Label(window, text = "Settings Window") newlabel.pack() root = Tk() root.geometry('200x200') myframe = Frame(root) myframe.pack() mybutton = Button(myframe, text = "Settings", command = NewWindow) mybutton.pack(pady = 10) root.mainloop() Only use multiple windows when it makes sense to have more than one. It makes sense to have a separate window dedicated to settings, especially when it’s a large software when dozens of different settings. Toplevel Methods Another benefit of using Toplevel is the dozen different methods available to it that provide extra functionality. Amongst the most useful these methods are withdraw() and deiconify() which can be used to withdraw and display the window respectively. Useful if you want to make the window disappear without destroying it. And also the resizable, maxsize(), minsize() and title methods. Explanations mentioned below in the table. Most of these methods are self-explanatory enough that you shouldn’t require any explanation beyond what is written here. The rest will be covered in another article soon, or you can always google it if you ever need it. This marks the end of the Python Tkinter Toplevel article. Any suggestions or contributions for CodersLegacy are more than welcome. Relevant questions regarding the article material can be asked in the comments section below. To learn about other awesome widgets in Tkinter, follow this link!
https://coderslegacy.com/python/tkinter-toplevel/
CC-MAIN-2021-21
en
refinedweb
Author: Kevin B. Kenny <[email protected]> State: Draft Type: Project Vote: Pending Created: 25-Oct-2017 Tcl-Version: 8.7 Keywords: assertion, pragma, type, alias, compilation Post-History: Tcl-Branch: tip-480 Abstract This TIP proposes a new ensemble in the ::tcl namespace, ::tcl::pragma, that will provide a place to install commands that make structural assertions about Tcl code. Initially, two subcommands will be provided: ::tcl::pragma type, which asserts that Tcl values are lexically correct objects of a given data type, and ::tcl::pragma noalias, which describes the possible aliasing relationships among a group of variables. The assertions are provided in an ensemble, so that the set of available assertions can be expanded in the future as additional opportunities are discovered to make useful claims about program and data structure. Motivation Tcl, of course, is a typeless language: every value is a string. Moreover, it is an intensely dynamic language: the association of names with commands and variables is made very late, sometimes only when code is executed that searches for a variable by name. Nevertheless, often a programmer's intention is to have values from a restricted set of strings, or to make restrictions on what names may address what variables. For instance, it may be known that a given piece of code is prepared to accept only numeric data, well-formed lists, Boolean values, or some other restricted type of data as its input. Similarly, a great many programs that import variables using forms such as global, variable, upvar, namespace upvar, and the custom variable resolutions of systems like TclOO cannot function correctly if two or more of their variable names actually designate the same variable. A procedure like proc collect {inputVar} { upvar 1 $inputVar inputs variable collection for {set i 0} {$i < [llength $inputs]} {incr i} { lappend collection [lindex $inputs $i] } } will surely yield surprising results if called with collection as its parameter! Giving the programmer the capability to specify restrictions on data types and alias relationships would have multiple advantages: It documents what is expected. In particular, procedure, method and lambda parameters can have assertions about their structure early in a procedure, informing callers what preconditions must be met. It fails early. Rather than having mistaken values or unexpected aliases run some way into a procedure and then fail mysteriously or even silently, it can yield an informative message at the first sign of a violated condition. It aids with code optimization. While data type restrictions can be deduced by a compiler with considerable effort (1), making them explicit can still lead to more performant code. Alias restrictions are considerably harder to deduce, and the problem is Turing-complete in general. Unexpected aliases can be created at points in the program far remote from a procedure. Code like uplevel #0 {upvar 0 ::path::to::variable ::some::other::thing} will create an alias without any procedure accessing one or another of the variables being any the wiser. Proposal The ::tcl::pragma ensemble will be added. Initially, it will have two ::tcl::pragma type and ::tcl::pragma noalias. tcl::pragma type The ::tcl::pragma type command will have the syntax: ::tcl::pragma type typeName $value1 $value2... In this usage, typeName is a description of the acceptable type of the given values. The values will be checked for whether they are instances of the given type, and a run-time error will be thrown if any value is not. Initially, the following types will be supported: boolean: Indicates that the value is a Boolean: 0, 1, off, on, true, false, yes, no: in general, a value that will pass the test of string is boolean -strict. int32: Indicates that the value is an integer, small enough to fit in a C intvalue on the current platform. int64: Indicates that the value is an integer, small enough to fit in a Tcl_WideIntvalue on the current platform. integer: Indicates that the value is an integer, without constraint on its size. double: Indicates that the value is representable as a double-precision floating point number (including the special values for Infinity and Not-a-Number). number: Indicates that the value is representable as a number, which is the union of values accepted as an integer and values accepted as a double. list: Indicates that the value is representable as a Tcl list. The elements of the list are not constrained. dict: : Indicates that the value is representable as a Tcl dictionary. The keys and values of the dictionary are not constrained. It is anticipated that further TIP's will be proposed that expand the available set of types. In particular, lists and dictionaries with constrained content types are foreseen as being useful things to include. Note that this command operates on values, not variables. A command like: ::tcl::pragma type int $a does not declare that a is an integer variable, and does not require future assigmnents to it to have the given type. It merely asserts that at the current point in the program, the value of a will be an integer small enough to fit in a C int. One may think of this assertion as syntactic sugar for the longer codeburst: if {![string is integer -strict $a]} { return -code error -level 0 "expected an integer but got $a" } and in fact the bytecode compiler will be free to compile that, or similar code. (The description is slightly oversimplified, since other error options must also be manipulated.) tcl::pragma noalias The syntax for the ::tcl::pragma noalias command shall be: ::tcl::pragma noalias set1 set2... In this usage, set1, set2, ... are lists of variable names. The syntax expresses the assertion that variables that are mentioned in the call are not aliases of each other at the time the command is executed, except that variables in the same set are permitted to alias. The most common usage will be simply to use singleton sets. For instance, the collect procedure above might contain ::tcl::pragma noalias inputs collection following the command upvar 1 $inputsVar inputs This command would have the effect of asserting that inputs and collection designate distinct variables, avoiding strange behaviour of modifying the inputs while an iteration is in progress. It is possible for any combination of aliases to be permitted by including the possibility on the command line. For instance to assert that a may be an alias of b or c, but b and c must not alias each other, the command: ::tcl::pragma noalias {a b} {a c} might be used. (The program could specify, redundantly, b and c on the command line, but the noalias command will enforce that any variable mentioned anywhere in its arguments is not aliased to any other, except as specified. As a final note, it is anticipated that ::tcl::pragma noalias {*}[info locals] will be a common usage - most programs do not tolerate any unexpected aliasing at all. It is therefore further anticipated that this specific usage may receive special handling in the implementation. As with type, noalias is an assertion of the state of the program at a given point in the flow of execution. It does not establish a permanent constraint. A subsequent command such as upvar may change the aliasing relation, and there will be no prevention of such a change. It is worth noting that the necessary interfaces to implement this command are not yet available at the Tcl level at all. A Tcl script has no easy way to determine whether one variable is an alias for another. This command has no counterpart in today's Tcl. A quick view may lead one to suspect that noalias will require quadratic time to check the relationships at runtime. In at least the common cases, though, it is to be expected that noalias will run in time O(N), where N is the number of included variables. Instead of comparing all pairs, it will be easier to maintain a hash table of variable addresses, and check for collisions by looking for existing hash entries. Discussion The Naming of Names An appropriate name for this ensemble is a difficult choice. A very early draft of this proposal, circulated privately, suggested ::tcl::assume (since it was seen as a claim that it is safe for a compiler to make a given assumption). This name was roundly rejected by the reviewers. An alternative that was counterproposed was ::tcl::assert. The disadvantage to the latter name is that it is easy to imagine a piece of code wanting to namespace import both ::tcl::assert and ::control::assert leading to a name collision. Moreover, ::tcl::assert does not take a Boolean expression but rather a different sort of expression of a constraint. The similarity of the names would therefore be confusing. In names, as in many other aspects of life, "the good ones are already taken." Runtime Behaviour The assertions described in this TIP are not without cost at runtime. In an interpreted environment, it may be desirable to control, on a per-namespace basis, whether the assertions are enforced. In a compiled environment, many of these assertions will either enable more aggressive optimization, be removable themselves with appropriate analysis to prove they are unnecessary, or both. For this reason, the proponent wishes to consider enabling and disabling of structural assertions to be Out Of Scope at the present time. If it does prove to be necessary, it can be done with a mechanism analogous to the way that today's ::control::assert works. References - Kenny, Kevin B. and Donal K. Fellows. 'The State of Quadcode 2017.' Proc. 24th Annual Tcl/Tk Conf. Houston, Tex.: Tcl Community Association, October 2017.
https://core.tcl-lang.org/tips/doc/trunk/tip/480.md
CC-MAIN-2021-21
en
refinedweb
For the codelabs in this pathway, you will be building a Dice Roller Android app. When the user "rolls the dice," a random result will be generated. The result takes into account the number of sides of the dice. For example, only values from 1-6 can be rolled from a 6-sided dice. This is what the final app will look like. To help you focus on the new programming concepts for this app, you will use the browser-based Kotlin programming tool to create core app functionality. The program will output your results to the console. Later you will implement the user interface in Android Studio. In this first codelab, you will create a Kotlin program that simulates rolling dice and outputs a random number, just like a dice would. Prerequisites - How to open, edit, and run code in - Create and run a Kotlin program that uses variables and functions, and prints a result to the console. - Format numbers within text using a string template with the ${variable}notation. What you'll learn - How to programmatically generate random numbers to simulate dice rolls. - How to structure your code by creating a Diceclass with a variable and a method. - How to create an object instance of a class, modify its variables, and call its methods. What you'll build - A Kotlin program in the browser-based Kotlin programming tool that can perform a random dice roll. What you need - A computer with an internet connection Games often have a random element to them. You could earn a random prize or advance a random number of steps on the game board. In your everyday life, you can use random numbers and letters to generate safer passwords! Instead of rolling actual dice, you can write a program that simulates rolling dice for you. Each time you roll the dice, the outcome can be any number within the range of possible values. Fortunately, you don't have to build your own random-number generator for such a program. Most programming languages, including Kotlin, have a built-in way for you to generate random numbers. In this task, you will use the Kotlin code to generate a random number. Set up your starter code - In your browser, open the website. - Delete all the existing code in the code editor and replace it with the code below. This is the main()function you worked with in earlier codelabs. fun main() { } Use the random function To roll a dice, you need a way to represent all the valid dice roll values. For a regular 6-sided dice, the acceptable dice rolls are: 1, 2, 3, 4, 5, and 6. Previously, you learned that there are types of data like Int for integer numbers and String for text. IntRange is another data type, and it represents a range of integer numbers from a starting point to an endpoint. IntRange is a suitable data type for representing the possible values a dice roll can produce. - Inside your main()function, define a variable as a valcalled diceRange. Assign it to an IntRangefrom 1 to 6, representing the range of integer numbers that a 6-sided dice can roll. val diceRange = 1..6 You can tell that 1..6 is a Kotlin range because it has a start number, two dots, followed by an ending number (no spaces in between). Other examples of integer ranges are 2..5 for the numbers 2 through 5, and 100..200 for the numbers 100 through 200. Similar to how calling println() tells the system to print the given text, you can use a function called random() to generate and return a random number for you for a given range. As before, you can store the result in a variable. - Inside main(), define a variable as a valcalled randomNumber. - Make randomNumberhave the value of the result of calling random()on the diceRangerange, as shown below. val randomNumber = diceRange.random() Notice that you are calling random() on diceRange using a period, or dot, between the variable and the function call. You can read this as "generating a random number from diceRange". The result is then stored in the randomNumber variable. - To see your randomly generated number, use the string formatting notation (also called a "string template") ${randomNumber}to print it, as shown below. println("Random number: ${randomNumber}") Your finished code should look like this. fun main() { val diceRange = 1..6 val randomNumber = diceRange.random() println("Random number: ${randomNumber}") } - Run your code several times. Each time, you should see output as below, with different random numbers. Random number: 4 When you roll dice, they are real objects in your hands. While the code you just wrote works perfectly fine, it's hard to imagine that it's about actual dice. Organizing a program to be more like the things it represents makes it easier to understand. So, it would be cool to have programmatic dice that you can roll! All dice work essentially the same. They have the same properties, such as sides, and they have the same behavior, such as that they can be rolled. In Kotlin, you can create a programmatic blueprint of a dice that says that dice have sides and can roll a random number. This blueprint is called a class. From that class, you can then create actual dice objects, called object instances. For example, you can create a 12-sided dice, or a 4-sided dice. Define a Dice class In the following steps, you will define a new class called Dice to represent a rollable dice. - To start afresh, clear out the code in the main()function so that you end up with the code as shown below. fun main() { } - Below this main()function, add a blank line, and then add code to create the Diceclass. As shown below, start with the keyword class, followed by the name of the class, followed by an opening and closing curly brace. Leave space in between the curly braces to put your code for the class. class Dice { } Inside a class definition, you can specify one or more properties for the class using variables. Real dice can have a number of sides, a color, or a weight. In this task, you'll focus on the property of number of sides of the dice. - Inside the Diceclass, add a varcalled sidesfor the number of sides your dice will have. Set sidesto 6. class Dice { var sides = 6 } That's it. You now have a very simple class representing dice. Create an instance of the Dice class With this Dice class, you have a blueprint of what a dice is. To have an actual dice in your program, you need to create a Dice object instance. (And if you needed to have three dice, you would create three object instances.) - To create an object instance of Dice, in the main()function, create a valcalled myFirstDiceand initialize it as an instance of the Diceclass. Notice the parentheses after the class name, which denote that you are creating a new object instance from the class. fun main() { val myFirstDice = Dice() } Now that you have a myFirstDice object, a thing made from the blueprint, you can access its properties. The only property of Dice is its sides. You access a property using the "dot notation". So, to access the sides property of myFirstDice, you call myFirstDice.sides which is pronounced " myFirstDice dot sides". - Below the declaration of myFirstDice, add a println()statement to output the number of sidesof myFirstDice. println(myFirstDice.sides) Your code should look like this. fun main() { val myFirstDice = Dice() println(myFirstDice.sides) } class Dice { var sides = 6 } - Run your program and it should output the number of sidesdefined in the Diceclass. 6 You now have a Dice class and an actual dice myFirstDice with 6 sides. Let's make the dice roll! Make the Dice Roll You previously used a function to perform the action of printing cake layers. Rolling dice is also an action that can be implemented as a function. And since all dice can be rolled, you can add a function for it inside the Dice class. A function that is defined inside a class is also called a method. - In the Diceclass, below the sidesvariable, insert a blank line and then create a new function for rolling the dice. Start with the Kotlin keyword fun, followed by the name of the method, followed by parentheses (), followed by opening and closing curly braces {}. You can leave a blank line in between the curly braces to make room for more code, as shown below. Your class should look like this. class Dice { var sides = 6 fun roll() { } } When you roll a six-sided dice, it produces a random number between 1 and 6. - Inside the roll()method, create a val randomNumber. Assign it a random number in the 1..6range. Use the dot notation to call random()on the range. val randomNumber = (1..6).random() - After generating the random number, print it to the console. Your finished roll()method should look like the code below. fun roll() { val randomNumber = (1..6).random() println(randomNumber) } - To actually roll myFirstDice, in main(), call the roll()method on myFirstDice. You call a method using the "dot notation". So, to call the roll()method of myFirstDice, you type myFirstDice.roll()which is pronounced " myFirstDicedot roll()". myFirstDice.roll() Your completed code should look like this. fun main() { val myFirstDice = Dice() println(myFirstDice.sides) myFirstDice.roll() } class Dice { var sides = 6 fun roll() { val randomNumber = (1..6).random() println(randomNumber) } } - Run your code! You should see the result of a random dice roll below the number of sides. Run your code several times, and notice that the number of sides stays the same, and the dice roll value changes. 6 4 Congratulations! You have defined a Dice class with a sides variable and a roll() function. In the main() function, you created a new Dice object instance and then you called the roll() method on it to produce a random number. Currently you are printing out the value of the randomNumber in your roll() function and that works great! But sometimes it's more useful to return the result of a function to whatever called the function. For example, you could assign the result of the roll() method to a variable, and then move a player by that amount! Let's see how that's done. - In main()modify the line that says myFirstDice.roll(). Create a valcalled diceRoll. Set it equal to the value returned by the roll()method. val diceRoll = myFirstDice.roll() This doesn't do anything yet, because roll() doesn't return anything yet. In order for this code to work as intended, roll() has to return something. In previous codelabs you learned that you need to specify a data type for input arguments to functions. In the same way, you have to specify a data type for data that a function returns. - Change the roll()function to specify what type of data will be returned. In this case, the random number is an Int, so the return type is Int. The syntax for specifying the return type is: After the name of the function, after the parentheses, add a colon, space, and then the Intkeyword for the return type of the function. The function definition should look like the code below. fun roll(): Int { - Run this code. You will see an error in the Problems View. It says: A ‘return' expression is required in a function with a block body. You changed the function definition to return an Int, but the system is complaining that your code doesn't actually return an Int. "Block body" or "function body" refers to the code between the curly braces of a function. You can fix this error by returning a value from a function using a return statement at the end of the function body. - In roll(), remove the println()statement and replace it with a returnstatement for randomNumber. Your roll()function should look like the code below. fun roll(): Int { val randomNumber = (1..6).random() return randomNumber } - In main()remove the print statement for the sides of the dice. - Add a statement to print out the value of sidesand diceRollin an informative sentence. Your finished main()function should look similar to the code below. fun main() { val myFirstDice = Dice() val diceRoll = myFirstDice.roll() println("Your ${myFirstDice.sides} sided dice rolled ${diceRoll}!") } - Run your code and your output should be like this. Your 6 sided dice rolled 4! Here is all your code so far. fun main() { val myFirstDice = Dice() val diceRoll = myFirstDice.roll() println("Your ${myFirstDice.sides} sided dice rolled ${diceRoll}!") } class Dice { var sides = 6 fun roll(): Int { val randomNumber = (1..6).random() return randomNumber } } Not all dice have 6 sides! Dice come in all shapes and sizes: 4 sides, 8 sides, up to 120 sides! - In your Diceclass, in your roll()method, change the hard-coded 1..6to use sidesinstead, so that the range, and thus the random number rolled, will always be right for the number of sides. val randomNumber = (1..sides).random() - In the main()function, below and after printing the dice roll, change sidesof my FirstDiceto be set to 20. myFirstDice.sides = 20 - Copy and paste the existing print statement below after where you changed the number of sides. - Replace the printing of diceRollwith printing the result of calling the roll()method on myFirstDice. println("Your ${myFirstDice.sides} sided dice has rolled a ${myFirstDice.roll()}!") Your program should look like this. fun main() { val myFirstDice = Dice() val diceRoll = myFirstDice.roll() println("Your ${myFirstDice.sides} sided dice rolled ${diceRoll}!") myFirstDice.sides = 20 println("Your ${myFirstDice.sides} sided dice rolled ${myFirstDice.roll()}!") } class Dice { var sides = 6 fun roll(): Int { val randomNumber = (1..sides).random() return randomNumber } } - Run your program and you should see a message for the 6-sided dice, and a second message for the 20-sided dice. Your 6 sided dice rolled 3! Your 20 sided dice rolled 15! The idea of a class is to represent a thing, often something physical in the real world. In this case, a Dice class does represent a physical dice. In the real world, dice cannot change their number of sides. If you want a different number of sides, you need to get a different dice. Programmatically, this means that instead of changing the sides property of an existing Dice object instance, you should create a new dice object instance with the number of sides you need. In this task, you are going to modify the Dice class so that you can specify the number of sides when you create a new instance. Change the Dice class definition so you can supply the number of sides. This is similar to how a function can accept arguments for input. - Modify the Diceclass definition to accept an integer called numSides. The code inside your class does not change. class Dice(val numSides: Int) { // Code inside does not change. } - Inside the Diceclass, delete the sidesvariable, as you can now use numSides. - Also, fix the range to use numSides. Your Dice class should look like this. class Dice (val numSides: Int) { fun roll(): Int { val randomNumber = (1..numSides).random() return randomNumber } } If you run this code, you will see a lot of errors, because you need to update main() to work with the changes to the Dice class. - In main(), to create myFirstDicewith 6 sides, you must now supply in the number of sides as an argument to the Diceclass, as shown below. val myFirstDice = Dice(6) - In the print statement, change sidesto numSides. - Below that, delete the code that changes sidesto 20, because that variable does not exist anymore. - Delete the printlnstatement underneath it as well. Your main() function should look like the code below, and if you run it, there should be no errors. fun main() { val myFirstDice = Dice(6) val diceRoll = myFirstDice.roll() println("Your ${myFirstDice.numSides} sided dice rolled ${diceRoll}!") } - After printing the first dice roll, add code to create and print a second Diceobject called mySecondDicewith 20 sides. val mySecondDice = Dice(20) - Add a print statement that rolls and prints the returned value. println("Your ${mySecondDice.numSides} sided dice rolled ${mySecondDice.roll()}!") - Your finished main()function should look like this. { val randomNumber = (1..numSides).random() return randomNumber } } - Run your finished program, and your output should look like this. Your 6 sided dice rolled 5! Your 20 sided dice rolled 7! When writing code, concise is better. You can get rid of the randomNumber variable and return the random number directly. - Change the returnstatement to return the random number directly. fun roll(): Int { return (1..numSides).random() } In the second print statement, you put the call to get the random number into the string template. You can get rid of the diceRoll variable by doing the same thing in the first print statement. - Call myFirstDice.roll()in the string template and delete the diceRollvariable. The first two lines of your main()code now look like this. val myFirstDice = Dice(6) println("Your ${myFirstDice.numSides} sided dice rolled ${myFirstDice.roll()}!") - Run your code and there should be no difference in the output. This is your final code after refactoring it . fun main() { val myFirstDice = Dice(6) println("Your ${myFirstDice.numSides} sided dice rolled ${myFirstDice.roll()}!") val mySecondDice = Dice(20) println("Your ${mySecondDice.numSides} sided dice rolled ${mySecondDice.roll()}!") } class Dice (val numSides: Int) { fun roll(): Int { return (1..numSides).random() } } { return (1..numSides).random() } } - Call the random()function on an IntRangeto generate a random number: (1..6).random() - Classes are like a blueprint of an object. They can have properties and behaviors, implemented as variables and functions. - An instance of a class represents an object, often a physical object, such as a dice. You can call the actions on the object and change its attributes. - You can supply values to a class when you create an instance. For example: class Dice(val numSides: Int)and then create an instance with Dice(6). - Functions can return something. Specify the data type to be returned in the function definition, and use a returnstatement in the function body to return something. For example: fun example(): Int { return 5 } Do the following: - Give your Diceclass another attribute of color and create multiple instances of dice with different numbers of sides and colors! - Create a Coinclass, give it the ability to flip, create an instance of the class and flip some coins! How would you use the random()function with a range to accomplish the coin flip?
https://developer.android.com/codelabs/basic-android-kotlin-training-create-dice-roller-in-kotlin
CC-MAIN-2021-21
en
refinedweb
I have a 2D array of Integers. I want them to be put into a HashMap. But I want to access the elements from the HashMap based on Array Index. Something like: For A[2][5], map.get(2,5) which returns a value associated with that key. But how do I create a hashMap with a pair of keys? Or in general, multiple keys: Map<((key1, key2,..,keyN), Value) in a way that I can access the element with using get(key1,key2,…keyN). EDIT : 3 years after posting the question, I want to add a bit more to it I came across another way for NxN matrix. Array indices, i and j can be represented as a single key the following way: int key = i * N + j; //map.put(key, a[i][j]); // queue.add(key); And the indices can be retrevied from the key in this way: int i = key / N; int j = key % N; There are several options: 2 dimensions Map of maps Map<Integer, Map<Integer, V>> map = //... //... map.get(2).get(5); Wrapper key object public class Key { private final int x; private final int y; public Key(int x, int y) { this.x = x; this.y = y; } @Override public boolean equals(Object o) { if (this == o) return true; if (!(o instanceof Key)) return false; Key key = (Key) o; return x == key.x && y == key.y; } @Override public int hashCode() { int result = x; result = 31 * result + y; return result; } } Implementing equals() and hashCode() is crucial here. Then you simply use: Map<Key, V> map = //... and: map.get(new Key(2, 5)); Table from Guava Table<Integer, Integer, V> table = HashBasedTable.create(); //... table.get(2, 5); Table uses map of maps underneath. N dimensions Notice that special Key class is the only approach that scales to n-dimensions. You might also consider: Map<List<Integer>, V> map = //... but that’s terrible from performance perspective, as well as readability and correctness (no easy way to enforce list size). Maybe take a look at Scala where you have tuples and case classes (replacing whole Key class with one-liner). When you create your own key pair object, you should face a few thing. First, you should be aware of implementing hashCode() and equals(). You will need to do this. Second, when implementing hashCode(), make sure you understand how it works. The given user example public int hashCode() { return this.x ^ this.y; } is actually one of the worst implementations you can do. The reason is simple: you have a lot of equal hashes! And the hashCode() should return int values that tend to be rare, unique at it’s best. Use something like this: public int hashCode() { return (X << 16) + Y; } This is fast and returns unique hashes for keys between -2^16 and 2^16-1 (-65536 to 65535). This fits in almost any case. Very rarely you are out of this bounds. Third, when implementing equals() also know what it is used for and be aware of how you create your keys, since they are objects. Often you do unnecessary if statements cause you will always have the same result. If you create keys like this: map.put(new Key(x,y),V); you will never compare the references of your keys. Cause everytime you want to acces the map, you will do something like map.get(new Key(x,y));. Therefore your equals() does not need a statement like if (this == obj). It will never occure. Instead of if (getClass() != obj.getClass()) in your equals() better use if (!(obj instanceof this)). It will be valid even for subclasses. So the only thing you need to compare is actually X and Y. So the best equals() implementation in this case would be: public boolean equals (final Object O) { if (!(O instanceof Key)) return false; if (((Key) O).X != X) return false; if (((Key) O).Y != Y) return false; return true; } So in the end your key class is like this: public class Key { public final int X; public final int Y; public Key(final int X, final int Y) { this.X = X; this.Y = Y; } public boolean equals (final Object O) { if (!(O instanceof Key)) return false; if (((Key) O).X != X) return false; if (((Key) O).Y != Y) return false; return true; } public int hashCode() { return (X << 16) + Y; } } You can give your dimension indices X and Y a public access level, due to the fact they are final and do not contain sensitive information. I’m not a 100% sure whether private access level works correctly in any case when casting the Object to a Key. If you wonder about the finals, I declare anything as final which value is set on instancing and never changes – and therefore is an object constant. You can’t have an hash map with multiple keys, but you can have an object that takes multiple parameters as the key. Create an object called Index that takes an x and y value. public class Index { private int x; private int y; public Index(int x, int y) { this.x = x; this.y = y; } @Override public int hashCode() { return this.x ^ this.y; } @Override public boolean equals(Object obj) { if (this == obj) return true; if (obj == null) return false; if (getClass() != obj.getClass()) return false; Index other = (Index) obj; if (x != other.x) return false; if (y != other.y) return false; return true; } } Then have your HashMap<Index, Value> to get your result. 🙂 Two possibilities. Either use a combined key: class MyKey { int firstIndex; int secondIndex; // important: override hashCode() and equals() } Or a Map of Map: Map<Integer, Map<Integer, Integer>> myMap; Implemented in common-collections [MultiKeyMap] () Create a value class that will represent your compound key, such as: class Index2D { int first, second; // overrides equals and hashCode properly here } taking care to override equals() and hashCode() correctly. If that seems like a lot of work, you might consider some ready made generic containers, such as Pair provided by apache commons among others. There are also many similar questions here, with other ideas, such as using Guava’s Table, although allows the keys to have different types, which might be overkill (in memory use and complexity) in your case since I understand your keys are both integers. If they are two integers you can try a quick and dirty trick: Map<String, ?> using the key as i+"#"+j. If the key i+"#"+j is the same as j+"#"+i try min(i,j)+"#"+max(i,j). You could create your key object something like this: public class MapKey { public Object key1; public Object key2; public Object getKey1() { return key1; } public void setKey1(Object key1) { this.key1 = key1; } public Object getKey2() { return key2; } public void setKey2(Object key2) { this.key2 = key2; } public boolean equals(Object keyObject){ if(keyObject==null) return false; if (keyObject.getClass()!= MapKey.class) return false; MapKey key = (MapKey)keyObject; if(key.key1!=null && this.key1==null) return false; if(key.key2 !=null && this.key2==null) return false; if(this.key1==null && key.key1 !=null) return false; if(this.key2==null && key.key2 !=null) return false; if(this.key1==null && key.key1==null && this.key2 !=null && key.key2 !=null) return this.key2.equals(key.key2); if(this.key2==null && key.key2==null && this.key1 !=null && key.key1 !=null) return this.key1.equals(key.key1); return (this.key1.equals(key.key1) && this.key2.equals(key2)); } public int hashCode(){ int key1HashCode=key1.hashCode(); int key2HashCode=key2.hashCode(); return key1HashCode >> 3 + key2HashCode << 5; } } The advantage of this is: It will always make sure you are covering all the scenario’s of Equals as well. NOTE: Your key1 and key2 should be immutable. Only then will you be able to construct a stable key Object. we can create a class to pass more than one key or value and the object of this class can be used as a parameter in map. import java.io.BufferedReader; import java.io.FileReader; import java.io.IOException; import java.util.*; public class key1 { String b; String a; key1(String a,String b) { this.a=a; this.b=b; } } public class read2 { private static final String FILENAME = "E:/studies/JAVA/ReadFile_Project/nn.txt"; public static void main(String[] args) { BufferedReader br = null; FileReader fr = null; Map<key1,String> map=new HashMap<key1,String>(); try { fr = new FileReader(FILENAME); br = new BufferedReader(fr); String sCurrentLine; br = new BufferedReader(new FileReader(FILENAME)); while ((sCurrentLine = br.readLine()) != null) { String[] s1 = sCurrentLine.split(","); key1 k1 = new key1(s1[0],s1[2]); map.put(k1,s1[2]); } for(Map.Entry<key1,String> m:map.entrySet()){ key1 key = m.getKey(); String s3 = m.getValue(); System.out.println(key.a+","+key.b+" : "+s3); } // } } catch (IOException e) { e.printStackTrace(); } finally { try { if (br != null) br.close(); if (fr != null) fr.close(); } catch (IOException ex) { ex.printStackTrace(); } } } } Use a Pair as keys for the HashMap. JDK has no Pair, but you can either use a 3rd party libraray such as or write a Pair taype of your own. You can also use guava Table implementation for this. Table represents a special map where two keys can be specified in combined fashion to refer to a single value. It is similar to creating a map of maps. //create a table Table<String, String, String> employeeTable = HashBasedTable.create(); //initialize the table with employee details employeeTable.put("IBM", "101","Mahesh"); employeeTable.put("IBM", "102","Ramesh"); employeeTable.put("IBM", "103","Suresh"); employeeTable.put("Microsoft", "111","Sohan"); employeeTable.put("Microsoft", "112","Mohan"); employeeTable.put("Microsoft", "113","Rohan"); employeeTable.put("TCS", "121","Ram"); employeeTable.put("TCS", "122","Shyam"); employeeTable.put("TCS", "123","Sunil"); //get Map corresponding to IBM Map<String,String> ibmEmployees = employeeTable.row("IBM");
https://exceptionshub.com/how-to-create-a-hashmap-with-two-keys-key-pair-value.html
CC-MAIN-2021-21
en
refinedweb
freud.density.GaussianDensity¶ The freud.density module is intended to compute a variety of quantities that relate spatial distributions of particles with other particles. In this notebook, we demonstrate freud’s Gaussian density calculation, which provides a way to interpolate particle configurations onto a regular grid in a meaningful way that can then be processed by other algorithms that require regularity, such as a Fast Fourier Transform. [1]: import numpy as np from scipy import stats import freud import matplotlib.pyplot as plt To illustrate the basic concept, consider a toy example: a simple set of point particles with unit mass on a line. For analytical purposes, the standard way to accomplish this would be using Dirac delta functions. [2]: n_p = 10000 np.random.seed(129) x = np.linspace(0, 1, n_p) y = np.zeros(n_p) points = np.random.rand(10) y[(points*n_p).astype('int')] = 1 plt.plot(x, y); plt.show() However, delta functions can be cumbersome to work with, so we might instead want to smooth out these particles. One option is to instead represent particles as Gaussians centered at the location of the points. In that case, the total particle density at any point in the interval \([0, 1]\) represented above would be based on the sum of the densities of those Gaussians at those points. [3]: # Note that we use a Gaussian with a small standard deviation # to emphasize the differences on this small scale dists = [stats.norm(loc=i, scale=0.1) for i in points] y_gaussian = 0 for dist in dists: y_gaussian += dist.pdf(x) plt.plot(x, y_gaussian); plt.show() The goal of the GaussianDensity class is to perform the same interpolation for points on a 2D or 3D grid, accounting for Box periodicity. [4]: N = 1000 # Number of points L = 10 # Box length box, points = freud.data.make_random_system(L, N, is2D=True, seed=0)_2<< The effects are much more striking if we explicitly construct our points to be centered at certain regions. [5]: N = 1000 # Number of points L = 10 # Box length box = freud.box.Box.square(L) centers = np.array([[L/4, L/4, 0], [-L/4, L/4, 0], [L/4, -L/4, 0], [-L/4, -L/4, 0]]) points = [] for center in centers: points.append(np.random.multivariate_normal(center, cov=np.diag([1, 1, 0]), size=(int(N/4),))) points = box.wrap(np.concatenate(points))_3<<
https://freud.readthedocs.io/en/fix-rtd-libgfortran-version/gettingstarted/examples/module_intros/density.GaussianDensity.html
CC-MAIN-2021-21
en
refinedweb
Citrix XenApp 7.15 automating logon By ajorgensen, in AutoIt General Help and Support Recommended Posts Recently Browsing 0 members No registered users viewing this page. Similar Content - - Sandy89 Hi, I have a script that uses image identification for selection, and it works fine in my local environment. but when I try running it in citrix desktop, images are not getting identified. I didn't install autoIT in citrix since I don't have admin rights, but copied the entire application folder with images and .dll files into a folder in citrix. can anyone help to resolve this? - By BigDaddyO I have a bunch of scripts that I need to run on systems that are either accessed from RDP or Citrix. The problem I have had all along is that if you close the RDP or Citrix screen then the scripts will fail even if the user is still logged into the system you were connected to. I finally found something that will tell me if the session is still active but it's a command line tool called qwinsta.exe. I prefer not to do stdoutread if there is any other way so I'm wondering if anyone has an idea on how to get the session state for the currently logged in user as this script does but not using stdoutread? $RunFrom = EnvGet("Sessionname") ConsoleWrite("Active SessionName = " & $RunFrom & @CRLF & @CRLF) Local $iPID = Run('"C:\Windows\System32\qwinsta.exe" ' & @UserName, "", @SW_HIDE, 2) ProcessWaitClose($iPID) ;Need to wait for it to finish before we get the StdOutput values $sOutput = StdoutRead($iPID) ;Retrieve whatever returned ConsoleWrite("Active StdOutRead = " & @CRLF & $sOutput & @CRLF & @CRLF) ;---------------------------------------------------------------------------------------- Sleep(15000) ;Need to disconnect at this point so we can see what happens next!!! ;---------------------------------------------------------------------------------------- $RunFrom = EnvGet("Sessionname") ConsoleWrite("Disconnected SessionName = " & $RunFrom & @CRLF & @CRLF) ;After Lock, again get session name and session state and write to console Local $iPID = Run('"C:\Windows\System32\qwinsta.exe" ' & @UserName, "", @SW_HIDE, 2) ProcessWaitClose($iPID) ;Need to wait for it to finish before we get the StdOutput values $sOutput = StdoutRead($iPID) ;Retrieve whatever returned ConsoleWrite("Disconnected StdOutRead = " & @CRLF & $sOutput & @CRLF) Below is what i'm seeing returned by the script. What I need is just the STATE field. From RDP: Active SessionName = RDP-Tcp#0 Active StdOutRead = SESSIONNAME USERNAME ID STATE TYPE DEVICE >rdp-tcp#0 MyUsername 3 Active rdpwd Disconnected SessionName = RDP-Tcp#0 Disconnected StdOutRead = SESSIONNAME USERNAME ID STATE TYPE DEVICE > MyUsername 3 Disc From Citrix: Active SessionName = ICA-CGP#9 Active StdOutRead = SESSIONNAME USERNAME ID STATE TYPE DEVICE >ica-cgp#9 MyUsername 43 Active wdica Disconnected SessionName = ICA-CGP#9 Disconnected StdOutRead = SESSIONNAME USERNAME ID STATE TYPE DEVICE > MyUsername 43 Disc Thanks, Mike - By MuffinMan to load AutoIT and sciTE on the Citrix server, but I have not heard back from the admin yet. I do have a folder on the Citrix server where I can copy my EXEs to for testing. I'm using straight AutoIT help examples below to show the issues I am seeing so that they will be easy to recreate. When I compile the WinList HelpFile Example and run it from the Citrix server it sees my browser instances as windows: #include <MsgBoxConstants.au3> Example() Func Example() ; Retrieve a list of window handles. Local $aList = WinList() ; Loop through the array displaying only visable windows with a title. For $i = 1 To $aList[0][0] If $aList[$i][0] <> "" And BitAND(WinGetState($aList[$i][1]), 2) Then MsgBox($MB_SYSTEMMODAL, "", "Title: " & $aList[$i][0] & @CRLF & "Handle: " & $aList[$i][1]) EndIf Next EndFunc ;==>Example But when I compile and run one of the HelpFile _IEAttach examples (below) and run it from the Citrix server, it immediately errors out with: Line 201 (File "M:StickyNotesIEInstance.exe" ): Error: Variable must be of type "Object". #include <IE.au3> #include <MsgBoxConstants.au3> Local $aIE[1] $aIE[0] = 0 Local $i = 1, $oIE While 1 $oIE = _IEAttach("", "instance", $i) If @error = $_IEStatus_NoMatch Then ExitLoop ReDim $aIE[$i + 1] $aIE[$i] = $oIE $aIE[0] = $i $i += 1 WEnd MsgBox($MB_SYSTEMMODAL, "Browsers Found", "Number of browser instances in the array: " & $aIE[0]) I am really so close on this and I would really appreciate any help you guys could spare. -
https://www.autoitscript.com/forum/topic/199613-citrix-xenapp-715-automating-logon/
CC-MAIN-2021-21
en
refinedweb
Package for read/write binary file (.adibin format) Project description binfilepy Software library to read and write binary file (.adibin format). Example to write a binary file: from binfilepy import BinFile from binfilepy import CFWBINARY from binfilepy import CFWBCHANNEL with BinFile(filename, "w") as f: header = CFWBINARY() header.setValue(1.0 / 240.0, 2019, 1, 28, 8, 30, 0.0, 0.0, 2, 0) f.setHeader(header) channel1 = CFWBCHANNEL() channel1.setValue("I", "mmHg", 1.0, 0.0) f.addChannel(channel1) channel2 = CFWBCHANNEL("II", "mmHg", 1.0, 0.0) f.addChannel(channel2) chanData = [] d1 = [1, 2, 3, 4, 5, 6, 7, 8, 8, 7, 6, 5, 4, 3, 2, 1] d2 = [8, 7, 6, 5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 8] chanData.append(data1) chanData.append(data2) f.writeHeader() f.writeChannelData(chanData) f.updateSamplesPerChannel(16, True) Example to read a binary file: from binfilepy import BinFile with BinFile(filename, "r") as f: # You must read header first before you can read channel data f.readHeader() # readChannelData() supports reading in random location (Ex: Read 10 secs of data at 1 min mark) data = f.readChannelData(offset=60, length=10, useSecForOffset=True, useSecForLength=True) File open mode Currently, there are three modes to open a file: - "w": For writing to a new file. You need to make sure the file doesn't exist. - "r": For reading from an existing file. You need to make sure the file exists. - "r+": For appending data to an existing file. You need to make sure the file exists. You can use either syntax: with BinFile(filename, "w") as f: ... ... or f = BinFile(filename, "w") f.open() ... ... f.close() Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/binfilepy/0.1.2/
CC-MAIN-2021-17
en
refinedweb
You may have heard that we recently launched our Digital Asset Management (DAM) system. On top of other challenges, that was our first launch targeted at the nondeveloper audience, to whom the UI is a central element of a product’s capabilities. Releasing robust products on time require comprehensive automated testing at all levels: Unit , API, and UI testing. This post describes our solution for end-to-end UI automation testing, which serves both QA engineers and front-end developers in minimizing bugs in releases and in speeding up testing for faster development and shorter release cycles. Objectives Right off the bat, we had these objectives in mind: - Create a JavaScript-based test automation framework with Node.js so that developers can stay in their “comfort zone.” - Facilitate root-cause error analysis through informative reports for a thorough investigation of failed tests. The reports must be human friendly, containing as many relevant details as possible, such as logs, screenshots, and page sources. - Execute in parallel for fast, continuous integration feedback. - Build configurable test suites for various platforms and setups (deployment, pull requests, on demand, and such). - Enhance browser coverage by running tests on various browsers with local and remote setups. - Develop reusable components with a functional interface that mimics user actions and that supports test maintenance and readability. Implementation Let’s now introduce our new test automation framework wdio-allure-ts, which promises to help you quickly start authoring end-to-end JavaScript ui tests with a useful, informative report. First, some background: We considered and tested many other leading testing solutions, such as Allure Reporter, which offers a pleasing and functional design. You can also easily integrate it with Node.js and Jenkins. However, Allure Reporter falls short in several ways: - The Reporter logs WebDriver commands ( GETand POST) that give no clues on failures. - Page-source and browser-console logs of failed tests are absent in the reports. - Most of the errors reflect timeouts only. No clear failure reasons. WebdriverIO, a Node.js testing utility, meets our needs. With a sizable community coupled with simple setups for configuration and customization, WebdriverIO supports third-party integrations (automation testing reports, test runners), Page Object Model, and synchronous execution. Also, we chose TypeScript instead of plain JavaScript for IntelliSense support, which spells fast and seamless development with typed variables.. Subsequently, we blended those tools into our new, open-source solution for end-to-end functional testing: wdio-allure-ts.That solution wraps the most common WebdriverIO actions, generating intuitive error messages in case of failure, custom logs for the Allure Reporter, more validations for enhanced stability, and last, but not least, IntelliSense. Example With Pure WebdriverIO Now take a look at an example of an action that, after validating that a particular element is visible, clicks it, logs every step to the Reporter, and throws meaningful errors for failures, if any. const selector: string = "someSelector"; logger(`Click an element with selector: ${selector}`); try { logger(`Validate element with selector ${selector} is visible`); browser.isVisible(selector); } catch (error) { throw new Error(`Tried to click not visible element, ${error}`); } try { logger("Perform click action"); browser.click(selector); } catch (error) { throw new Error( `Failed to click an element by given selector ${selector}. ${error}` ); } Example with wdio-allure-ts: You can see that our new framework offers the same capabilities with much cleaner code. Because the framework automatically handles logging and error reporting, automation developers can focus on testing the business logic. You can add more report logs with a simple Reporter API for log levels: step, debug, error, info, and warning. The logs are displayed on the terminal and reflected in the report. Example: import { Reporter } from 'wdio-allure-ts'; Reporter.step('Step log entry'); Reporter.error('Error log entry'); Terminal Output wdio-allure-ts. Original Report wdio-allure-ts Report(live report example): That's It. Ready to Try? For you to take a crack at it, we've created a sample project with a quick introduction to our framework and its usage on a real-world application The project contains examples for the following: - Tests for implementation. - Page Object Model. - Allure Reporter, integrated and configured for attaching screenshots, browser logs, and HTML source on test failures. - Configurations for local and CI execution. - Selenium grid for test execution. Just clone the repo, read the README, and create new tests according to the examples. We'd love to hear your thoughts and ideas for integrating wdio-allure-ts into your testing workflow. Send them to us in the Comments section below please.
https://cloudinary.com/blog/testing_functional_ui_the_cloudinary_way
CC-MAIN-2021-17
en
refinedweb
Azure Command Line Interface to manage Azure Service Bus resources? We are excited to announce the Azure CLI 2.0 support for Azure Service Bus. Interact with your Azure Resource Manager and management endpoints of Service Bus using the CLI commands. Manage you Geo-DR configurations or CRUD on your resources and entities, we have all these fully supported. How easy is this? Here's a couple examples, Create a Service Bus Namespace az servicebus namespace create --resource-group myresourcegroup --name mynamespace --location westus --tags tag1=value1 tag2=value2 --sku Standard Delete a topic auth-rule az servicebus topic authorization-rule delete --resource-group myresourcegroup --namespace-name mynamespace --topic-name mytopic --name myauthorule Invoke a failover to your secondary az servicebus georecovery-alias fail-over --resource-group myresourcegroup --namespace-name secondarynamespace --alias myaliasname All these and more. Explore the various resource management operations this functionality provides and let us know what you think. Happy message-ing!
https://docs.microsoft.com/en-us/archive/blogs/servicebus/azure-command-line-interface-to-manage-azure-service-bus-resources
CC-MAIN-2021-17
en
refinedweb
Chapter 10 Clustering 10.1 Motivation Clustering is an unsupervised learning procedure that is used in scRNA-seq data analysis to empirically define groups of cells with similar expression profiles. Its primary purpose is to summarize the data in a digestible format for human interpretation. This allows us to describe population heterogeneity in terms of discrete labels that are easily understood, rather than attempting to comprehend the high-dimensional manifold on which the cells truly reside. After annotation based on marker genes, the clusters can be treated as proxies for more abstract biological concepts such as cell types or states. Clustering is thus a critical step for extracting biological insights from scRNA-seq data. Here, we demonstrate the application of several commonly used methods(3): PCA TSNE UMAP ## altExpNames(0): 10.2 What is the “true clustering”? At this point, it is worth stressing the distinction between clusters and cell types. The former is an empirical construct while the latter is a biological truth (albeit a vaguely defined one). For this reason, questions like “what is the true number of clusters?” are usually meaningless. We can define as many clusters as we like, with whatever algorithm we like - each clustering will represent its own partitioning of the high-dimensional expression space, and is as “real” as any other clustering. A more relevant question is “how well do the clusters approximate the cell types?” Unfortunately, this is difficult to answer given the context-dependent interpretation of biological truth. Some analysts will be satisfied with resolution of the major cell types; other analysts may want resolution of subtypes; and others still may require resolution of different states (e.g., metabolic activity, stress) within those subtypes. Moreover, two clusterings can be highly inconsistent yet both valid, simply partitioning the cells based on different aspects of biology. Indeed, asking for an unqualified “best” clustering is akin to asking for the best magnification on a microscope without any context. It is helpful to realize that clustering, like a microscope, is simply a tool to explore the data. We can zoom in and out by changing the resolution of the clustering parameters, and we can experiment with different clustering algorithms to obtain alternative perspectives of the data. This iterative approach is entirely permissible for data exploration, which constitutes the majority of all scRNA-seq data analyses. 10.3 Graph-based clustering 10.3.1 Background Popularized by its use in Seurat, graph-based clustering is a flexible and scalable technique for clustering large scRNA-seq datasets. We first build a graph where each node is a cell that is connected to its nearest neighbors in the high-dimensional space. Edges are weighted based on the similarity between the cells involved, with higher weight given to cells that are more closely related. We then apply algorithms to identify “communities” of cells that are more connected to cells in the same community than they are to cells of different communities. Each community represents a cluster that we can use for downstream interpretation. The major advantage of graph-based clustering lies in its scalability. It only requires a \(k\)-nearest neighbor search that can be done in log-linear time on average, in contrast to hierachical clustering methods with runtimes that are quadratic with respect to the number of cells. Graph construction avoids making strong assumptions about the shape of the clusters or the distribution of cells within each cluster, compared to other methods like \(k\)-means (that favor spherical clusters) or Gaussian mixture models (that require normality). From a practical perspective, each cell is forcibly connected to a minimum number of neighboring cells, which reduces the risk of generating many uninformative clusters consisting of one or two outlier cells. The main drawback of graph-based methods is that, after graph construction, no information is retained about relationships beyond the neighboring cells1. This has some practical consequences in datasets that exhibit differences in cell density, as more steps through the graph are required to move the same distance through a region of higher cell density. From the perspective of community detection algorithms, this effect “inflates” the high-density regions such that any internal substructure or noise is more likely to cause formation of subclusters. The resolution of clustering thus becomes dependent on the density of cells, which can occasionally be misleading if it overstates the heterogeneity in the data. 10.3.2 Implementation There are several considerations in the practical execution of a graph-based clustering method: - How many neighbors are considered when constructing the graph. - What scheme is used to weight the edges. - Which community detection algorithm is used to define the clusters. For example, the following code uses the 10 nearest neighbors of each cell to construct a shared nearest neighbor graph. Two cells are connected by an edge if any of their nearest neighbors are shared, with the edge weight defined from the highest average rank of the shared neighbors (Xu and Su 2015). The Walktrap method from the igraph package is then used to identify communities. All calculations are performed using the top PCs to take advantage of data compression and denoising. library(scran) g <- buildSNNGraph(sce.pbmc, k=10, use.dimred = 'PCA') clust <- igraph::cluster_walktrap(g)$membership table(clust) ## clust ## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ## 205 508 541 56 374 125 46 432 302 867 47 155 166 61 84 16 Alternatively, users may prefer to use the clusterRows() function from the bluster package. This calls the exact same series of functions when run in graph-based mode with the NNGraphParam() argument; however, it is often more convenient if we want try out different clustering procedures, as we can simply change the second argument to use a different set of parameters or a different algorithm altogether. library(bluster) clust2 <- clusterRows(reducedDim(sce.pbmc, "PCA"), NNGraphParam()) table(clust2) # same as above. ## clust2 ## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ## 205 508 541 56 374 125 46 432 302 867 47 155 166 61 84 16 We assign the cluster assignments back into our SingleCellExperiment object as a factor in the column metadata. This allows us to conveniently visualize the distribution of clusters in a \(t\)-SNE plot (Figure 10.1). library(scater) colLabels(sce.pbmc) <- factor(clust) plotReducedDim(sce.pbmc, "TSNE", colour_by="label") Figure 10.1: \(t\)-SNE plot of the 10X PBMC dataset, where each point represents a cell and is coloured according to the identity of the assigned cluster from graph-based clustering. One of the most important parameters is k, the number of nearest neighbors used to construct the graph. This controls the resolution of the clustering where higher k yields a more inter-connected graph and broader clusters. Users can exploit this by experimenting with different values of k to obtain a satisfactory resolution. # More resolved. g.5 <- buildSNNGraph(sce.pbmc, k=5, use.dimred = 'PCA') clust.5 <- igraph::cluster_walktrap(g.5)$membership table(clust.5) ## clust.5 ## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ## 523 302 125 45 172 573 249 439 293 95 772 142 38 18 62 38 30 16 15 9 ## 21 22 ## 16 13 # Less resolved. g.50 <- buildSNNGraph(sce.pbmc, k=50, use.dimred = 'PCA') clust.50 <- igraph::cluster_walktrap(g.50)$membership table(clust.50) ## clust.50 ## 1 2 3 4 5 6 7 8 9 10 ## 869 514 194 478 539 944 138 175 89 45 The graph itself can be visualized using a force-directed layout (Figure 10.2). This yields a dimensionality reduction result that is closely related to \(t\)-SNE and UMAP, though which of these is the most aesthetically pleasing is left to the eye of the beholder. set.seed(11000) reducedDim(sce.pbmc, "force") <- igraph::layout_with_fr(g) plotReducedDim(sce.pbmc, colour_by="label", dimred="force") Figure 10.2: Force-directed layout for the shared nearest-neighbor graph of the PBMC dataset. Each point represents a cell and is coloured according to its assigned cluster identity. 10.3.3 Other parameters Further tweaking can be performed by changing the edge weighting scheme during graph construction. Setting type="number" will weight edges based on the number of nearest neighbors that are shared between two cells. Similarly, type="jaccard" will weight edges according to the Jaccard index of the two sets of neighbors. We can also disable weighting altogether by using buildKNNGraph(), which is occasionally useful for downstream graph operations that do not support weights. g.num <- buildSNNGraph(sce.pbmc, use.dimred="PCA", type="number") g.jaccard <- buildSNNGraph(sce.pbmc, use.dimred="PCA", type="jaccard") g.none <- buildKNNGraph(sce.pbmc, use.dimred="PCA") All of these g variables are graph objects from the igraph package and can be used with any of the community detection algorithms provided by igraph. We have already mentioned the Walktrap approach, but many others are available to choose from: clust.louvain <- igraph::cluster_louvain(g)$membership clust.infomap <- igraph::cluster_infomap(g)$membership clust.fast <- igraph::cluster_fast_greedy(g)$membership clust.labprop <- igraph::cluster_label_prop(g)$membership clust.eigen <- igraph::cluster_leading_eigen(g)$membership It is then straightforward to compare two clustering strategies to see how they differ. For example, Figure 10.3 suggests that Infomap yields finer clusters than Walktrap while fast-greedy yields coarser clusters. library(pheatmap) # Using a large pseudo-count for a smoother color transition # between 0 and 1 cell in each 'tab'. tab <- table(paste("Infomap", clust.infomap), paste("Walktrap", clust)) ivw <- pheatmap(log10(tab+10), main="Infomap vs Walktrap", color=viridis::viridis(100), silent=TRUE) tab <- table(paste("Fast", clust.fast), paste("Walktrap", clust)) fvw <- pheatmap(log10(tab+10), main="Fast-greedy vs Walktrap", color=viridis::viridis(100), silent=TRUE) gridExtra::grid.arrange(ivw[[4]], fvw[[4]]) Figure 10.3: Number of cells assigned to combinations of cluster labels with different community detection algorithms in the PBMC dataset. Each entry of each heatmap represents a pair of labels, coloured proportionally to the log-number of cells with those labels. Pipelines involving scran default to rank-based weights followed by Walktrap clustering. In contrast, Seurat uses Jaccard-based weights followed by Louvain clustering. Both of these strategies work well, and it is likely that the same could be said for many other combinations of weighting schemes and community detection algorithms. Some community detection algorithms operate by agglomeration and thus can be used to construct a hierarchical dendrogram based on the pattern of merges between clusters. The dendrogram itself is not particularly informative as it simply describes the order of merge steps performed by the algorithm; unlike the dendrograms produced by hierarchical clustering (Section 10.5), it does not capture the magnitude of differences between subpopulations. However, it does provide a convenient avenue for manually tuning the clustering resolution by generating nested clusterings using the cut_at() function, as shown below. ## ## 1 2 3 4 5 ## 3546 221 125 46 47 ## ## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ## 462 374 125 437 46 432 302 173 867 47 155 166 104 40 61 84 46 32 16 16 If cut_at()-like functionality is desired for non-hierarchical methods, bluster provides a mergeCommunities() function to retrospectively tune the clustering resolution. This function will greedily merge pairs of clusters until a specified number of clusters is achieved, where pairs are chosen to maximize the modularity at each merge step. ## ## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 ## 353 152 376 857 48 46 68 661 287 53 541 198 219 126 ## Error in igraph::cut_at(community.louvain, n = 10) : ## Not a hierarchical communitity structure ## merged ## 1 3 4 7 8 9 10 11 12 14 ## 353 528 857 162 661 287 272 541 198 126 10.3.4 Assessing cluster separation When dealing with graphs, the modularity is a natural metric for evaluating the separation between communities/clusters. This is defined as the (scaled) difference between the observed total weight of edges between nodes in the same cluster and the expected total weight if edge weights were randomly distributed across all pairs of nodes. Larger modularity values indicate that there most edges occur within clusters, suggesting that the clusters are sufficiently well separated to avoid edges forming between neighboring cells in different clusters. The standard approach is to report a single modularity value for a clustering on a given graph. This is useful for comparing different clusterings on the same graph - and indeed, some community detection algorithms are designed with the aim of maximizing the modularity - but it is less helpful for interpreting a given clustering. Rather, we use the pairwiseModularity() function from bluster with as.ratio=TRUE, which returns the ratio of the observed to expected sum of weights between each pair of clusters. We use the ratio instead of the difference as the former is less dependent on the number of cells in each cluster. ## [1] 16 16 In this matrix, each row/column corresponds to a cluster and each entry contains the ratio of the observed to total weight of edges between cells in the respective clusters. A dataset containing well-separated clusters should contain most of the observed total weight on the diagonal entries, i.e., most edges occur between cells in the same cluster. Indeed, concentration of the weight on the diagonal of (Figure 10.4) indicates that most of the clusters are well-separated, while some modest off-diagonal entries represent closely related clusters with more inter-connecting edges. library(pheatmap) pheatmap(log2(ratio+1), cluster_rows=FALSE, cluster_cols=FALSE, color=colorRampPalette(c("white", "blue"))(100)) Figure 10.4: Heatmap of the log2-ratio of the total weight between nodes in the same cluster or in different clusters, relative to the total weight expected under a null model of random links. One useful approach is to use the ratio matrix to form another graph where the nodes are clusters rather than cells. Edges between nodes are weighted according to the ratio of observed to expected edge weights between cells in those clusters. We can then repeat our graph operations on this new cluster-level graph to explore the relationships between clusters. For example, we could obtain clusters of clusters, or we could simply create a new cluster-based layout for visualization (Figure 10.5). This is analogous to the “graph abstraction” approach described by Wolf et al. (2017), which can be used to identify trajectories in the data based on high-weight paths between clusters. cluster.gr <- igraph::graph_from_adjacency_matrix(log2(ratio+1), mode="upper", weighted=TRUE, diag=FALSE) # Increasing the weight to increase the visibility of the lines. set.seed(11001010) plot(cluster.gr, edge.width=igraph::E(cluster.gr)$weight*5, layout=igraph::layout_with_lgl) Figure 10.5: Force-based layout showing the relationships between clusters based on the log-ratio of observed to expected total weights between nodes in different clusters. The thickness of the edge between a pair of clusters is proportional to the corresponding log-ratio. Incidentally, some readers may have noticed that all igraph commands were prefixed with igraph::. We have done this deliberately to avoid bringing igraph::normalize into the global namespace. Rather unfortunately, this normalize function accepts any argument and returns NULL, which causes difficult-to-diagnose bugs when it overwrites normalize from BiocGenerics. 10.4 \(k\)-means clustering 10.4.1 Background \(k\)-means clustering is a classic technique that aims to partition cells into \(k\) clusters. Each cell is assigned to the cluster with the closest centroid, which is done by minimizing the within-cluster sum of squares using a random starting configuration for the \(k\) centroids. The main advantage of this approach lies in its speed, given the simplicity and ease of implementation of the algorithm. However, it suffers from a number of serious shortcomings that reduce its appeal for obtaining interpretable clusters: - It implicitly favors spherical clusters of equal radius. This can lead to unintuitive partitionings on real datasets that contain groupings with irregular sizes and shapes. - The number of clusters \(k\) must be specified beforehand and represents a hard cap on the resolution of the clustering.. For example, setting \(k\) to be below the number of cell types will always lead to co-clustering of two cell types, regardless of how well separated they are. In contrast, other methods like graph-based clustering will respect strong separation even if the relevant resolution parameter is set to a low value. - It is dependent on the randomly chosen initial coordinates. This requires multiple runs to verify that the clustering is stable. That said, \(k\)-means clustering is still one of the best approaches for sample-based data compression. In this application, we set \(k\) to a large value such as the square root of the number of cells to obtain fine-grained clusters. These are not meant to be interpreted directly, but rather, the centroids are treated as “samples” for further analyses. The idea here is to obtain a single representative of each region of the expression space, reducing the number of samples and computational work in later steps like, e.g., trajectory reconstruction (Ji and Ji 2016). This approach will also eliminate differences in cell density across the expression space, ensuring that the most abundant cell type does not dominate downstream results. 10.4.2 Base implementation Base R provides the kmeans() function that does as its name suggests. We call this on our top PCs to obtain a clustering for a specified number of clusters in the centers= argument, after setting the random seed to ensure that the results are reproducible. In general, the \(k\)-means clusters correspond to the visual clusters on the \(t\)-SNE plot in Figure 10.6, though there are some divergences that are not observed in, say, Figure 10.1. (This is at least partially due to the fact that \(t\)-SNE is itself graph-based and so will naturally agree more with a graph-based clustering strategy.) set.seed(100) clust.kmeans <- kmeans(reducedDim(sce.pbmc, "PCA"), centers=10) table(clust.kmeans$cluster) ## ## 1 2 3 4 5 6 7 8 9 10 ## 548 46 408 270 539 199 148 783 163 881 colLabels(sce.pbmc) <- factor(clust.kmeans$cluster) plotReducedDim(sce.pbmc, "TSNE", colour_by="label") Figure 10.6: \(t\)-SNE plot of the 10X PBMC dataset, where each point represents a cell and is coloured according to the identity of the assigned cluster from \(k\)-means clustering. If we were so inclined, we could obtain a “reasonable” choice of \(k\) by computing the gap statistic using methods from the cluster package. This is the log-ratio of the expected to observed within-cluster sum of squares, where the expected value is computed by randomly distributing cells within the minimum bounding box of the original data. A larger gap statistic represents a lower observed sum of squares - and thus better clustering - compared to a population with no structure. Ideally, we would choose the \(k\) that maximizes the gap statistic, but this is often unhelpful as the tendency of \(k\)-means to favor spherical clusters drives a large \(k\) to capture different cluster shapes. Instead, we choose the most parsimonious \(k\) beyond which the increases in the gap statistic are considered insignificant (Figure 10.7). It must be said, though, that this process is time-consuming and the resulting choice of \(k\) is not always stable. library(cluster) set.seed(110010101) gaps <- clusGap(reducedDim(sce.pbmc, "PCA"), kmeans, K.max=20) best.k <- maxSE(gaps$Tab[,"gap"], gaps$Tab[,"SE.sim"]) best.k ## [1] 8 Figure 10.7: Gap statistic with respect to increasing number of \(k\)-means clusters in the 10X PBMC dataset. The red line represents the chosen \(k\). A more practical use of \(k\)-means is to deliberately set \(k\) to a large value to achieve overclustering. This will forcibly partition cells inside broad clusters that do not have well-defined internal structure. For example, we might be interested in the change in expression from one “side” of a cluster to the other, but the lack of any clear separation within the cluster makes it difficult to separate with graph-based methods, even at the highest resolution. \(k\)-means has no such problems and will readily split these broad clusters for greater resolution, though obviously one must be prepared for the additional work involved in interpreting a greater number of clusters. set.seed(100) clust.kmeans2 <- kmeans(reducedDim(sce.pbmc, "PCA"), centers=20) table(clust.kmeans2$cluster) ## ## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ## 243 28 202 361 282 166 388 150 114 537 170 96 46 131 162 118 201 257 288 45 colLabels(sce.pbmc) <- factor(clust.kmeans2$cluster) plotTSNE(sce.pbmc, colour_by="label", text_by="label") Figure 10.8: \(t\)-SNE plot of the 10X PBMC dataset, where each point represents a cell and is coloured according to the identity of the assigned cluster from \(k\)-means clustering with \(k=20\). As an aside: if we were already using clusterRows() from bluster, we can easily switch to \(k\)-means clustering by supplying a KmeansParam() as the second argument. This requires the number of clusters as a fixed integer or as a function of the number of cells - the example below sets the number of clusters to the square root of the number of cells, which is an effective rule-of-thumb for vector quantization. set.seed(10000) sq.clusts <- clusterRows(reducedDim(sce.pbmc, "PCA"), KmeansParam(centers=sqrt)) nlevels(sq.clusts) ## [1] 63 10.4.3 Assessing cluster separation The within-cluster sum of squares (WCSS) for each cluster is the most relevant diagnostic for \(k\)-means, given that the algorithm aims to find a clustering that minimizes the WCSS. Specifically, we use the WCSS to compute the root-mean-squared deviation (RMSD) that represents the spread of cells within each cluster. A cluster is more likely to have a low RMSD if it has no internal structure and is separated from other clusters (such that there are not many cells on the boundaries between clusters, which would result in a higher sum of squares from the centroid). ncells <- tabulate(clust.kmeans2$cluster) tab <- data.frame(wcss=clust.kmeans2$withinss, ncells=ncells) tab$rms <- sqrt(tab$wcss/tab$ncells) tab ## wcss ncells rms ## 1 3270 243 3.669 ## 2 2837 28 10.066 ## 3 3240 202 4.005 ## 4 3499 361 3.113 ## 5 4483 282 3.987 ## 6 3325 166 4.476 ## 7 6834 388 4.197 ## 8 3843 150 5.062 ## 9 2277 114 4.470 ## 10 4439 537 2.875 ## 11 2003 170 3.433 ## 12 3342 96 5.900 ## 13 6531 46 11.915 ## 14 2130 131 4.032 ## 15 3627 162 4.731 ## 16 3108 118 5.132 ## 17 4790 201 4.882 ## 18 4663 257 4.260 ## 19 6966 288 4.918 ## 20 1205 45 5.175 (As an aside, the RMSDs of the clusters are poorly correlated with their sizes in Figure 10.8. This highlights the risks of attempting to quantitatively interpret the sizes of visual clusters in \(t\)-SNE plots.) To explore the relationships between \(k\)-means clusters, a natural approach is to compute distances between their centroids. This directly lends itself to visualization as a tree after hierarchical clustering (Figure 10.9). Figure 10.9: Hierarchy of \(k\)-means cluster centroids, using Ward’s minimum variance method. 10.4.4 In two-step procedures As previously mentioned, \(k\)-means is most effective in its role of vector quantization, i.e., compressing adjacent cells into a single representative point. This allows \(k\)-means to be used as a prelude to more sophisticated and interpretable - but expensive - clustering algorithms. The clusterRows() function supports a “two-step” mode where \(k\)-means is initially used to obtain representative centroids that are subjected to graph-based clustering. Each cell is then placed in the same graph-based cluster that its \(k\)-means centroid was assigned to (Figure 10.10). # Setting the seed due to the randomness of k-means. set.seed(0101010) kgraph.clusters <- clusterRows(reducedDim(sce.pbmc, "PCA"), TwoStepParam( first=KmeansParam(centers=1000), second=NNGraphParam(k=5) ) ) table(kgraph.clusters) ## kgraph.clusters ## 1 2 3 4 5 6 7 8 9 10 11 12 ## 191 854 506 541 541 892 46 120 29 132 47 86 Figure 10.10: \(t\)-SNE plot of the PBMC dataset, where each point represents a cell and is coloured according to the identity of the assigned cluster from combined \(k\)-means/graph-based clustering. The obvious benefit of this approach over direct graph-based clustering is the speed improvement. We avoid the need to identifying nearest neighbors for each cell and the construction of a large intermediate graph, while benefiting from the relative interpretability of graph-based clusters compared to those from \(k\)-means. This approach also mitigates the “inflation” effect discussed in Section 10.3. Each centroid serves as a representative of a region of space that is roughly similar in volume, ameliorating differences in cell density that can cause (potentially undesirable) differences in resolution. The choice of the number of \(k\)-means clusters (defined here by the kmeans.clusters= argument) determines the trade-off between speed and fidelity. Larger values provide a more faithful representation of the underlying distribution of cells, at the cost of requiring more computational work by the second-stage clustering procedure. Note that the second step operates on the centroids, so increasing kmeans.clusters= may have further implications if the second-stage procedure is sensitive to the total number of input observations. For example, increasing the number of centroids would require an concomitant increase in k= (the number of neighbors in graph construction) to maintain the same level of resolution in the final output. 10.5 Hierarchical clustering 10.5.1 Background Hierarchical clustering is an ancient technique that aims to generate a dendrogram containing a hierarchy of samples. This is most commonly done by greedily agglomerating samples into clusters, then agglomerating those clusters into larger clusters, and so on until all samples belong to a single cluster. Variants of hierarchical clustering methods primarily differ in how they choose to perform the agglomerations. For example, complete linkage aims to merge clusters with the smallest maximum distance between their elements, while Ward’s method aims to minimize the increase in within-cluster variance. In the context of scRNA-seq, the main advantage of hierarchical clustering lies in the production of the dendrogram. This is a rich summary that describes the relationships between cells and subpopulations at various resolutions and in a quantitative manner based on the branch lengths. Users can easily “cut” the tree at different heights to define clusters with different granularity, where clusters defined at high resolution are guaranteed to be nested within those defined at a lower resolution. (Guaranteed nesting can be helpful for interpretation, as discussed in Section 10.7.) The dendrogram is also a natural representation of the data in situations where cells have descended from a relatively recent common ancestor. In practice, hierachical clustering is too slow to be used for anything but the smallest scRNA-seq datasets. Most variants require a cell-cell distance matrix that is prohibitively expensive to compute for many cells. Greedy agglomeration is also likely to result in a quantitatively suboptimal partitioning (as defined by the agglomeration measure) at higher levels of the dendrogram when the number of cells and merge steps is high. Nonetheless, we will still demonstrate the application of hierarchical clustering here, as it can occasionally be useful for squeezing more information out of datasets with very few cells. 10.5.2 Implementation As the PBMC dataset is too large, we will demonstrate on the 416B dataset instead. #---(2): PCA TSNE ## altExpNames(2): ERCC SIRV We compute a cell-cell distance matrix using the top PCs and we apply hierarchical clustering with Ward’s method. The resulting tree in Figure 10.11 shows a clear split in the population caused by oncogene induction. While both Ward’s method and complete linkage ( hclust()’s default) yield compact clusters, we prefer the former it is less affected by differences in variance between clusters. dist.416b <- dist(reducedDim(sce.416b, "PCA")) tree.416b <- hclust(dist.416b, "ward.D2") # Making a prettier dendrogram. library(dendextend) tree.416b$labels <- seq_along(tree.416b$labels) dend <- as.dendrogram(tree.416b, hang=0.1) combined.fac <- paste0(sce.416b$block, ".", sub(" .*", "", sce.416b$phenotype)) labels_colors(dend) <- c( `20160113.wild`="blue", `20160113.induced`="red", `20160325.wild`="dodgerblue", `20160325.induced`="salmon" )[combined.fac][order.dendrogram(dend)] plot(dend) Figure 10.11: Hierarchy of cells in the 416B data set after hierarchical clustering, where each leaf node is a cell that is coloured according to its oncogene induction status (red is induced, blue is control) and plate of origin (light or dark). To obtain explicit clusters, we “cut” the tree by removing internal branches such that every subtree represents a distinct cluster. This is most simply done by removing internal branches above a certain height of the tree, as performed by the cutree() function. A more sophisticated variant of this approach is implemented in the dynamicTreeCut package, which uses the shape of the branches to obtain a better partitioning for complex dendrograms (Figure 10.12). library(dynamicTreeCut) # minClusterSize needs to be turned down for small datasets. # deepSplit controls the resolution of the partitioning. clust.416b <- cutreeDynamic(tree.416b, distM=as.matrix(dist.416b), minClusterSize=10, deepSplit=1) ## ..cutHeight not given, setting it to 783 ===> 99% of the (truncated) height range in dendro. ## ..done. ## clust.416b ## 1 2 3 4 ## 78 69 24 14 Figure 10.12: Hierarchy of cells in the 416B data set after hierarchical clustering, where each leaf node is a cell that is coloured according to its assigned cluster identity from a dynamic tree cut. This generally corresponds well to the grouping of cells on a \(t\)-SNE plot (Figure 10.13). The exception is cluster 2, which is split across two visual clusters in the plot. We attribute this to a distortion introduced by \(t\)-SNE rather than inappropriate behavior of the clustering algorithm, based on the examination of some later diagnostics. Figure 10.13: \(t\)-SNE plot of the 416B dataset, where each point represents a cell and is coloured according to the identity of the assigned cluster from hierarchical clustering. Note that the series of calls required to obtain the clusters is also wrapped by clusterRows() for more convenient execution. In this case, we can reproduce clust.416b with the following: clust.416b.again <- clusterRows(reducedDim(sce.416b, "PCA"), HclustParam(method="ward.D2", cut.dynamic=TRUE, minClusterSize=10, deepSplit=1)) table(clust.416b.again) ## clust.416b.again ## 1 2 3 4 ## 78 69 24 14 10.5.3 Assessing cluster separation We check the separation of the clusters using the silhouette width (Figure 10.14). For each cell, we compute the average distance to all cells in the same cluster. We also compute the average distance to all cells in another cluster, taking the minimum of the averages across all other clusters. The silhouette width for each cell is defined as the difference between these two values divided by their maximum. Cells with large positive silhouette widths are closer to other cells in the same cluster than to cells in different clusters. Each cluster would ideally contain large positive silhouette widths, indicating that it is well-separated from other clusters. This is indeed the case in Figure 10.14 - and in fact, cluster 2 has the largest width of all, indicating that it is a more coherent cluster than portrayed in Figure 10.13. Smaller widths can arise from the presence of internal subclusters, which inflates the within-cluster distance; or overclustering, where cells at the boundary of a partition are closer to the neighboring cluster than their own cluster. Figure 10.14: Silhouette widths for cells in each cluster in the 416B dataset. Each bar represents a cell, grouped by the cluster to which it is assigned. For a more detailed examination, we identify the closest neighboring cluster for cells with negative widths. This provides a perspective on the relationships between clusters that is closer to the raw data than the dendrogram in Figure 10.12. ## Neighbor ## Cluster 1 2 3 ## 2 0 0 3 ## 3 1 3 0 The average silhouette width across all cells can also be used to choose clustering parameters. The aim is to maximize the average silhouette width in order to obtain well-separated clusters. This can be helpful to automatically obtain a “reasonable” clustering, though in practice, the clustering that yields the strongest separation often does not provide the most biological insight. 10.6 General-purpose cluster diagnostics 10.6.1 Cluster separation, redux We previously introduced the silhouette width in the context of hierarchical clustering (Section 10.5.3). While this can be applied with other clustering algorithms, it requires calculation of all pairwise distances between cells and is not scalable for larger cdatasets. In such cases, we instead use an approximate approach that replaces the average of the distances with the distance to the average (i.e., centroid) of each cluster, with some tweaks to account for the distance due to the within-cluster variance. This is implemented in the approxSilhouette() function from bluster, allowing us to quickly identify poorly separate clusters with mostly negative widths (Figure 10.15). # Performing the calculations on the PC coordinates, like before. sil.approx <- approxSilhouette(reducedDim(sce.pbmc, "PCA"), clusters=clust) sil.data <- as.data.frame(sil.approx) sil.data$closest <- factor(ifelse(sil.data$width > 0, clust, sil.data$other)) sil.data$cluster <- factor(clust) ggplot(sil.data, aes(x=cluster, y=width, colour=closest)) + ggbeeswarm::geom_quasirandom(method="smiley") Figure 10.15: Distribution of the approximate silhouette width across cells in each cluster of the PBMC dataset. Each point represents a cell and colored with the identity of its own cluster if its silhouette width is positive and that of the closest other cluster if the width is negative. Alternatively, we can quantify the degree to which cells from multiple clusters intermingle in expression space. The “clustering purity” is defined for each cell as the proportion of neighboring cells that are assigned to the same cluster. Well-separated clusters should exhibit little intermingling and thus high purity values for all member cells, as demonstrated below in Figure 10.16. Median purity values are consistently greater than 0.9, indicating that most cells in each cluster are primarily surrounded by other cells from the same cluster. pure.pbmc <- neighborPurity(reducedDim(sce.pbmc, "PCA"), clusters=clust) pure.data <- as.data.frame(pure.pbmc) pure.data$maximum <- factor(pure.data$maximum) pure.data$cluster <- factor(clust) ggplot(pure.data, aes(x=cluster, y=purity, colour=maximum)) + ggbeeswarm::geom_quasirandom(method="smiley") Figure 10.16: Distribution of cluster purities across cells in each cluster of the PBMC dataset. Each point represents a cell and colored with the identity of the cluster contributing the largest proportion of its neighbors. The main difference between these two methods is that the purity is ignorant of the intra-cluster variance. This may or may not be desirable depending on what level of heterogeneity is of interest. In addition, the purity will - on average - only decrease with increasing cluster number/resolution, making it less effective for choosing between different clusterings. However, regardless of the chosen method, it is worth keeping in mind that poor separation is not synonymous with poor quality. In fact, poorly separated clusters will often be observed in non-trivial analyses of scRNA-seq data where the aim is to characterize closely related subtypes or states. These diagnostics are best used to guide interpretation by highlighting clusters that require more investigation rather than to rule out poorly separated clusters altogether. 10.6.2 Comparing different clusterings As previously mentioned, clustering’s main purpose is to obtain a discrete summary of the data for further interpretation. The diversity of available methods (and the subsequent variation in the clustering results) reflects the many different “perspectives” that can be derived from a high-dimensional scRNA-seq dataset. It is helpful to determine how these perspectives relate to each other by comparing the clustering results. More concretely, we want to know which clusters map to each other across algorithms; inconsistencies may be indicative of complex variation that is summarized differently by each clustering procedure. A simple yet effective approach for comparing two clusterings of the same dataset is to create a 2-dimensional table of label frequencies (Figure 10.3). We can further improve the interpretability of this table by computing the proportions of cell assignments, which avoids difficulties with dynamic range when visualizing clusters of differing abundances, For example, we may be interested in how our Walktrap clusters from Section 10.3 are redistributed when we switch to using Louvain community detection (Figure 10.17). Note that this heatmap is best interpreted on a row-by-row basis as the proportions are computed within each row and cannot be easily compared between rows. tab <- table(Walktrap=clust, Louvain=clust.louvain) tab <- tab/rowSums(tab) pheatmap(tab, color=viridis::viridis(100), cluster_cols=FALSE, cluster_rows=FALSE) Figure 10.17: Heatmap of the proportions of cells from each Walktrap cluster (rows) across the Louvain clusters (columns) in the PBMC dataset. Each row represents the distribution of cells across Louvain clusters for a given Walktrap cluster. For clusterings that differ primarily in resolution (usually from different parameterizations of the same algorithm), we can use the clustree package to visualize the relationships between them. Here, the aim is to capture the redistribution of cells from one clustering to another at progressively higher resolution, providing a convenient depiction of how clusters split apart (Figure 10.18). This approach is most effective when the clusterings exhibit a clear gradation in resolution but is less useful for comparisons involving theoretically distinct clustering procedures. library(clustree) combined <- cbind(k.50=clust.50, k.10=clust, k.5=clust.5) clustree(combined, prefix="k.", edge_arrow=FALSE) Figure 10.18: Graph of the relationships between the Walktrap clusterings of the PBMC dataset, generated with varying \(k\) during the nearest-neighbor graph construction. (A higher \(k\) generally corresponds to a lower resolution clustering.) The size of the nodes is proportional to the number of cells in each cluster, and the edges depict cells in one cluster that are reassigned to another cluster at a different resolution. The color of the edges is defined according to the number of reassigned cells and the opacity is defined from the corresponding proportion relative to the size of the lower-resolution cluster. We can quantify the agreement between two clusterings by computing the Rand index with bluster’s pairwiseRand(). This is defined as the proportion of pairs of cells that retain the same status (i.e., both cells in the same cluster, or each cell in different clusters) in both clusterings. In practice, we usually compute the adjusted Rand index (ARI) where we subtract the number of concordant pairs expected under random permutations of the clusterings; this accounts for differences in the size and number of clusters within and between clusterings. A larger ARI indicates that the clusters are preserved, up to a maximum value of 1 for identical clusterings. In and of itself, the magnitude of the ARI has little meaning, and it is best used to assess the relative similarities of different clusterings (e.g., “Walktrap is more similar to Louvain than either are to Infomap”). Nonetheless, if one must have a hard-and-fast rule, experience suggests that an ARI greater than 0.5 corresponds to “good” similarity between two clusterings. ## [1] 0.7796 The same function can also provide a more granular perspective with mode="ratio", where the ARI is broken down into its contributions from each pair of clusters in one of the clusterings. This mode is helpful if one of the clusterings - in this case, clust - is considered to be a “reference”, and the aim is to quantify the extent to which the reference clusters retain their integrity in another clustering. In the breakdown matrix, each entry is a ratio of the adjusted number of concoordant pairs to the adjusted total number of pairs. Low values on the diagonal in Figure 10.19 indicate that cells from the corresponding reference cluster in clust are redistributed to multiple other clusters in clust.5. Conversely, low off-diagonal values indicate that the corresponding pair of reference clusters are merged together in clust.5. breakdown <- pairwiseRand(ref=clust, alt=clust.5, mode="ratio") pheatmap(breakdown, color=viridis::magma(100), cluster_rows=FALSE, cluster_cols=FALSE) Figure 10.19: ARI-based ratio for each pair of clusters in the reference Walktrap clustering compared to a higher-resolution alternative clustering for the PBMC dataset. Rows and columns of the heatmap represent clusters in the reference clustering. Each entry represents the proportion of pairs of cells involving the row/column clusters that retain the same status in the alternative clustering. 10.6.3 Evaluating cluster stability A desirable property of a given clustering is that it is stable to perturbations to the input data (Von Luxburg 2010). Stable clusters are logistically convenient as small changes to upstream processing will not change the conclusions; greater stability also increases the likelihood that those conclusions can be reproduced in an independent replicate study. scran uses bootstrapping to evaluate the stability of a clustering algorithm on a given dataset - that is, cells are sampled with replacement to create a “bootstrap replicate” dataset, and clustering is repeated on this replicate to see if the same clusters can be reproduced. We demonstrate below for graph-based clustering on the PCs of the PBMC dataset. myClusterFUN <- function(x) { g <- bluster::makeSNNGraph(x, type="jaccard") igraph::cluster_louvain(g)$membership } pcs <- reducedDim(sce.pbmc, "PCA") originals <- myClusterFUN(pcs) table(originals) # inspecting the cluster sizes. ## originals ## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ## 127 62 48 343 45 56 124 94 848 290 200 459 233 143 541 372 set.seed(0010010100) ratios <- bootstrapStability(pcs, FUN=myClusterFUN, clusters=originals) dim(ratios) ## [1] 16 16 The function returns a matrix of ARI-derived ratios for every pair of original clusters in originals (Figure 10.20), averaged across bootstrap iterations. High ratios indicate that the clustering in the bootstrap replicates are highly consistent with that of the original dataset. More specifically, high ratios on the diagonal indicate that cells in the same original cluster are still together in the bootstrap replicates, while high ratios off the diagonal indicate that cells in the corresponding cluster pair are still separted. pheatmap(ratios, cluster_row=FALSE, cluster_col=FALSE, color=viridis::magma(100), breaks=seq(-1, 1, length.out=101)) Figure 10.20: Heatmap of ARI-derived ratios from bootstrapping of graph-based clustering in the PBMC dataset. Each row and column represents an original cluster and each entry is colored according to the value of the ARI ratio between that pair of clusters. Bootstrapping is a general approach for evaluating cluster stability that is compatible with any clustering algorithm. The ARI-derived ratio between cluster pairs is also more informative than a single stability measure for all/each cluster as the former considers the relationships between clusters, e.g., unstable separation between \(X\) and \(Y\) does not penalize the stability of separation between \(X\) and another cluster \(Z\). Of course, one should take these metrics with a grain of salt, as bootstrapping only considers the effect of sampling noise and ignores other factors that affect reproducibility in an independent study (e.g., batch effects, donor variation). In addition, it is possible for a poor separation to be highly stable, so highly stable cluster may not necessarily represent some distinct subpopulation. 10.7 Subclustering Another simple approach to improving resolution is to repeat the feature selection and clustering within a single cluster. This aims to select HVGs and PCs that are more relevant to internal structure, improving resolution by avoiding noise from unnecessary features. Subsetting also encourages clustering methods to separate cells according to more modest heterogeneity in the absence of distinct subpopulations. We demonstrate with a cluster of putative memory T cells from the PBMC dataset, identified according to several markers (Figure 10.21). g.full <- buildSNNGraph(sce.pbmc, use.dimred = 'PCA') clust.full <- igraph::cluster_walktrap(g.full)$membership plotExpression(sce.pbmc, features=c("CD3E", "CCR7", "CD69", "CD44"), x=I(factor(clust.full)), colour_by=I(factor(clust.full))) Figure 10.21: Distribution of log-normalized expression values for several T cell markers within each cluster in the 10X PBMC dataset. Each cluster is color-coded for convenience. # Repeating modelling and PCA on the subset. memory <- 10L sce.memory <- sce.pbmc[,clust.full==memory] dec.memory <- modelGeneVar(sce.memory) sce.memory <- denoisePCA(sce.memory, technical=dec.memory, subset.row=getTopHVGs(dec.memory, n=5000)) We apply graph-based clustering within this memory subset to obtain CD4+ and CD8+ subclusters (Figure 10.22). Admittedly, the expression of CD4 is so low that the change is rather modest, but the interpretation is clear enough. g.memory <- buildSNNGraph(sce.memory, use.dimred="PCA") clust.memory <- igraph::cluster_walktrap(g.memory)$membership plotExpression(sce.memory, features=c("CD8A", "CD4"), x=I(factor(clust.memory))) Figure 10.22: Distribution of CD4 and CD8A log-normalized expression values within each cluster in the memory T cell subset of the 10X PBMC dataset. For subclustering analyses, it is helpful to define a customized function that calls our desired algorithms to obtain a clustering from a given SingleCellExperiment. This function can then be applied multiple times on different subsets without having to repeatedly copy and modify the code for each subset. For example, quickSubCluster() loops over all subsets and executes this user-specified function to generate a list of SingleCellExperiment objects containing the subclustering results. (Of course, the downside is that this assumes that a similar analysis is appropriate for each subset. If different subsets require extensive reparametrization, copying the code may actually be more straightforward.) set.seed(1000010) subcluster.out <- quickSubCluster(sce.pbmc, groups=clust.full, prepFUN=function(x) { # Preparing the subsetted SCE for clustering. dec <- modelGeneVar(x) input <- denoisePCA(x, technical=dec, subset.row=getTopHVGs(dec, prop=0.1), BSPARAM=BiocSingular::IrlbaParam()) }, clusterFUN=function(x) { # Performing the subclustering in the subset. g <- buildSNNGraph(x, use.dimred="PCA", k=20) igraph::cluster_walktrap(g)$membership } ) # One SingleCellExperiment object per parent cluster: names(subcluster.out) ## [1] "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" ## [16] "16" ## ## 1.1 1.2 1.3 1.4 1.5 1.6 ## 28 22 34 62 11 48 Subclustering is a general and conceptually straightforward procedure for increasing resolution. It can also simplify the interpretation of the subclusters, which only need to be considered in the context of the parent cluster’s identity - for example, we did not have to re-identify the cells in cluster 10 as T cells. However, this is a double-edged sword as it is difficult for practitioners to consider the uncertainty of identification for parent clusters when working with deep nesting. If cell types or states span cluster boundaries, conditioning on the putative cell type identity of the parent cluster can encourage the construction of a “house of cards” of cell type assignments, e.g., where a subcluster of one parent cluster is actually contamination from a cell type in a separate parent] clustree_0.4.3 ggraph_2.0.4 [3] dynamicTreeCut_1.63-1 dendextend_1.14.0 [5] cluster_2.1.0 pheatmap_1.0.12 [7] scater_1.18.3 ggplot2_3.3.2 [9] bluster_1.0.0 scran_1.18] bitops_1.0-6 RColorBrewer_1.1-2 [3] backports_1.2.0 tools_4.0.3 [5] R6_2.5.0 irlba_2.3.3 [7] vipor_0.4.5 colorspace_2.0-0 [9] withr_2.3.0 tidyselect_1.1.0 [11] gridExtra_2.3 processx_3.4.5 [13] compiler_4.0.3 graph_1.68.0 [15] BiocNeighbors_1.8.2 DelayedArray_0.16.0 [17] labeling_0.4.2 bookdown_0.21 [19] checkmate_2.0.0 scales_1.1.1 [21] callr_3.5.1 stringr_1.4.0 [23] digest_0.6.27 rmarkdown_2.5 [25] XVector_0.30.0 pkgconfig_2.0.3 [27] htmltools_0.5.0 sparseMatrixStats_1.2.0 [29] limma_3.46.0 highr_0.8 [31] rlang_0.4.9 DelayedMatrixStats_1.12.1 [33] generics_0.1.0 farver_2.0.3 [35] BiocParallel_1.24.1 dplyr_1.0.2 [37] RCurl_1.98-1.2 magrittr_2.0.1 [39] BiocSingular_1.6.0 GenomeInfoDbData_1.2.4 [41] scuttle_1.0.3 Matrix_1.2-18 [43] Rcpp_1.0.5 ggbeeswarm_0.6.0 [45] munsell_0.5.0 viridis_0.5.1 [47] lifecycle_0.2.0 stringi_1.5.3 [49] yaml_2.2.1 edgeR_3.32.0 [51] MASS_7.3-53 zlibbioc_1.36.0 [53] grid_4.0.3 ggrepel_0.8.2 [55] dqrng_0.2.1 crayon_1.3.4 [57] lattice_0.20-41 graphlayouts_0.7.1 [59] cowplot_1.1.0 beachmat_2.6.2 [61] locfit_1.5-9.4 CodeDepends_0.6.5 [63] knitr_1.30 ps_1.5.0 [65] pillar_1.4.7 igraph_1.2.6 [67] codetools_0.2-18 XML_3.99-0.5 [69] glue_1.4.2 evaluate_0.14 [71] BiocManager_1.30.10 tweenr_1.0.1 [73] vctrs_0.3.5 polyclip_1.10-0 [75] tidyr_1.1.2 gtable_0.3.0 [77] purrr_0.3.4 ggforce_0.3.2 [79] xfun_0.19 rsvd_1.0.3 [81] tidygraph_1.2.0 viridisLite_0.3.0 [83] tibble_3.0.4 beeswarm_0.2.3 [85] statmod_1.4.35 ellipsis_0.3.1 Bibliography Ji, Z., and H. Ji. 2016. “TSCAN: Pseudo-time reconstruction and evaluation in single-cell RNA-seq analysis.” Nucleic Acids Res. 44 (13): e117. Von Luxburg, U. 2010. “Clustering Stability: An Overview.” Foundations and Trends in Machine Learning 2 (3): 235–74. Wolf, F. Alexander, Fiona Hamey, Mireya Plass, Jordi Solana, Joakim S. Dahlin, Berthold Gottgens, Nikolaus Rajewsky, Lukas Simon, and Fabian J. Theis. 2017. “Graph Abstraction Reconciles Clustering with Trajectory Inference Through a Topology Preserving Map of Single Cells.” bioRxiv.. Xu, C., and Z. Su. 2015. “Identification of cell types from single-cell transcriptomes using a novel clustering method.” Bioinformatics 31 (12): 1974–80.
https://bioconductor.org/books/release/OSCA/clustering.html
CC-MAIN-2021-17
en
refinedweb
Hello, I'm new to sage and have been trying to import numpy when starting up sage for a few hours now. I've searched everywhere and so far i've tried the following options: Editing the import_all variable in .sage/ipythonrc import_all numpy I've also tried adding some execute instructions in the ipythonrc execute print "test" execute from numpy import * The thing is, the first line works and writes "test" to the console, but the import statement doesn't seem to work. Finally, I've edited the main function in .sage/ipy_user_conf like this: def main(): from numpy import * o = ip.options ip.ex('from numpy import *') main() But this doesn't seem to work either. When I try to create a new column matrix like this: a=matrix("[1; 2; 3; 4]") I get an error which is solved by manually importing the numpy libs. Is there any other way to automatically load modules at startup? Am I missing something? Thanks in advance for any help.
https://ask.sagemath.org/questions/8929/revisions/
CC-MAIN-2021-17
en
refinedweb
1. Overview In this tutorial, we'll understand the basic need for a container orchestration system. We'll evaluate the desired characteristic of such a system. From that, we'll try to compare two of the most popular container orchestration systems in use today, Apache Mesos and Kubernetes. 2. Container Orchestration Before we begin comparing Mesos and Kubernetes, let's spend some time in understanding what containers are and why we need container orchestration after all. 2.1. Containers A container is a standardized unit of software that packages code and all its required dependencies. Hence, it provides platform independence and operational simplicity. Docker is one of the most popular container platforms in use. Docker leverages Linux kernel features like CGroups and namespaces to provide isolation of different processes. Therefore, multiple containers can run independently and securely. It's quite trivial . 2.2. Container Orchestration So, we've seen how containers can make application deployment reliable and repeatable. But why do we need container orchestration? Now, while we've got a few containers to manage, we're fine with Docker CLI. We can automate some of the simple chores as well. But what happens when we've to manage hundreds of containers? For instance, think of architecture with several microservices, all with distinct scalability and availability requirements. Consequently, things can quickly get out of control, and that's where the benefits of a container orchestration system realize. A container orchestration system treats a cluster of machines with a multi-container application as a single deployment entity. It provides automation from initial deployment, scheduling, updates to other features like monitoring, scaling, and failover. 3. Brief Overview of Mesos Apache Mesos is an open-source cluster manager developed originally at UC Berkeley. It provides applications with APIs for resource management and scheduling across the cluster. Mesos gives us the flexibility to run both containerized and non-containerized workload in a distributed manner. 3.1. Architecture Mesos architecture consists of Mesos Master, Mesos Agent, and Application Frameworks: Let's understand the components of architecture here: - Frameworks: These are the actual applications that require distributed execution of tasks or workload. Typical examples are Hadoop or Storm. Frameworks in Mesos comprise of two primary components: - Scheduler: This is responsible for registering with the Master Node such that the master can start offering resources - Executor: This is the process which gets launched on the agent nodes to run the framework's tasks - Mesos Agents: These are responsible for actually running the tasks. Each agent publishes its available resources like CPU and memory to the master. On receiving tasks from the master, they allocate required resources to the framework's executor. - Mesos Master: This is responsible for scheduling tasks received from the Frameworks on one of the available agent nodes. Master makes resource offers to Frameworks. Framework's scheduler can choose to run tasks on these available resources. 3.2. Marathon As we just saw, Mesos is quite flexible and allows frameworks to schedule and execute tasks through well defined APIs. However, it's not convenient to implement these primitives directly, especially when we want to schedule custom applications. For instance, orchestrating applications packaged as containers. This is where a framework like Marathon can help us. Marathon is a container orchestration framework which runs on Mesos. In this regard, Marathon acts as a framework for the Mesos cluster. Marathon provides several benefits which we typically expect from an orchestration platform like service discovery, load balancing, metrics, and container management APIs. Marathon treats a long-running service as an application and an application instance as a task. A typical scenario can have multiple applications with dependencies forming what is called Application Groups. 3.3. Example So, let's see how we can use Marathon to deploy our simple Docker image we created earlier. Note that installing a Mesos cluster can be little involved and hence we can use a more straightforward solution like Mesos Mini. Mesos Mini enables us to spin up a local Mesos cluster in a Docker environment. It includes a Mesos Master, single Mesos Agent, and Marathon. Once we've Mesos cluster with Marathon up and running, we can deploy our container as a long-running application service. All we need a small JSON application definition: #hello-marathon.json { "id": "marathon-demo-application", "cpus": 1, "mem": 128, "disk": 0, "instances": 1, "container": { "type": "DOCKER", "docker": { "image": "hello_world:latest", "portMappings": [ { "containerPort": 9001, "hostPort": 0 } ] } }, "networks": [ { "mode": "host" } ] } Let's understand what exactly is happening here: - We have provided an id for our application - Then, we defined the resource requirements for our application - We also defined how many instances we'd like to run - Then, we've provided the container details to launch an app from - Finally, we've defined the network mode for us to be able to access the application We can launch this application using the REST APIs provided by Marathon: curl -X POST \ \ -d @hello-marathon.json \ -H "Content-type: application/json" 4. Brief Overview of Kubernetes Kubernetes is an open-source container orchestration system initially developed by Google. It's now part of Cloud Native Computing Foundation (CNCF). It provides a platform for automating deployment, scaling, and operations of application container across a cluster of hosts. 4.1. Architecture Kubernetes architecture consists of a Kubernetes Master and Kubernetes Nodes: Let's go through the major parts of this high-level architecture: - Kubernetes Master: The master is responsible for maintaining the desired state of the cluster. It manages all nodes in the cluster. As we can see, the master is a collection of three processes: - kube-apiserver: This is the service that manages the entire cluster, including processing REST operations, validating and updating Kubernetes objects, performing authentication and authorization - kube-controller-manager: This is the daemon that embeds the core control loop shipped with Kubernetes, making the necessary changes to match the current state to the desired state of the cluster - kube-scheduler: This service watches for unscheduled pods and binds them to nodes depending upon requested resources and other constraints - Kubernetes Nodes: The nodes in a Kubernetes cluster are the machines that run our containers. Each node contains the necessary services to run the containers: - kubelet: This is the primary node agent which ensures that the containers described in PodSpecs provided by kube-apiserver are running and healthy - kube-proxy: This is the network proxy running on each node and performs simple TCP, UDP, SCTP stream forwarding or round-robin forwarding across a set of backends - container runtime: This is the runtime where container inside the pods are run, there are several possible container runtimes for Kubernetes including the most widely used, Docker runtime 4.2. Kubernetes Objects In the last section, we saw several Kubernetes objects which are persistent entities in the Kubernetes system. They reflect the state of the cluster at any point in time. Let's discuss some of the commonly used Kubernetes objects: - Pods: Pod is a basic unit of execution in Kubernetes and can consist of one or more containers, the containers inside a Pod are deployed on the same host - Deployment: Deployment is the recommended way to deploy pods in Kubernetes, it provides features like continuously reconciling the current state of pods with the desired state - Services: Services in Kubernetes provide an abstract way to expose a group of pods, where the grouping is based on selectors targetting pod labels There are several other Kubernetes objects which serve the purpose of running containers in a distributed manner effectively. 4.3. Example So, now we can try to launch our Docker container into the Kubernetes cluster. Kubernetes provides Minikube, a tool that runs single-node Kubernetes cluster on a Virtual Machine. We'd also need kubectl, the Kubernetes Command Line Interface to work with the Kubernetes cluster. After we've kubectl and Minikube installed, we can deploy our container on the single-node Kubernetes cluster within Minikube. We need to define the basic Kubernetes objects in a YAML file: # hello-kubernetes.yaml apiVersion: apps/v1 kind: Deployment metadata: name: hello-world spec: replicas: 1 template: metadata: labels: app: hello-world spec: containers: - name: hello-world image: hello-world:latest ports: - containerPort: 9001 --- apiVersion: v1 kind: Service metadata: name: hello-world-service spec: selector: app: hello-world type: LoadBalancer ports: - port: 9001 targetPort: 9001 A detailed analysis of this definition file is not possible here, but let's go through the highlights: - We have defined a Deployment with labels in the selector - We define the number of replicas we need for this deployment - Also, we've provided the container image details as a template for the deployment - We've also defined a Service with appropriate selector - We've defined the nature of the service as LoadBalancer Finally, we can deploy the container and create all defined Kubernetes objects through kubectl: kubectl apply -f yaml/hello-kubernetes.yaml 5. Mesos vs. Kubernetes Now, we've gone through enough context and also performed basic deployment on both Marathon and Kubernetes. We can attempt to understand where do they stand compared to each other. Just a caveat though, it's not entirely fair to compare Kubernetes with Mesos directly. Most of the container orchestration features that we seek are provided by one of the Mesos frameworks like Marathon. Hence, to keep things in the right perspective, we'll attempt to compare Kubernetes with Marathon and not directly Mesos. We'll compare these orchestration systems based on some of the desired properties of such a system. 5.1. Supported Workloads Mesos is designed to handle diverse types of workloads which can be containerized or even non-containerised. It depends upon the framework we use. As we've seen, it's quite easy to support containerized workloads in Mesos using a framework like Marathon. Kubernetes, on the other hand, works exclusively with the containerized workload. Most widely, we use it with Docker containers, but it has support for other container runtimes like Rkt. In the future, Kubernetes may support more types of workloads. 5.2. Support for Scalability Marathon supports scaling through the application definition or the user interface. Autoscaling is also supported in Marathon. We can also scale Application Groups which automatically scales all the dependencies. As we saw earlier, Pod is the fundamental unit of execution in Kubernetes. Pods can be scaled when managed by Deployment, this is the reason pods are invariably defined as a deployment. The scaling can be manual or automated. 5.3. Handling High Availability Application instances in Marathon are distributed across Mesos agents providing high availability. Typically a Mesos cluster consists of multiple agents. Additionally, ZooKeeper provides high availability to the Mesos cluster through quorum and leader election. Similarly, pods in Kubernetes are replicated across multiple nodes providing high availability. Typically a Kubernetes cluster consists of multiple worker nodes. Moreover, the cluster can also have multiple masters. Hence, Kubernetes cluster is capable of providing high availability to containers. 5.4. Service Discovery and Load Balancing Mesos-DNS can provide service discovery and a basic load balancing for applications. Mesos-DNS generates an SRV record for each Mesos task and translates them to the IP address and port of the machine running the task. For Marathon applications, we can also use Marathon-lb to provide port-based discovery using HAProxy. Deployment in Kubernetes creates and destroys pods dynamically. Hence, we generally expose pods in Kubernetes through Service, which provides service discovery. Service in Kubernetes acts as a dispatcher to the pods and hence provide load balancing as well. 5.5 Performing Upgrades and Rollback Changes to application definitions in Marathon is handled as deployment. Deployment supports start, stop, upgrade, or scale of applications. Marathon also supports rolling starts to deploy newer versions of the applications. However, rolling back is as straight forward and typically requires the deployment of an updated definition. Deployment in Kubernetes supports upgrade as well as rollback. We can provide the strategy for Deployment to be taken while relacing old pods with new ones. Typical strategies are Recreate or Rolling Update. Deployment's rollout history is maintained by default in Kubernetes, which makes it trivial to roll back to a previous revision. 5.6. Logging and Monitoring Mesos has a diagnostic utility which scans all the cluster components and makes available data related to health and other metrics. The data can be queried and aggregated through available APIs. Much of this data we can collect using an external tool like Prometheus. Kubernetes publish detailed information related to different objects as resource metrics or full metrics pipelines. Typical practice is to deploy an external tool like ELK or Prometheus+Grafana on the Kubernetes cluster. Such tools can ingest cluster metrics and present them in a much user-friendly way. 5.7. Storage Mesos has persistent local volumes for stateful applications. We can only create persistent volumes from the reserved resources. It can also support external storage with some limitations. Mesos has experimental support for Container Storage Interface (CSI), a common set of APIs between storage vendors and container orchestration platform. Kubernetes offers multiple types of persistent volume for stateful containers. This includes storage like iSCSI, NFS. Moreover, it supports external storage like AWS, GCP as well. The Volume object in Kubernetes supports this concept and comes in a variety of types, including CSI. 5.8. Networking Container runtime in Mesos offers two types of networking support, IP-per-container, and network-port-mapping. Mesos defines a common interface to specify and retrieve networking information for a container. Marathon applications can define a network in host mode or bridge mode. Networking in Kubernetes assigns a unique IP to each pod. This negates the need to map container ports to the host port. It further defines how these pods can talk to each other across nodes. This is implemented in Kubernetes by Network Plugins like Cilium, Contiv. 6. When to Use What? Finally, in comparison, we usually expect a clear verdict! However, it's not entirely fair to declare one technology better than another, regardless. As we've seen, both Kubernetes and Mesos are powerful systems and offers quite competing features. Performance, however, is quite a crucial aspect. A Kubernetes cluster can scale to 5000-nodes while Marathon on Mesos cluster is known to support up to 10,000 agents. In most practical cases, we'll not be dealing with such large clusters. Finally, it boils down to the flexibility and types of workloads that we've. If we're starting afresh and we only plan to use containerized workloads, Kubernetes can offer a quicker solution. However, if we've existing workloads, which are a mix of containers and non-containers, Mesos with Marathon can be a better choice. 7. Other Alternatives Kubernetes and Apache Mesos are quite powerful, but they are not the only systems in this space. There are quite several promising alternatives available to us. While we'll not go into their details, let's quickly list a few of them: - Docker Swarm: Docker Swarm is an open-source clustering and scheduling tool for Docker containers. It comes with a command-line utility to manage a cluster of Docker hosts. It's restricted to Docker containers, unlike Kubernetes and Mesos. - Nomad: Nomad is a flexible workload orchestrator from HashiCorp to manage any containerized or non-containerised application. Nomad enables declarative infrastructure-as-code for deploying applications like Docker container. - OpenShift: OpenShift is a container platform from Red Hat, orchestrated and managed by Kubernetes underneath. OpenShift offers many features on top of what Kubernetes provide like integrated image registry, a source-to-image build, a native networking solution, to name a few. 8. Conclusion To sum up, in this tutorial, we discussed containers and container orchestration systems. We briefly went through two of the most widely used container orchestration systems, Kubernetes and Apache Mesos. We also compared these system based on several features. Finally, we saw some of the other alternatives in this space. Before closing, we must understand that the purpose of such a comparison is to provide data and facts. This is in no way to declare one better than others, and that normally depends on the use-case. So, we must apply the context of our problem in determining the best solution for us.
https://www.baeldung.com/mesos-kubernetes-comparison
CC-MAIN-2021-17
en
refinedweb
In this article, we’ll be looking at how we can use command line arguments in C / C++. Command Line arguments are very useful if you want to pass any input strings to your main program, from the command line. These arguments are passed as parameters to the main() function. Let’s look at how we can use these effectively. Table of Contents Why should we use command line arguments? Often, it is very convenient for us to directly give input to our program. One common way is to use scanf() / getchar(), etc to wait for a user input. But, these calls waste a lot of time in waiting, and requires the user to manually enter the input. We can save a lot of time by simply giving these inputs to our main program! The format will be something like: ./executable input1 input2 The program will automatically store those command-line arguments in special variables, from which we can access them directly! This will only require a one time input, given when we start our program. Let’s look at how we can use them now. Command Line Arguments in C/C++ – The special variables The program will pass the command line arguments to the main() function. In C / C++, the main() function takes in two additional parameters for these arguments. argc-> Argument Count. Gives the number of arguments that we pass (includes the program name also) argv-> Argument Vector. This is a char*array of strings. These are the argument values itself. So, argv[0] is the name of the program itself, and argv[1] … argv[argc-1] will be all our command line arguments. int main(int argc, char* argv[]); To see this in action, let’s take an example. Using command line arguments – A simple example Let’s consider a program which concatenates two strings, given as input. We’ll pass in two command line arguments to our program, so our total argc must be 3 (including the program name). We can write our program like this: #include <iostream> #include <string> using namespace std; string concat_strings(string s1, string s2) { return s1 + s2; } int main(int argc, char* argv[]) { cout << "You have entered " << argc << " arguments:" << "\n"; if (argc != 3) { cerr << "Program is of the form: " << argv[0] << " <inp1> <inp2>\n"; return 1; } string result = concat_strings(argv[1], argv[2]); cout << "Result: " << result << endl; return 0; } If our executable name was test.out, on my linux machine, I run the executable using this command: ./test.out Hello _JournalDev Notice that the arguments are space-separated. So our command-line arguments are: “Hello” and “_JournalDev” Output You have entered 3 arguments: Result: Hello_JournalDev Great! This seems to work as expected, since the first argument is the name of the program itself. Let’s try to run this with 4 arguments now. ./test.out Hello from JournalDev Output You have entered 4 arguments: Program is of the form: ./test.out <inp1> <inp2> Indeed, it gives us the correct error message! Conclusion Hope this article gives you a better understanding of command line arguments. We saw how we can use it to make our lives easier! For similar content, do go through our tutorial section on C++ programming. References - cplusplus.com post on Command Line arguments in C++
https://www.journaldev.com/41869/command-line-arguments-c-plus-plus
CC-MAIN-2021-17
en
refinedweb
This document shows cluster operators and platform administrators how to safely roll out changes across multiple environments using Anthos Config Management. Anthos Config Management can help you avoid errors that affect all of your environments simultaneously. Anthos Config Management lets you manage single clusters, multi-tenant clusters, and multi-cluster Kubernetes configurations by using files stored in a Git repository. Anthos Config Management combines three technologies— Config Sync, Policy Controller, and Config Connector. Config Sync watches for updates to all files in the Git repository and applies changes to all relevant clusters automatically. Policy Controller manages and enforces policies for objects in your clusters. Config Connector uses Google Kubernetes Engine (GKE) custom resources to manage cloud resources. Config Sync configurations can represent several things, including the following: - Standard GKE objects, such as NetworkPolicies resources, DaemonSets resources, or RoleBindings resources. - Google Cloud resources, such as Compute Engine instances or Cloud SQL databases, through Config Connector. - Constraints on the configuration themselves, through Policy Controller. Anthos Config Management is especially suited to deploy configurations, policies, and workloads needed to run the platform that you build on top of Anthos—for example, security agents, monitoring agents, and certificate managers. Although you can deploy user-facing applications with Anthos Config Management, we don't recommend linking their release lifecycle to the release lifecycle of the administrative workloads mentioned earlier. Instead, we recommend that you use a tool dedicated to application deployment, such as a continuous deployment tool, so that application teams can be in charge of their release schedule. Anthos Config Management is a powerful product that can manage many elements, so you need guardrails to avoid errors that have a major impact. This document describes several methods to create guardrails. The first section covers staged rollouts, the second section focuses on tests and validations, and the third section explains how to use Policy Controller to create guardrails. The fourth section shows how to monitor Anthos Config Management deployments. You can use most of the methods discussed in this document, even if you're using only Config Sync and not the full Anthos Config Management product. If you are not using the full Anthos Config Management product but still want to implement the methods involving Policy Controller, you can successfully do so using Gatekeeper. The exceptions to this rule are methods that rely on the Anthos Config Management page in the Google Cloud console, like updating the Anthos Config Management configuration in the Google Cloud console. You can also use several of the methods described in this document at the same time. In the following section, a table indicates which methods are compatible for simultaneous use. Implementing staged rollouts with Anthos Config Management In a multi-cluster environment, which is a common situation for Anthos users, we don't recommend applying a configuration change across all the clusters at the same time. A staged rollout, cluster per cluster—or even namespace per namespace, if you use namespaces as the boundary between applications—is much safer because it reduces the blast radius of any error. Following are several ways to implement staged rollouts with Anthos Config Management: - Use Git commits or tags to manually apply the changes that you want to the clusters. - Use Git branches to automatically apply the changes when the changes are merged. You can use different branches for different groups of clusters. - Use ClusterSelectorand NamespaceSelectorobjects to selectively apply changes to subgroups of clusters or namespaces. All methods for staged rollouts have advantages and disadvantages. The following table shows which of these methods you can use at the same time. The following decision tree helps you decide when to use one of the staged rollout methods. Use Git commits or tags Compared to the other staged rollout methods, using Git commits or tags provides the most control and is the safest. You can use the Anthos Config Management page in the console to update multiple clusters at the same time. Use this method if you want to apply changes to your clusters one by one, and to control exactly when this happens. In this method, you "pin" each cluster to a specific version (either a commit or a tag) of your Anthos Config Management repository. This method is similar to using the Git commit as a container image tag. You implement this method by specifying the commit or the tag in the spec.git.syncRev field of the ConfigManagement custom resource. If you synchronize configs from multiple repositories, you implement this method by updating the RootSync and RepoSync custom resources instead. For more information about the configuration fields, see configuring the Operator. If you manage your ConfigManagement custom resources with a tool like kustomize, you can reduce the amount of manual work required to roll out changes. With such a tool, you only need to change the syncRev parameter in one place, and then selectively apply the new ConfigManagement custom resource to your clusters in the order, and at the pace, that you choose. Additionally, if you are using Anthos Config Management (and not Config Sync), you have access to the Anthos Config Management page in the Google Cloud console. This page lets you update the syncRev parameter for multiple clusters belonging to the same environ at the same time. If you have an automated system to update the Anthos Config Management configuration, we recommend against using the console to change this configuration. For example, the following ConfigManagement definition configures Anthos Config Management to use the 1.2.3 tag: apiVersion: configmanagement.gke.io/v1 kind: ConfigManagement metadata: name: config-management spec: # clusterName is required and must be unique among all managed clusters clusterName: my-cluster git: syncRepo: git@example.com:anthos/config-management.git # Pin the cluster using a tag syncRev: 1.2.3 secretType: ssh If you apply this configuration to your cluster, Anthos Config Management will use the 1.2.3 tag of the example.com:anthos/config-management.git repository. To update a cluster, change the spec.git.syncRev field to the new value for the cluster. This lets you define which clusters get updated and when. If you need to roll back a change, change the spec.git.syncRev field back to its former value. The following diagram illustrates the rollout process for this method. First, you commit changes to the Anthos Config Management repository, and then you update the ConfigManagement definitions on all the clusters: We recommend the following actions: - Use Git commit IDs rather than tags. Because of the way that Git functions, you have a guarantee that they will never change. For example, a git push --forcecan't change the commit that Anthos Config Management is using. This approach is useful for auditing purposes and to track which commit you are using in logs. Additionally, unlike with tags, there's no extra step to commit IDs. - If you prefer using Git tags instead of Git commit IDs, and you're using GitLab, protect the tags to keep them from being moved or deleted. The other major Git solutions do not have this feature. - If you want to update multiple clusters at the same time, you can do that in the Anthos Config Management console page. For you to update multiple clusters at once, they need to be part of the same environ (and be in the same project). Use Git branches If you want changes to be applied to clusters as soon as they are merged in your Git repository, configure Anthos Config Management to use Git branches instead of commits or tags. In this method, you can create multiple long-lived branches in your Git repository, and configure Anthos Config Management in different clusters to read its configuration from different branches. For example, a simple pattern has two branches: - A stagingbranch for non-production clusters. - A masterbranch for production clusters. For non-production clusters, create the ConfigManagement object with the spec.git.syncBranch field set to staging. For production clusters, create the ConfigManagement object with the spec.git.syncBranch parameter set to master. If you synchronize configs from multiple repositories, make this configuration in the RootSync and RepoSync custom resources instead. For more information, see configuring the Operator. For example, the following ConfigManagement definition configures Anthos Config Management to use the master branch: apiVersion: configmanagement.gke.io/v1 kind: ConfigManagement metadata: name: config-management spec: # clusterName is required and must be unique among all managed clusters clusterName: my-cluster git: syncRepo: git@example.com:anthos/config-management.git # This cluster will apply the configuration # available on the master branch. syncBranch: master secretType: ssh The following diagram illustrates the rollout process for this method: You can adapt this pattern to specific needs, using more than two branches, or using branches that are mapped to something other than environments. If you need to roll back a change, use the git revert command to create a new commit on the same branch that reverts the changes from the previous commit. We recommend the following actions: - When dealing with multiple clusters, use at least two Git branches to help to distinguish between production and non-production clusters. - Most Git solutions let you use the protected branches feature to prevent deletions or unreviewed changes of those branches. For more information, see the documentation for GitHub, GitLab, and Bitbucket. Use ClusterSelector and NamespaceSelector objects Git branches are a good way of doing a staged rollout of changes across multiple clusters that will eventually all have the same policies. However, if you want to roll out a change only to a subset of clusters or of namespaces, then use the ClusterSelector and NamespaceSelector objects. These objects have a similar goal: they let you apply objects only to clusters or namespaces that have specific labels. For example: - By using ClusterSelectorobjects, you can apply different policies to clusters, depending on which country they are located in, for various compliance regimes. - By using NamespaceSelectorobjects, you can apply different policies to namespaces used by an internal team and by an external contractor. ClusterSelector and NamespaceSelector objects also let you implement advanced testing and release methodologies, such as the following: - Canary releases of policies, where you deploy a new policy to a small subset of clusters and namespaces for a long time to study the policy's impact. - A/B testing, where you deploy different versions of the same policy to different clusters to study the difference of the policy versions' impact and then choose the best one to deploy everywhere. For example, imagine an organization with several production clusters. The platform team has already created two categories of production clusters, called canary-prod and prod, using Anthos Config Management, Cluster, and ClusterSelector objects (see configuring only a subset of clusters). The platform team wants to roll out a policy with Policy Controller to enforce the presence of a team label on namespaces in order to identify which team each namespace belongs to. They have already rolled out a version of this policy in dry run mode, and now they want to enforce it on a small number of clusters. Using ClusterSelector objects, they create two different K8sRequiredLabels resources that are applied to different clusters. The K8sRequiredLabelsresource is applied to clusters of type prod, with an enforcementActionparameter set to dryrun: apiVersion: constraints.gatekeeper.sh/v1beta1 kind: K8sRequiredLabels metadata: name: ns-must-have-team annotations: configmanagement.gke.io/cluster-selector: prod Spec: enforcementAction: dryrun match: kinds: - apiGroups: [""] kinds: ["Namespace"] parameters: labels: - key: "team" The K8sRequiredLabelsresource is applied to clusters of type canary-prod, without the enforcementActionparameter, meaning that the policy is actually enforced: apiVersion: constraints.gatekeeper.sh/v1beta1 kind: K8sRequiredLabels metadata: name: ns-must-have-team annotations: configmanagement.gke.io/cluster-selector: canary-prod spec: match: kinds: - apiGroups: [""] kinds: ["Namespace"] parameters: labels: - key: "team" The configmanagement.gke.io/cluster-selector annotation allows the team to enforce the policy only in clusters of type canary-prod, preventing any unintended side-effects from spreading to the whole production fleet. For more information about the dry run feature of Policy Controller, see creating constraints. We recommend the following actions: - Use ClusterSelectorand NamespaceSelectorobjects if you need to apply a configuration change to only a subset of clusters or namespaces indefinitely or for a long time. - If you roll out a change by using selectors, be very careful. If you use Git commits, any error affects only one cluster a time, because you're rolling out cluster by cluster. But if you use Git branches, any error can affect all the clusters that use that branch. If you use selectors, error can affect all clusters at once. Implementing reviews, tests, and validations One advantage of Anthos Config Management is that it manages everything declaratively—Kubernetes resources, cloud resources, and policies. This means that files in a source control management system represent the resources (Git files, in the case of Anthos Config Management). This characteristic lets you implement development workflows that you already use for an application's source code: reviews and automated testing. Implement reviews Because Anthos Config Management is based on Git, you can use your preferred Git solution to host the Anthos Config Management repository. Your Git solution probably has a code review feature, which you can use to review changes made to the Anthos Config Management repository. The best practices for reviewing changes to the Anthos Config Management repository are the same as with a normal code review, as follows: - Practice trunk-based development. - Work in small batches. - Ensure that code review is done synchronously or at least promptly. - The person who reviews and approves the change should not be the same person who suggested the change. Because of the sensitivity of the Anthos Config Management codebase, we also recommend that, if possible with your Git solution, you make the following configurations: - Protect the branches that are directly used by clusters. See the documentation for GitHub, GitLab, and Bitbucket. GitLab also lets you protect tags. - After the branches are protected, you can refine the approvals that are needed to merge a change: - On GitHub, enable required reviews, and optionally use the CODEOWNERS file to control who can approve changes for subsections of the repository. - For GitLab, follow the recommendations for managing who can approve merge requests in the Best practices for policy management with Anthos Config Management and GitLab article. - On Bitbucket, combine default reviewers with default merge checks. Optionally, use a Code Owners plugin for Bitbucket Server is available on the Atlassian Marketplace to control who can approve changes for subsections of the repository. By using these different features, you can enforce approvals for each change request to the Anthos Config Management codebase. For example, you can ensure that each change is approved at least by a member of the platform team (who operates the fleet of clusters), and by a member of the security team (who is in charge of defining and implementing security policies). We recommend the following action: - Enforce peer reviews on the Anthos Config Management repository, and protect the Git branches that are used by your clusters. Implement automated tests A common best practice when working on a codebase is to implement continuous integration. This means that you configure automated tests to run when a change request is created or updated. Automated tests can catch many errors before a human reviews the change request. This tightens the feedback loop for the developer. You can implement the same idea, using the same tools, for the Anthos Config Management repository. For example, a good place to start is to run the nomos vet command automatically on new changes. This command validates that your Anthos Config Management repository's syntax is valid. You can implement this test by using Cloud Build by following the validating configs tutorial. You can integrate Cloud Build with the following options: - Bitbucket, by using build triggers. - GitHub, by using the Google Cloud Build GitHub application. Build triggers are also available for GitHub, but the GitHub application is the preferred method of integration. As you can see in the validating configs tutorial, the test is done by using a container image. You can therefore implement the test in any continuous integration solution that runs containers, not only Cloud Build. Specifically, you can implement it with GitLab CI, following this example, which also includes tests for Policy Controller. To tighten the feedback loop even more, you can ask that users run the nomos vet command as a Git pre-commit hook. One caveat is that some users might not have access to the Kubernetes clusters managed by Anthos Config Management, and they might not be able to run the full validation from their workstation. Run the nomos vet --clusters "" command to restrict the validation to semantic and syntactic checks. You can implement any other test that you think is necessary or useful. If you use Policy Controller, you can implement automated tests of suggested changes against its policies, as outlined in Test changes against Policy Controller policies. We recommend the following action: - Implement tests in a continuous integration pipeline. Run at least the nomos vetcommand on all suggested changes. Using Policy Controller Policy Controller is a Kubernetes dynamic admission controller. When you install and configure Policy Controller, Kubernetes can reject changes that don't comply with predefined rules, which are called policies. Following are two example use cases of Policy Controller: - Enforce the presence of specific labels on Kubernetes objects, - Prevent the creation of privileged pods. A library of policy templates is available for implementing the most commonly used policies, but you can write your own with a powerful language called Rego. Using Policy Controller, you can, for example, restrict the hostnames that users can configure in an ingress (for more information, see this tutorial). Like Config Sync, Policy Controller is part of the Anthos Config Management product. Policy Controller and Config Sync have different, but complementary, use cases, as follows: - Config Sync is a GitOps-style tool that lets you create any Kubernetes object, potentially in multiple clusters at the same time. As mentioned in the introduction, Config Sync is especially useful for managing policies. - Policy Controller lets you define policies for objects that can be created in Kubernetes. You define these policies in custom resources, which are Kubernetes objects themselves. The preceding features create a bidirectional relationship between the two applications. You can use Config Sync to create the policies that are enforced by Policy Controller, and you can use those policies to control exactly which objects that Config Sync (or any other process) can create, as shown in the following diagram: The Git repository, Config Sync, Policy Controller, Kubernetes, a continuous deployment (CD) system, and users all interact with each other, in the following ways: - Users interact with the Anthos Config Management Git repository to create, update, or delete Kubernetes objects. - Config Sync reads its configuration from the Anthos Config Management Git repository. - Config Sync interacts with the Kubernetes API server to create objects, which include policies for Policy Controller. - The CD system also interacts with the Kubernetes API server to create objects. It can create constraints for Policy Controller. However, we recommend that you use Anthos Config Management for this use case because it gives you a centralized place to manage and test the constraints. - The Kubernetes API server either accepts or rejects the creation of objects by Config Sync and by the CD system, based on the response from Policy Controller. - Policy Controller gives that response based on the policies that it reads from the Kubernetes API server. The following diagram illustrates these interactions: Policy Controller can prevent policy violations that escape human reviewers and automated tests, so you can consider it the last line of defense for your Kubernetes clusters. Policy Controller also becomes more useful as the number of human reviewers grows for Anthos Config Management. Due to the phenomenon of social loafing, the more reviewers that you have, the less likely it is that they are consistently enforcing the rules defined in your organization. Test changes against Policy Controller policies If you use Policy Controller, you can add a few steps to your continuous integration pipeline (see Implement automated tests) to automatically test suggested changes against policies. Automating the tests gives quicker and more visible feedback to the person who suggests the change. If you don't test the changes against the policies in the continuous integration pipeline, then you have to rely on the system described in Monitor rollouts to be alerted of Anthos Config Management syncing errors. Testing the changes against the policies exposes any violation clearly, and early, to the person who suggests the change. You can implement this test in Cloud Build by following the Using Policy Controller in a CI pipeline tutorial. As mentioned earlier in Implement automated tests, you can integrate Cloud Build with GitHub and Bitbucket. You can also implement this test with GitLab CI. See this repository for an implementation example. We recommend the following action: - If you use Policy Controller, validate the suggested changes against its policies in your continuous integration pipeline. Monitoring rollouts Even if you implement all the guardrails that this document covers, errors can still occur. Following are two common types of errors: - Errors that pose no problem to Config Sync itself, but prevent your workloads from working properly, such as an overly restrictive NetworkPolicy that prevents components of your workload from communicating. - Errors that make it impossible for Config Sync to apply changes to a cluster, such as an invalid Kubernetes manifest, or an object rejected by an admission controller. The methods explained earlier should catch most of these errors. Detecting the errors described in the first preceding bullet is almost impossible at the level of Anthos Config Management, because this requires understanding the state of each of your workloads. For this reason, detecting these errors is best done by your existing monitoring system that alerts you when an application is misbehaving. Detecting the errors described in the second preceding bullet—which should be rare if you have implemented all the guardrails—requires a specific setup. By default, Anthos Config Management writes errors to its logs (which you will find, by default, in Cloud Logging). Errors are also displayed in the Anthos Config Management console page. Neither logs nor the console are usually enough to detect errors, because you probably don't monitor them at all times. The simplest way to automate error detection is to run the nomos status command, which tells you if there's an error in a cluster. You can also set up a more advanced solution with automatic alerts for errors. Anthos Config Management exposes metrics in the Prometheus format. You can use Prometheus to scrape these metrics, you can configure the import of Prometheus metrics into Cloud Monitoring, or you can use any monitoring solution compatible with the Prometheus format. For more information, see monitoring Anthos Config Management. After you have the Anthos Config Management metrics in your monitoring system, create an alert to notify you when the gkeconfig_monitor_errors metric is greater than 0. For more information, see managing alerting policies for Cloud Monitoring, or alerting rules for Prometheus. Summary of mechanisms for safe rollouts with Anthos Config Management The following table summarizes the various mechanisms described earlier in this document. None of these mechanisms is exclusive. You can choose to use some of them or all of them, for different purposes. Rollout strategy example This section uses the concepts introduced in the rest of this article to help you create an end-to-end rollout strategy across all the clusters in your organization. This strategy assumes that you have separate environs for development, staging, and production (as shown in Environ Example 1 - Approach 1). In this scenario, you configure each cluster to synchronize with the Anthos Config Management Git repository using a specific Git commit. Deploying a change to a given environ is a 4-step process: - You update a single (the "canary") cluster in the environ to use the new commit first. - You validate that everything works as expected by running tests and monitoring the rollout. - You update the rest of the clusters in the environ. - You validate again that everything works as expected. To deploy a change across all your clusters, you repeat this process for each environ. You can technically apply this method with any Git commit, from any branch. However, we suggest that you adopt the following process to identify problems early in the review process: - When someone opens a change request in the Anthos Config Management Git repository, deploy that change to one of the development clusters. - If the change request is accepted and merged in your main branch, run the full deployment across all environs as described earlier. While some changes might target only a specific environ, we recommend that you deploy all changes to all environs eventually. This strategy eliminates the problem of tracking which environ should sync with which commit. Pay special attention to the changes that target only the production environ because proper testing will not have been possible in previous environs. For example, this means waiting longer for issues to surface between deploying to the canary clusters and to the rest of the clusters. To summarize, a full end-to-end deployment looks like this: - Someone opens a change request. - Automated tests and validations run, and a manual review is done. - You trigger a job manually to deploy the change to the canary cluster in the development environ. Automated end-to-end tests run in this cluster. - If everything is OK, you merge the change request on the main branch. - The merge triggers an automated job to deploy the new main branch tip commit to the canary cluster in the development environ. Automated end-to-end tests run in this cluster (to detect potential incompatibilities between two change requests that have been created and merged approximately at the same time). - The following jobs run one after the other (you trigger them manually, or after a predefined time to allow for user reports of regressions): - Deploy to all the clusters of the development environ. - Run tests and validations in the clusters of the development environ. - Deploy to the canary cluster of the staging environ. - Run tests and validations in the canary cluster of the staging environ. - Deploy to all the clusters of the staging environ. - Run tests and validations in the clusters of the staging environ. - Deploy to the canary cluster of the production environ. - Run tests and validations in the canary cluster of the production environ. - Deploy to all the clusters of the production environ. - Run tests and validations in the clusters of the production environ. What's next - Read about monitoring Anthos Config Management. - Read about environs. - Learn how to validate your app against company policies in a continuous integration pipeline. - Read about the best practices for policy management with Anthos Config Management and GitLab. - Try out other Google Cloud features for yourself. Have a look at our tutorials.
https://cloud.google.com/solutions/safe-rollouts-with-anthos-config-management?hl=ar
CC-MAIN-2021-17
en
refinedweb
AWS News Blog New – Amazon Kinesis Data Analytics for Java Customers are using Amazon Kinesis to collect, process, and analyze real-time streaming data. In this way, they can react quickly to new information from their business, their infrastructure, or their customers. For example, Epic Games ingests more than 1.5 million game events per second for its popular online game, Fortnite. With Amazon Kinesis Data Analytics you can process data in real-time using standard SQL. While SQL provides an easy way to quickly query large volumes of streaming data without learning new frameworks or languages, many customers also want to build more sophisticated data processing applications using general-purpose programming languages. Using Java with Amazon Kinesis Data Analytics Today, we are introducing support for Java in Amazon Kinesis Data Analytics. Now, developers can use their own Java code to create powerful real-time applications that process streaming data like continuously transforming and loading data into their data lakes, generating metrics to feed real-time gaming leaderboards, applying machine learning models to data streams from connected devices, and more. To use this new functionality, developers build applications using open source libraries which include built-in operators for common data processing functions that allow applications to organize, transform, aggregate, and analyze data at any scale. These libraries are both open source and you can run them anywhere: - Apache Flink, an open source framework and engine for processing data streams. - AWS SDK for Java, providing Java APIs for many AWS services. Developers can use these Java libraries within their Integrated Development Environment (IDE) of choice. Using these libraries, the following AWS services can be integrated with as little as, and the ability to build custom integrations. Building a Kinesis Data Streams Java Application I prepared a simple Java application that implements the “mandatory” word count example for data processing. I send some paragraphs of text in input and I get, every five seconds, the number of times each word is being used as output. First, I create two Kinesis Data Streams: - TextInputStream, where I am going to send my input records - WordCountOutputStream, where I am going to read the output of the Java application Here is the code of the word-count Java application. To read and write from Kinesis Data Streams, I am using the Kinesis Connector from the Apache Flink project. public class StreamingJob { private static final String region = "us-east-1"; private static final String inputStreamName = "TextInputStream"; private static final String outputStreamName = "WordCountOutputStream"; private static DataStream<String> createSourceFromStaticConfig( StreamExecutionEnvironment env) { Properties inputProperties = new Properties(); inputProperties.setProperty(ConsumerConfigConstants.AWS_REGION, region); inputProperties.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST"); return env.addSource(new FlinkKinesisConsumer<>(inputStreamName, new SimpleStringSchema(), inputProperties)); } private static FlinkKinesisProducer<String> createSinkFromStaticConfig() { Properties outputProperties = new Properties(); outputProperties.setProperty(ConsumerConfigConstants.AWS_REGION, region); FlinkKinesisProducer<String> sink = new FlinkKinesisProducer<>(new SimpleStringSchema(), outputProperties); sink.setDefaultStream(outputStreamName); sink.setDefaultPartition("0"); return sink; } public static void main(String[] args) throws Exception { final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); DataStream<String> input = createSourceFromStaticConfig(env); input.flatMap(new Tokenizer()) .keyBy(0) .timeWindow(Time.seconds(5)) .sum(1) .map(new MapFunction<Tuple2<String, Integer>, String>() { @Override public String map(Tuple2<String, Integer> value) throws Exception { return value.f0 + "," + value.f1.toString(); } }) .addSink(createSinkFromStaticConfig()); env.execute("Word Count"); } public static final class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> { @Override public void flatMap(String value, Collector<Tuple2<String, Integer>> out) { String[] tokens = value.toLowerCase().split("\\W+"); for (String token : tokens) { if (token.length() > 0) { out.collect(new Tuple2<>(token, 1)); } } } } } The most important part of the application is the manipulation of the input object, where I apply a few DataStream Transformations: - I start with a DataFrame containing the String from the input stream. - I use a Tokenizer in a FlatMap to split the sentence into “words”, each word followed by the number “1”. - I apply the KeyBy operator to logically partition the stream in respect to the “word”. - I use a 5 seconds tumbling window. - I aggregate within the window, summing up for each word the number “1” to count them. - I use a simple Map for each record to join the word and the number into a comma-separated values (CSV) String that I send to the output stream. One of the most powerful operators shown here is the KeyBy operator. It enables you to re-organize a particular stream by a specified key in real-time. This type of re-keying enables further downstream operations like aggregations, counts, and much more. This enables you to set up streaming map-reduce on different keys within the same application. I build the Java application using Maven and load the output JAR to an Amazon Simple Storage Service (S3) bucket in the region where I want to deploy the application. In the Kinesis Data Analytics console, I create a new application and select “Flink” as runtime: I then configure the application to use the code on my S3 bucket. The console updates the IAM role for the application to have permissions to read the code. You can optionally add key/value properties to the configuration of the application. You can read those properties from within the application, to provide customization at deployment time. For monitoring, I leave the default metrics. I enable logging to Amazon CloudWatch, for errors only. Don’t forget to add permissions to the IAM role created by the console to allow the Kinesis Analytics application to read and write from the streams used for input and output, TextInputStream and WordCountOutputStream in my case. I can now start the application with the “Run” button, and when it is running, I use a script that I prepared to put some text (I am using a description of the Amazon Kinesis platform) in the input stream: $ python put_records.py TextInputStream Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data... The behavior of my application is summarized in the console in the Application Graph, a visual representation of the data flow consisting of operators and intermediate results (complex applications, using multiple streams, have a much more interesting graph): To read the output stream, I am using a Lambda function written in Python. I am using the one provided with the Kinesis Record Aggregation & Deaggregation Modules for AWS Lambda, that provides automatic “de-aggregation” of records aggregated by the Amazon Kinesis Producer Library (KPL). As expected, in the CloudWatch Logs console I get the list of the words and the number of times they were used, updated every 5 seconds by the Lambda function: Pricing and Availability With Amazon Kinesis Data Analytics for Java, you pay only for what you use. Pricing is similar to Amazon Kinesis Data Analytics for SQL, but there are a few differences. For Java applications, you are charged a single additional Amazon Kinesis Processing Unit (KPU) per application, used for application orchestration. Java applications are also charged for running application storage and durable application backups. Running application storage is used for Amazon Kinesis Data Analytics’ stateful processing capabilities and is charged per GB-month. Durable application backups are optional and provide a point-in-time recovery point for applications, charged per GB-month. For example, pricing is $0.11 per KPU hour in US East (N. Virginia), and you are charged for running application storage ($0.10 per GB-month) and durable application backups ($0.023 per GB-month). Available Now Amazon Kinesis Data Analytics for Java is available now in US East (N. Virginia), US East (Ohio), US West (Oregon), EU West (Ireland). More information is available in the Kinesis Data Analytics developer guide for Java Applications. I only scratched the surface of the capabilities for stream processing enabled by the support of Java in Amazon Kinesis Data Analytics. I think this is a powerful tool that can enable new use cases. Let me know what you are going to build with it!
https://aws.amazon.com/blogs/aws/new-amazon-kinesis-data-analytics-for-java/
CC-MAIN-2019-09
en
refinedweb
Opened 6 years ago Last modified 3 years ago #19149 new Bug Generic Relation not cascading with Multi table inheritance. Description Generic Relations don't cascade on delete if you are using the generic relation of a superclass in a subclass, then deleting the subclass does not result in the normal cascade behaviour. For example using the following models: class TaggedItem(models.Model): tag = models.SlugField() content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() content_object = generic.GenericForeignKey('content_type', 'object_id') def __unicode__(self): return self.tag class Post(models.Model): title = models.TextField(blank=True) relation = generic.GenericRelation(TaggedItem) def __unicode__(self): return self.title class ParentPost(Post): description = models.TextField(blank=True) def __unicode__(self): return self.description If you create a ParentPost and a TaggedItem related using the GenericForeignKey: >>> p = ParentPost() >>> p.>> p.>> p.save() >>> t = TaggedItem(content_object=p, tag="This is a tag") >>> t.save() >>> TaggedItem.objects.all() [<TaggedItem: Ypo>, <TaggedItem: This is a tag>] and then delete the ParentPost, the TaggedItem is not also deleted, rather it just points to a None object. >>> p.delete() >>> TaggedItem.objects.all() [<TaggedItem: Ypo>, <TaggedItem: This is a tag>] >>> print t.content_object None If you repeat the same procedure with the Post model you get the expected behaviour where the TaggedItem is deleted. Change History (15) comment:1 Changed 6 years ago by comment:3 Changed 6 years ago by comment:4 Changed 6 years ago by Don't mark your own patch as RFC. comment:5 Changed 6 years ago by comment:6 Changed 6 years ago by The patch isn't correct - doing if not objs: return [] doesn't return a queryset. comment:7 Changed 6 years ago by comment:8 Changed 6 years ago by This fixes the merge conflict and returns an empty queryset: comment:9 Changed 6 years ago by The patch doesn't work correctly for parent model associations. I modified the test case to this: def test_inherited_models_delete(self): """ Test that when deleting a class that inherits a GenericRelation, the correct related object is deleted on cascade. """ p = Post.objects.create(title="This is a title", description="This is a description") ppost = ParentPost.objects.get(pk=p.pk) t1 = TaggedItem.objects.create(content_object=p, tag="This is a tag") t2 = TaggedItem.objects.create(content_object=ppost, tag="This is anoter tag") ppost_ct = ContentType.objects.get_for_model(ParentPost) self.assertEqual(list(TaggedItem.objects.all().order_by('tag')), [t1, t2]) self.assertEqual( list(TaggedItem.objects.filter(content_type=ppost_ct)), [t2]) self.assertEqual(list(Post.objects.all()), [p]) p.delete() self.assertEqual(list(TaggedItem.objects.all()), []) The last line doesn't pass - the parent model's taggeditem isn't deleted. So, the patch just moves the problem around. At minimum both parent and child model deletion should delete the taggeditems for both parent and child. There are more problems in parent <-> child deletions in general If there are more than one child for a parent and you delete one of the childs, then the other childs will not be deleted. Also, each parent model retrieval executes a separate query. A more generic solution to all of the problems would be welcome, though just solving this ticket's issue is a good approach, too... comment:10 Changed 6 years ago by The proper deletion of inheritance chains is somewhat ugly problem. It seems the best approach is to start the deletion from base parents (that is, from those parent models which do not have any concrete parents), then cascade down from there. This should result in correct, though not totally optimal code. OTOH it doesn't matter how fast the deletion code is if it isn't correct... I will see if I can write a patch using the above idea. I think it will be faster than current code even if it isn't optimal, as each parent model fetch will issue separate query at the moment. comment:11 Changed 6 years ago by comment:12 Changed 5 years ago by This is the PR: I based my unit test on akaariai's unit test. I did not try to fix other problems (each parent model retrieval executes a separate query, if there are more than one child for a parent and you delete one of the childs, then the other childs will not be deleted), because I think those deserve separate tickets. comment:13 Changed 5 years ago by comment:14 Changed 5 years ago by I rebased the PR, but there's a failing test: ====================================================================== FAIL: test_cascade_delete_proxy_model_admin_warning (proxy_models.tests.ProxyModelAdminTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/tim/code/django/tests/proxy_models/tests.py", line 386, in test_cascade_delete_proxy_model_admin_warning collector.collect(ProxyTrackerUser.objects.all()) File "/home/tim/code/django/django/test/testcases.py", line 110, in __exit__ query['sql'] for query in self.captured_queries AssertionError: 13 queries executed, 7 expected Be sure to uncheck "Patch needs improvement" if you can fix it so the ticket shows up for review. This should be a relatively easy fix. At the moment the delete query is adding a WHERE clause with the id of the parent (in this case Post). We need to change that to use the id of the actual object being deleted.
https://code.djangoproject.com/ticket/19149
CC-MAIN-2019-09
en
refinedweb
Custom FileBrowserContentProvider FileBrowserContentProvider provides the ability for using different files sources for the FileBrowser dialogs (Document Manager, Flash Manager, Image Manager, Media Manager and Template Manager). Implementing a Custom FileBrowserContentProvider By default the editor file-based dialogs, such as ImageManager, FlashManager etc. read files from and upload files to physical directories within the web application. You may need an alternative mechanism in some scenarios, e.g. integrating RadEditor to existing CMS systems that have an established way of dealing with file resources. RadEditor offers the ability to implement custom content providers that are plugged into the file-browser dialogs instead of the default provider. This enables the use of database, XML files for storing of files and file information. To create a custom content provider you need to create a descendant of FileBrowserContentProvider and implement it's methods.FileBrowserContentProvider is in the Telerik.Web.UI.Widgets namespace. Implementation Overview The steps to implement a custom FileBrowserContentProvider are: Extend the abstract Telerik.Web.UI.Widgets.FileBrowserContentProvider class and implement its methods. Set the dialog's property ContentProviderTypeName e.g. RadEditor1.ImageManager.ContentProviderTypeName = "DatabaseFileBrowser, EditorWebApp" where the value of the property ContentProviderTypeName should be the qualified assembly name of your custom content provider. The general format of the assembly name should be "Full.Class.Name.Including.The.Namespace, Assembly.Name". For example if your content provider class is in a separate project and is declared like this: C# using Telerik.Web.UI.Widgets; namespace RadEditorCustomProvider { public class MyContentProvider : FileBrowserContentProvider { //...override base methods } } VB Imports Telerik.Web.UI.Widgets Namespace RadEditorCustomProvider Public Class MyContentProvider Inherits FileBrowserContentProvider ' ...override base methods End Class End Namespace ...when it is compiled in the ContentProviders.dll assembly, you should set the following value: RadEditor1.ImageManager.ContentProviderTypeName = "ContentProviders.RadEditor.DatabaseContentProvider,ContentProviders" In .NET 2.0 web sites (as opposed to web application projects0, the code is not explicitly compiled into an assembly and that's why you should use App_Code instead of Assembly.Name: RadEditor1.ImageManager.ContentProviderTypeName = "ContentProviders.RadEditor.DatabaseContentProvider, App_Code" If the assembly where the class is defined will be located in the Global Assembly Cache (GAC), you should include the full assembly name, version, public key token, etc. However, there is an easier way to set the value of this property from the code-behind which will work in all cases: RadEditor1.ImageManager.ContentProviderTypeName = typeof(DatabaseContentProvider).AssemblyQualifiedName FileBrowserContentProvider Architecture At present the editor has content providers for physical folders and for Microsoft CMS. Many of the decisions for the content provider architecture were affected by the need for the MCMS support.Here are the requirements that needed to be supported: Support for listing directories as a hierarchical tree and as a flat list. Support for creating thumbnail images. Support for multiple "virtual" root folders, each residing at a completely independent physical location. Support for AJAXbased directory requests. The support for these requirements added certain complexity which could not be avoided, yet most of the work is taken care of by the editor and implementing a custom provider is fairly straightforward, considering the information here and using the sample implementation provided as a starting point. The ContentProvider revolves around the idea of Files and Directories - these being two separate, slightly different objects. When a request comes for a particular node (if no node is specified, the root node is assumed), a hierarchical tree is generated containing information only for the direct descendents (files and folders) of the node. This "load-on-demand" AJAX based approach allows for small footprint and response time. There are methods to be implemented for creating a directory, creating a file, deleting a directory, deleting a file and several thumbnail-creation related methods. Implementing FileBrowserContentProvider To get started implementing FileBrowserContentProvider: Add the Telerik.Web.UI.Widgets namespace to your "uses" (C#) or "Imports" (VB) section of code. Create a new class, e.g. "MyFileBrowserContentProvider" that descends from FileBrowserContentProvider. In C# projects: right-click the FileBrowserContentProvider declaration and select Implement Abstract Class from the context menu. This step will create all the methods that can be implemented: Add a constructor with the signature shown in the code example below. The constructor provides basic dialog and path information.The "Context" parameter allows you to access the current HTTP state including the HTTP Request object: C# public MyFileBrowserContentProvider(HttpContext context, string[] searchPatterns, string[] viewPaths, string[] uploadPaths, string[] deletePaths, string selectedUrl, string selectedItemTag) : base(context, searchPatterns, viewPaths, uploadPaths, deletePaths, selectedUrl, selectedItemTag) { } in VB.NET: VB Imports Telerik.Web.UI.Widgets Public MustInherit Class MyFileBrowserContentProvider Inherits FileBrowserContentProvider) End Sub End Class From this starting point you can implement the FileBrowserContentProvider methods to suit your particular purpose. See Custom File Dialogs Content Provider LiveDemo for a running example. Important Implementation Details Some important details aimed at reducing the overall time needed by developers to implement the custom content provider: Since the FileBrowserContentProvider class needs a number of parameters to be configured when created, it has no default constructor. This means that the subclass must explicitly define a similar constructor with the exact same number of arguments and it needs to explicitly make a call to the parent constructor. The DirectoryItem class has two properties Files and Directories of array type. These are read-only and cannot be replaced, once set. These properties also need to be set during the DirectoryItem construction phase. This could have some implications on the exact algorithm for building the file tree. The information here is for the ImageManager, but is valid for each of the remaining file-browser dialogs:If the property editor property ViewPaths is not set, the ResolveRootDirectoryAsTree method will not be called at all! This is because it will be assumed that no directory browsing should be allowed for the particular user using the editor. The ResolveRootDirectoryAsTree is called for each item in the ViewPaths array. In order to allow creation of directories (and the CreateDirectory method to be called) the CanCreateDirectory property must return true. GetFile, GetFileName, GetPath and StoreBitmap methods are only related to the Thumbnail creation functionality and are not called by the regular file browser. Their implementation can be postponed for the end once all other functionality is working. Sample FileBrowserContentProvider implementation The sample implementation is using a database for information storage, and a single physical directory as a file storage. The screenshots below explain the relationship between the actions that can be taken in the editor (using the Custom File Dialogs Content Provider LiveDemo) and the methods that must be implemented in your FileBrowserContentProvider descendant implementation: The compilator will not complain if the FileBrowserContentProvider's methods bellow are not overridden, but it is highly recommended to override them as well: public override bool CheckDeletePermissions( string folderPath) { return base.CheckDeletePermissions(folderPath); } public override bool CheckWritePermissions( string folderPath) { return base.CheckWritePermissions(folderPath); } //Introduced in the 2010.2.826 version of the control public override bool CheckReadPermissions( string folderPath) { return base.CheckReadPermissions(folderPath); } Public Overrides Function CheckDeletePermissions(ByVal folderPath As String) As Boolean Return MyBase.CheckDeletePermissions(folderPath) End Function Public Overrides Function CheckWritePermissions(ByVal folderPath As String) As Boolean Return MyBase.CheckWritePermissions(folderPath) End Function ' Introduced in the 2010.2.826 version of the control Public Overrides Function CheckReadPermissions(ByVal folderPath As String) As Boolean Return MyBase.CheckReadPermissions(folderPath) End Function
https://docs.telerik.com/devtools/aspnet-ajax/controls/editor/functionality/dialogs/examples/custom-filebrowsercontentprovider
CC-MAIN-2019-09
en
refinedweb
Please forgive the newbie question, but I am indeed new to BioPython. I'm just simply trying to parse a large file in Genbank format to FASTA format and am using Bio.SeqIO in BioPython. I'm looking to parse an output file with the Accession number and Taxon in the FASTA > header and then the Genbank Taxonomy instead of the nucleotide sequence. I am comfortable with parsing just the fasta title and sequence. What I am doing is constructing a file to train RDP classifier for a Eukaryote marker gene (one does not already exist for my marker). The output I am looking for is: X62988Emericellanidulans Eukaryota; Fungi; Dikarya; Ascomycota; Pezizomycotina; Eurotiomycetes; Eurotiomycetidae; Eurotiales; Trichocomaceae; Emericella; Emericella nidulans. or: 573145 Bacteria; Proteobacteria; Gammaproteobacteria; Enterobacteriales; Enterobacteriaceae; Escherichia; Escherichia sp. This is what I have used for simple nucleotide parsing. from Bio import SeqIO%s %s\n%s\n" % ( seq_record.id, seq_record.description, seq_record.seq.tostring())) output_handle.close() input_handle.close() print "Completed" I know this is probably a simple fix, but I've searched for a long while and can't find an output in SeqIO for the taxonomy string, does anyone have any recommendations for modifying the above script? Thanks so much for helping me out... I'm pretty new to the BioPython parsing here. My best to you all.
https://www.biostars.org/p/18664/
CC-MAIN-2019-09
en
refinedweb
Introduction Windows® Azure™ Mobile Service is an Azure service offering designed to provide cloud based backend data source capabilities for your windows applications. By default the mobile Services instance in Azure uses Azure SQL Database to store data. Azure SQL Database is a relational database service that extends core SQL Server features to the cloud. In addition to data store, Mobile Services provide a turnkey way to authenticate users and send push notifications. With SDKs for Windows, Android, iOS, and HTML as well as a powerful and flexible REST API, Mobile Services lets you to build connected applications for any platform and delivers a consistent experience across devices. In this article, you will learn how to use mobile services to fetch and save Windows Phone 8 app data on the cloud. Scenario You are going to create a windows phone application that can save user profile information in Azure SQL database tables.You are going to create a windows phone application that can save user profile information in Azure SQL database tables. Prerequisites 1. Windows Account to access Windows Azure (Free) 2. Windows Azure account (At least Free Trial version) 3. Visual Studio 2013 4. Windows 8 system or VM 5. Windows Phone 8 SDK Creating a Windows Phone 8 Sample Application This application will consume Mobile Service as backend. For this demo, I have created a sample app called “ProfileManager”. You can download it from here: The Profile Manger app has the following screens to create and view user’s profiles. Profile Manager Creating Mobile Service in Azure You can create a mobile service in Windows Azure by following these steps: 1. Login to 2. Select “+New” at bottom 3. From menu select mobile service -> Create Create Mobile Service 4. Give a proper name to mobile service and select new or existing database instance: New Mobile Service 5. Mobile service uses Azure SQL Database, so when you create a mobile service a new database will be created. You can create a new database or use any existing one. In this demo, I am going to create a new database along with its server. New Mobile Service 6. In this way a new mobile service with the name “ProfileMobileSvc” will be created. 7. Create the required tables in this new “ProfileMobileSvc_Db” database. - Open mobile service “ProfileMobileSvc”. - Go to the Data tab, and select “Add a table” to add “UserProfile” as new table. - Open the table and add more columns: Name, Email, and Phone. Now you are ready to use this mobile service as a backend in the Windows Phone 8 app “ProfileManager”. Updating the App to Access Mobile Service in a Windows Phone 8 App You need to update the phone client app to add WindowsAzure.MobileServices package and then add code to access Azure mobile services. You can follow these steps: 1. Search for “WindowsAzure.MobileServices” in Nuget and add the “Windows Azure Mobile Services” package to ProfileManager project. 2. Update App in App.xaml.cs to add the following (Key is from newly created Azure Mobile Service “ProfileMobileSvc” details for existing app): using Microsoft.WindowsAzure.MobileServices; public static MobileServiceClient MobileService = new MobileServiceClient( "", "<xxxxxxxxxxxxxxxxxxxxxxxxxx>"); //Put key of your mobile service 3. Add a new folder “Model” to the project, and then add the following class. This class represents the table in the cloud, which you created while configuring the mobile service database. using Newtonsoft.Json; namespace ProfileManager.Model { public class UserProfile { [JsonProperty(PropertyName = "id")] public string Id { get; set; } [JsonProperty(PropertyName="name")] public string Name { get; set; } [JsonProperty(PropertyName = "email")] public string Email { get; set; } [JsonProperty(PropertyName = "phone")] public string Phone { get; set; } } } 4. Update MainViewModel add the following field: private IMobileServiceTable<UserProfile> userProfileTbl = App.MobileService.GetTable<UserProfile>(); Change the Items property to the following: private ObservableCollection<ItemViewModel> items; public ObservableCollection<ItemViewModel> Items { get { return items; } private set { items = value; NotifyPropertyChanged("Items"); } } Change the LoadData method to the following (It is not asynchronous method): items.Clear(); public async void LoadData() { items.Clear(); // Sample data; replace with real data var azureData = true; MobileServiceCollection<UserProfile, UserProfile> profiles; if (azureData) { profiles = await userProfileTbl.ToCollectionAsync(); foreach (var profile in profiles) { items.Add(new ItemViewModel() { ID = profile.Id.ToString(), Name = profile.Name, Email = profile.Email, Phone = profile.Phone }); } } else { items.Add(new ItemViewModel() { ID = "0", Name = "Manoj Kumar", Email = "manoj@gmail.com", Phone = "172890567" }); } Items = items; this.IsDataLoaded = true; } 5. Update EditPage to save a new user profile to Azure using mobile service. Add field: private IMobileServiceTable<UserProfile> userProfileTbl = App.MobileService.GetTable<UserProfile>(); Update save method to the following: private async void saveBtn_Click(object sender, RoutedEventArgs e) { await userProfileTbl.InsertAsync(new UserProfile() { Name=txtName.Text, Phone=txtPhone.Text, Email=txtEmail.Text}); App.ViewModel.LoadData(); NavigationService.Navigate(new Uri("/MainPage.xaml", UriKind.Relative)); } 6. Build and run the application. 7. Click “Add User” and add a new user profile. 8. Verify that user is listed in the home screen. 9. Verify that the user has been added into the “UserProfile” table in Azure through Mobile Service “ProfileMobileSvc”. User Profile Profile Some General Guidelines and Limitations of Windows Azure SQL Database - Only TCP/IP connections are allowed. - Windows Azure SQL Database does not support SQL Server Agent or jobs. - Windows Azure SQL Database may not preserve the uncommitted timestamp values of the current database (DBTS) across failovers. - Windows Azure SQL Database does not support tables without clustered indexes. A table must have a clustered index. If a table is created without a clustered constraint, a clustered index must be created before an insert operation is allowed on the table. - By default, Windows Azure SQL Database supports up to 150 databases in each SQL Database server, including the master database. - Windows Azure SQL Database provides two database editions: Web Edition and Business Edition. Web Edition databases can grow up to a size of 5 GB and Business Edition databases can grow up to a size of 150 GB. - Features of 2008 R2 or earlier versions are not supported in Azure SQL: SQL Server Utility, SQL Server PowerShell Provider, Master Data Services, Data Auditing, Data Compression, Policy-Based Management, Backup and Restore, Replication, SQL Server Agent/Jobs, Extended Stored Procedures, Service Broker, Database Mirroring, Table Partitioning, (CLR) and CLR User-Defined Types.
https://mobile.codeguru.com/win_mobile/other/using-windows-azure-sql-storage-to-store-windows-phone-data.htm
CC-MAIN-2019-09
en
refinedweb
Scaling raw PNG in Nine Patch way before saving as Nine Patch image I have got PSD file from graphic designer. There is a big image 1400px x 1000px (text area background). Image is light at the top and dark at the bottom (vertical color gradient). Left and right side are the same (no horizontal color gradient). Standard 9-patch tools(editors) enables export from xxhdpi to ldpi (reduces the size). It's not exactly what I am looking for. I found one tool which finds repeating area in raw PNG and offers to reduce 9-patch to 100px x 1000px. But I would like to go further and also reduce vertical image size. There is just a gradient in vertical direction. Which tool enables me scale repeating area just before saving it as nine patch? I want hdpi nine-patch is 100px x 160px which I will use then for creating ldpi nine-patch. Actually I need to create nine-patch in opposite way. I have big png text box and i need to get nine-patch of it. Thx See also questions close to this topic - How to create a simple reflection using imagemagick with dynamic image sizing? I've trying to use imagemagick to create a simple reflection, however the documentation has fixed sizes. I've tried to read in the height and width and use those variables but this doesn't produce a reflection. Here's the documentation Here's the sample code convert pokemon.gif \( +clone -flip \) -append \ -size 100x100 xc:black +swap \ -gravity North -geometry +0+5 -composite reflect_perfect.png Here's my bash script, with my widths and heights... #!/bin/bash infile="framed.png" ww=`convert $infile -format "%w" info:` hh=`convert $infile -format "%h" info:` convert $infile -alpha on \ \( +clone -flip -channel A -evaluate multiply .35 +channel \) -append \ -size ${ww}x${hh} xc:black +swap \ -gravity North -geometry +0+5 -composite reflect_alpha.png My resulting image is exactly the same as the source image. Here's the exact image I'm using - Image picker not selecting images in selection sequence I am using react-native-image-crop-picker for selecting multiple images. When I am selecting the images from mobile in some sequence they are displaying in random sequence. but not displaying on my selection sequence. Why is like this? Does am I missing anything? How can I solve it? Adding mobile screenshots for more info. Image selection sequence and image displaying sequence [ - Kubernetes - Error message ImagePullBackOff when deploy a pod Hello kubernetes developers, i get the error 'ImagePullBackOff' if deploy a pod in kubernetes. Pulling in docker to get the image from git-hub repository is no problem. But what is wrong with my configuration? I tried this workaround to create a secret-key with the following command. kubectl create secret docker-registry secretkey \ --docker-server=registry.hub.docker.com \ --docker-username=reponame \ --docker-password=repopassword \ --docker-email=repoemail And this is the yaml file to create the kubernetes pod. apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: io.kompose.service: gps-restful-server name: gps-restful-server spec: containers: - image: tux/gps:latest name: gps-restful-server ports: - containerPort: 8080 resources: {} volumeMounts: - mountPath: /var/www/html/modules name: gps-modules - mountPath: /var/www/html/profiles name: gps-profile - mountPath: /var/www/html/themes name: gps-theme - mountPath: /var/www/html/sites name: gps-sites imagePullPolicy: Always restartPolicy: OnFailure imagePullSecrets: - name: secretkey volumes: - name: gps-modules persistentVolumeClaim: claimName: gps-modules - name: gps-profile persistentVolumeClaim: claimName: gps-profile - name: gps-theme persistentVolumeClaim: claimName: gps-theme - name: gps-sites persistentVolumeClaim: claimName: gps-sites status: {} To deploy the pod in kubernetes, i execute the command: kubectl create -f gps-restful-server-pod.yaml. Get the status from the pod: kubectl get all NAME READY STATUS RESTARTS AGE pod/telemetry-restful-server 0/1 ImagePullBackOff 0 12m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1h Description of the pod: kubectl describe pod gps-restful-server Name: gps-restful-server Namespace: default Priority: 0 PriorityClassName: <none> Node: minikube/192.168.178.92 Start Time: Thu, 14 Feb 2019 16:56:25 +0100 Labels: io.kompose.service=gps-restful-server Annotations: <none> Status: Pending IP: 172.17.0.3 Containers: gps-restful-server: Container ID: Image: tux/gps:latest Image ID: Port: 8080/TCP Host Port: 0/TCP State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-4t28k (ro) /var/www/html/modules from gps-modules (rw) /var/www/html/profiles from gps-profile (rw) /var/www/html/sites from gps-sites (rw) /var/www/html/themes from gps-theme (rw) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: gps-modules: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: gps-modules ReadOnly: false gps-profile: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: gps-profile ReadOnly: false gps-theme: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: gps-theme ReadOnly: false gps-sites: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: gps-sites ReadOnly: false default-token-4t28k: Type: Secret (a volume populated by a Secret) SecretName: default-token-4t28k Optional: false QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m default-scheduler Successfully assigned default/gps-restful-server to minikube Normal Pulling 2m (x4 over 4m) kubelet, minikube pulling image "tux/gps:latest" Warning Failed 2m (x4 over 4m) kubelet, minikube Failed to pull image "tux/gps:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for tux/gps, repository does not exist or may require 'docker login' Warning Failed 2m (x4 over 4m) kubelet, minikube Error: ErrImagePull Warning Failed 2m (x6 over 4m) kubelet, minikube Error: ImagePullBackOff Normal BackOff 2m (x7 over 4m) kubelet, minikube Back-off pulling image "tux/gps:latest" How it's possible to pull the image from docker-hub in kubernetes? -. - How can i set the image inside a nine patch image custom image I am setting up a app and i need to set the image inside my nine patch image given below like the app in the description see the second screenshot of the app I Tried setting the nine patch image in the xml file but it doesn't help much do i need to set the custom view instasquare link of the app i am trying to make - Weird artifact showing through nine-patch button I have created a simple 9patch to use for my button graphic. When I view it as part of the layout editor, it appears to look okay. However, when I run it in an emulator I get this ugly looking artifact/relic around the bottom left/top right corners. Does anyone know what causes this, and how I might fix it? I'm currently applying the 9patch using the android:background property. - 9Patch image displays large black block I have created a 9patch image using android studio and the top part of the patches looks like the below image. (patches have been marked) The bottom part looks like the below image. (patches have been marked) The below image is the created 9patch image. When i run the application i get the large black block displayed in the below image. How can i get rid of the large black block?
http://quabr.com/53161791/scaling-raw-png-in-nine-patch-way-before-saving-as-nine-patch-image
CC-MAIN-2019-09
en
refinedweb
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of Open status. Section: 18.3 [assertions] Status: Open Submitter: Jonathan Wakely Opened: 2017-08-18 Last modified: 2018-08-20 Priority: 2 View other active issues in [assertions]. View all other issues in [assertions]. View all issues with Open status. Discussion: The C standard says that the expression in an assert must have a scalar type, and implies (or at least allows) that the condition is tested by comparison to zero. C++ says that the expression is a constant subexpression if it can be contextually converted to bool. Those ways to test the condition are not equivalent.It's possible to have expressions that meet the C++ requirements for a constant subexpression, but fail to meet the C requirements, and so don't compile. #include <stdlib.h> // A toy implementation of assert: #define assert(E) (void)(((E) != 0) || (abort(), 0)) struct X { constexpr explicit operator bool() const { return true; } }; constexpr bool f(const X& x) { assert(x); return true; } C++ says that assert(x) is a constant subexpression, but as it doesn't have scalar type it's not even a valid expression.I think either 18.3.1 [cassert.syn] or 18.3.2 [assertions.assert] should repeat the requirement from C that E has scalar type, either normatively or in a note. We should also consider whether "contextually converted to bool" is the right condition, or if we should use comparison to zero instead. [2017-11 Albuquerque Wednesday night issues processing] Priority set to 2; status to Open Jonathan is discussing this with WG14 [2018-08-20, Jonathan comments] This was reported to WG14 as N2207. Proposed resolution:
https://cplusplus.github.io/LWG/issue3011
CC-MAIN-2019-09
en
refinedweb
An Interruptible YieldInst! Today’s article is inspired by a comment asking just this question: how can I interrupt a yield instruction? For example, if you yield return new WaitForSeconds(60) then can you stop the yield after only 15 seconds? Do you have to wait for the whole minute to get control back to your coroutine function? Strictly speaking, the answer is “no”. Once you’ve yielded then there’s no taking it back without stopping the whole coroutine. However, you can design a replacement for classes like YieldInstruction that can be interrupted. In 5.2, you’d do something like this: public static class WaitForSecondsIterator { public static IEnumerable Run(float numSeconds) { var startTime = Time.time; while (Time.time - startTime < numSeconds) { yield return null; } } } IEnumerator Coroutine() { foreach (var cur in WaitForSecondsIterator.Run(3)) { if (Input.GetMouseButtonDown(0)) { break; } yield return cur; } } Now the coroutine only yields null instead of a WaitForSeconds. This means that Unity will resume the coroutine the very next frame rather than waiting for the specified number of seconds. We can capitalize on this opportunity by performing whatever logic we want to on each frame. In this case, WaitForSecondsIterator.Run checks the Time.time whenever it’s resumed. If it hasn’t been long enough, it yields. Otherwise, it stops. The loop over WaitForSecondsIterator.Run also gets an opportunity to perform some logic. Each iteration it checks to see if the mouse button is down. If it is, it stops yielding by breaking out of the loop. Otherwise, it keeps yielding. This is a lot more code than a one-liner yield return new WaitForSeconds(60), but we’ve got custom control now. It really didn’t grow by much more than the extra logic we wanted to add (the if check), so it’s definitely manageable. We also got a reusable WaitForSecondsIterator.Run function that we can use any time we want an interruptible version of WaitForSeconds. Enter Unity 5.3. Now we have a CustomYieldInstruction class where all we need to do is override the keepWaiting property. Does this allow us to simplify the code to solve this problem? Let’s start with a straightforward implementation and see: public class WaitForSecondsOrMouseButton : CustomYieldInstruction { private float numSeconds; private float startTime; public WaitForSecondsOrMouseButton(float numSeconds) { startTime = Time.time; this.numSeconds = numSeconds; } public override bool keepWaiting { get { return Time.time - startTime < numSeconds && Input.GetMouseButtonDown(0) == false; } } } IEnumerator Coroutine() { yield return new WaitForSecondsOrMouseButton(3); } This version radically simplified the coroutine code! Now it’s just one line like the original, uninterruptible version. That’s ideal for the coroutine, but the WaitForSecondsOrMouseButton is no longer very reusable. That’s because we’ve moved the mouse button-checking logic into the same class that checks for the time. Two very different checks are now bound together into one bundled package. So how can we split those up to return some customization to the coroutine? Well, we can make an InterruptibleYieldInstruction class that is interruptible by arbitrary logic. This class won’t know about mouse button presses or time, so it should be reusable by a whole variety of custom, interruptible yield instructions. Here’s what it looks like: public class InterruptibleYieldInstruction : CustomYieldInstruction { private bool stop; public event Action<InterruptibleYieldInstruction> OnKeepWaiting; public void Stop(bool condition) { if (condition) { stop = true; } } public override bool keepWaiting { get { if (stop) { return false; } if (OnKeepWaiting == null) { return true; } OnKeepWaiting(this); return stop == false; } } } To use it, add event listeners to OnKeepWaiting to do your custom logic. They’ll be passed the InterruptibleYieldInstruction instance and you can call Stop on it with your condition. It’s similar to an assert function. Now let’s see how WaitForSeconds would be ported to be a InterruptibleYieldInstruction: public class InterruptibleWaitForSeconds : InterruptibleYieldInstruction { public InterruptibleWaitForSeconds(float numSeconds) { var startTime = Time.time; OnKeepWaiting += i => i.Stop(Time.time - startTime >= numSeconds); } } That’s a pretty simple implementation! It’s about as simple as the WaitForSecondsIterator.Run function was at the start of the article. But how hard is it to use in the coroutine? Let’s see: IEnumerator Coroutine() { var waitForSeconds = new InterruptibleWaitForSeconds(3); waitForSeconds.OnKeepWaiting += i => i.Stop(Input.GetMouseButtonDown(0)); yield return waitForSeconds; } The one-liner has expanded to three lines of code, but we’ve regained the reusability. InterruptibleWaitForSeconds does the time check and the coroutine’s own lambda does the mouse button check. If we wanted, we could go even further and make a class that does both checks so the coroutine would be a one-liner again: public class InterruptibleWaitForSecondsOrMouseButton : InterruptibleWaitForSeconds { public InterruptibleWaitForSecondsOrMouseButton(float numSeconds) : base(numSeconds) { OnKeepWaiting += i => i.Stop(Input.GetMouseButtonDown(0)); } } IEnumerator Coroutine() { yield return new InterruptibleWaitForSecondsOrMouseButton(3); } So the flexibility is there to split out the interruption checks with class inheritance, lambdas in the coroutine itself, or even collections of arbitrary functions. What do you think of InterruptibleYieldInstruction? Do you prefer the CustomYieldInstruction way in 5.3 or the iterator function way in 5.2? Let me know in the comments! #1 by Timo Neu on September 6th, 2018 · | Quote Hey Jackson! Thanks a lot for this one! This was exactly what I was searching for. This solution is perfectly fitin’ into my Dialog System which should wait for a given amount of seconds or an input of the user. Best regards Timo
https://jacksondunstan.com/articles/3411
CC-MAIN-2019-09
en
refinedweb
Roberta-LARGE finetuned on SQuADv2 This is roberta-large model finetuned on SQuADv2 dataset for question answering answerability classification Model details This model is simply an Sequenceclassification model with two inputs (context and question) in a list. The result is either [1] for answerable or [0] if it is not answerable. It was trained over 4 epochs on squadv2 dataset and can be used to filter out which context is good to give into the QA model to avoid bad answers. Model training This model was trained with following parameters using simpletransformers wrapper: train_args = { 'learning_rate': 1e-5, 'max_seq_length': 512, 'overwrite_output_dir': True, 'reprocess_input_data': False, 'train_batch_size': 4, 'num_train_epochs': 4, 'gradient_accumulation_steps': 2, 'no_cache': True, 'use_cached_eval_features': False, 'save_model_every_epoch': False, 'output_dir': "bart-squadv2", 'eval_batch_size': 8, 'fp16_opt_level': 'O2', } Results {"accuracy": 90.48%} Model in Action 🚀 from simpletransformers.classification import ClassificationModel model = ClassificationModel('roberta', 'a-ware/roberta-large-squadv2', num_labels=2, args=train_args) predictions, raw_outputs = model.predict([["my dog is an year old. he loves to go into the rain", "how old is my dog ?"]]) print(predictions) ==> [1] Created with ❤️ by A-ware UG - Downloads last month - 28 Text Classification Examples Examples This model can be loaded on the Inference API on-demand.
https://huggingface.co/aware-ai/roberta-large-squad-classification
CC-MAIN-2022-27
en
refinedweb
Programmable, extensible and easy to use notification system Project description Programmatically send notifications! About Have you ever been in a situation where you’ve been simply twiddling your thumbs, waiting for your program to finish compiling/running? Are you into Deep Learning and need a way to notify yourself when your program crashes or is done learning to do the impossible? Then Herald is for you! With a simple, extensible and pythonic interface, you can get setup with a programmatic way of notifying yourself and/or your teammates about different events in your code. The following platforms are currently supported: - GMail - Twilio Need to use it with a custom platform? You can easily write your own notifier and plug it in to handle that, making Herald infinitely extensible. Installation The easy pip install herald-notify. Usage The primary way to use Herald is like a context manager. E.g. import herald from herald import notifiers # Send yourself a mail in Gmail to notify you # Assumes your Gmail tokens have been set up properly notifier = notifiers.GmailNotifier() with Herald(notifier, message="Model Trained!"): # super long running process train_model() ... You should get an email in your registered Gmail account at the end of the program. You can also specify notifications at arbitrary points via the notifier call: import herald from herald import notifiers notifier = notifiers.TerminalNotifier("Whoop de doo!") notifier.notify("A new custom message") # Send the original message from the constructor notifier.notify() Contributing If you find bugs, please feel free to submit an Issue, or even better, a Pull Request! Development To set up your dev environment, perform the following steps: - Clone Herald - Inside the root directory, run pipenv shell to open a shell. - Finally run pipenv install to install all the dependencies. At this point, you should be good to go! Testing WIP Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/herald-notify/
CC-MAIN-2022-27
en
refinedweb
Github user ravipesala commented on a diff in the pull request: --- Diff: core/src/main/java/org/apache/carbondata/core/datamap/DataMapMeta.java --- @@ -14,37 +14,25 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -package org.apache.carbondata.core.indexstore; -import java.io.DataOutput; +package org.apache.carbondata.core.datamap; -/** - * Data Map writer - */ -public interface DataMapWriter<T> { +public class DataMapMeta { - /** - * Initialize the data map writer with output stream - * - * @param outStream - */ - void init(DataOutput outStream); + public DataMapMeta(String indexedColumn, OperationType optimizedOperation) { --- End diff -- I think it should be array of columns, user can create composite index. ---
https://www.mail-archive.com/issues@carbondata.apache.org/msg07860.html
CC-MAIN-2017-34
en
refinedweb
char findTheDifference(char* s, char* t) { int* alph = (int *) calloc (256, sizeof (int)); for (int c = 0; s[c] != '\0'; c++) alph[(int) s[c]]++; for (int c = 0; t[c] != '\0'; c++) alph[(int) t[c]]--; for (int c = 0; c < 256; c++) if (alph[c] != 0) return (char) c; // in case the input is s == t return 'x'; } My C solution. Nothing special Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect.
https://discuss.leetcode.com/topic/83431/my-c-solution-nothing-special
CC-MAIN-2017-34
en
refinedweb
WinJS.Namespace.define function Defines a new namespace with the specified name. For more information, see Organizing your code with WinJS.Namespace. Syntax Parameters - name Type: string The name of the namespace. This could be a dot-separated name for nested namespaces. Type: object The members of the new namespace. Return value Type: Object The newly-defined namespace. Remarks WinJS.Namespace.define and WinJS.Class.define provide special handling for objects of members that look like property descriptors. The property descriptors can only be one of two types: a data descriptor or an accessor descriptor. A data descriptor is a property that has a value, which may or may not be writable. An accessor descriptor is a property described by a getter-setter pair of functions. Additionally, and unless otherwise specified via the property descriptor, properties which have a name that begin with an underscore are marked as unnumerable. Examples The following code shows how to use this function to define a Robotics namespace with a single Robot class. Requirements
https://msdn.microsoft.com/pt-BR/library/windows/apps/br212667
CC-MAIN-2017-34
en
refinedweb
Some decorators and classes to make working with django projects easier. django-drapes is a small library that aims to ease authorization and user input verification. Most of the functionality is packed into decorators intended for applying to views, hence the name django-drapes. The decorators: - verify: Validate and convert values passed to a controller - require: Check for permissions - verify_post: Validate and process POST requests - render_with: Render a dictionary with a template or json There are also two template tags which can be used in combination with these decorators: - if_allowed: Display content depending on user permissions - modelview: Output a model view Decorators verify verify is a decorator that turns values passed to the controller into a more usable form (such as models), and throws suitable exceptions when this does not work. The conversions are specified as keyword arguments with a validator matching the name of the controller argument. The validators have to implement the formencode validator interface. Here is a simple example: from django_drapes import verify import formencode @verify(int_arg=formencode.validators.Int()) def controller(request, int_arg): return 'Argument is %d' % int_arg The controller receives int_arg as an integer, obviating the need to convert in the controller. The values for the conversions are searched in the arguments for the controller function, and additionally the GET parameters if the request is a GET. This causes a mismatch between the url definition and the function signature, since one can’t specify get parameters in a url entry, and a controller normally has to look up a GET parameter in request.GET. Because of this mismatch, in case you want to verify a GET parameter, you should include this parameter as a keyword argument in the controller signature. The most frequently done conversion is selecting a model with a unique field. django-drapes has a built in validator for this kind of conversion, called ModelValidator. It can be used as follows: from django.db import models from django_drapes import verify, ModelValidator class Project(models.Model): slug = models.SlugField(unique=True) @verify(item=ModelValidator(get_by='slug')) def controller(request, item): return "Item's slug is %s" % item.slug An advanced feature implemented by ModelValidator is looking up a model by multiple keys. In order to do this, you should initialize ModelValidator with a list of strings as get_by. These strings should be in the form model_field=view_arg, matching arguments to a view to fields on a model. For example, let’s assume that we have a project where users can create items identified by slugs. Items belonging to different users can have the same slug, and the page for such an item is identified by the name of the user and the slug of the item. In that case, drapes decorators can be used as follows: @verify(owner=ModelValidator(User, get_by='username')) @verify(item=ModelValidator(Project, get_by=['slug=item','owner=owner'])) @render_with('view_item.html') def view_item(request, owner, item): return dict(item=item) This case also demonstrates Mixing the decorators. require require checks permissions on an incoming request to a controller. Just like validate, it accepts keyword arguments with key referring either to user (accessed through request.user) or the positional or keyword arguments of a view function. Value must be a string corresponding to the permission. What the permission refers to is determined in the following order: - An attribute of the object - A method of the object that does not require any arguments - A method of the model permission (a subclass of ModelPermission; see below) that accepts a user as argument. Here is a very simple example: from django.db import models from django_drapes import verify, ModelValidator import formencode class Thing(models.Model): slug = models.SlugField(unique=True) published = models.BooleanField(default=False) @verify(item=ModelValidator(Thing, get_by=slug)) @require(user='is_authenticated', thing='published') def controller(request, thing): return "This thing's slug is %s" % item.slug Permissions can be added to models by subclassing the ModelPermission class, and setting a model as the class attribute: from django.db import models from django.shortcuts import render from django_drapes import (verify, ModelValidator, ModelPermission) class Thing(models.Model): slug = models.SlugField() class ThingPermissions(ModelPermission): model = Thing def can_view(self, user): return user.username == 'horst' @verify(thing=ModelValidator(get_by=slug)) @require(thing='can_view') def controller(request, thing): return render(request, 'thing.htm', dict(thing=thing)) The only person who can view this item is the one named horst. The default selector used by ModelValidator is model id; this can be overriden using the get_by argument, as seen above. verify_post verify_post is a decorator for easing the workflow with form input. The aim is to split the handling of user input through forms into the presentation of empty or erronuous forms, and the processing of a valid form. There are two ways to use verify_post. The first is the simple case, where the same entry point to an app should display a form for GET, and also process it when it gets POSTed. In this case, verify_post.single should be used. This factory method accepts two positional arguments: the form used to verify the POST, and the handler to call if the form validates: from django import forms from django_drapes import verify_post from django.http import HttpResponseRedirect from django.shortcuts import render_to_response #we are assuming the models exist somewhere from .models import Thing from django_drapes import (verify, verify_post, ModelValidator) class ThingForm(forms.Form): name = forms.CharField(required=True, min_length=4) def create_thing(request, item, form): thing = Thing(name=form.data['name']) thing.save() return HttpResponseRedirect(thing.get_absolute_url()) @verify(item=ModelValidator()) @verify_post.single(ThingForm, create_thing) @require(item='can_view') def controller(request, item, invalid_form=None): return render_to_response('form_template.html', dict(form=ThingForm())) Some notes on this example. When you are handling single forms, the controller must have a keyword argument invalid_form. If the form does not validate, it is passed on to the controller through this argument. The handler of the correct form, in this case create_thing, must have the same signature as the controller, except for invalid_form, which is replaced with form in the signature of the correct handler. If you want to use the same entry point to show and validate forms of different kinds, you should use verify_post.multi. This method accepts a list of form options specified with keyword arguments which are the names of the forms on the page. The form options have to be tuples specifying the form for validation and the valid form handler. Here is an example: from django import forms from django_drapes import verify_post from .models import Thing, Organism class ThingForm(forms.Form): name = forms.CharField(required=True, min_length=4) drape_form_name = forms.CharField(required=True, widget=forms.HiddenInput(), initial='thing_form') class OrganismForm(forms.Form): genus = forms.CharField(required=True, min_length=10) drape_form_name = forms.CharField(required=True, widget=forms.HiddenInput(), initial='organism_form') def create_thing(request, form): Thing(name=form.data['name']) def create_organism(request, form): Organism(genus=form.data['genus']) @verify_post.multi(thing_form=(EntityForm, create_entity), organism_form=(OrganismsForm, create_organism)) @require(item='can_view') def controller(request, item, invalid_form=None): return render_to_response('form_template.html', dict(form=ThingForm())) As it can be seen in this example, the hidden field drape_form_name of a form has to match the keyword argument to verify_post which specifies how that form should be handled. One complication for which I couldn’t come up with a decent solution is form validation with a user. In some cases, it is necessary to to initialize a form class with a user; an example is when a value has to be unique per user. In these cases, you have to set the keyword argument pass_user to True for verify_post.single, and a three-element tuple whose last element is True to verify_post.multi. Let me know in case you have a better solution. render_with render_with turns dictionary return values into rendered templates. It requires a string as argument, signifying either a template path or json. render_with then calls django.shortcuts.render with the dictionary-like return value of the controller, and the template name: @render_with('test.htm') def controller(request): return dict(message='Hello world') The default template can be overriden by setting a ‘template’ key in the return dictionary to the desired template name. render_with also respects return values which are subclasses of HttpResponse (e.g. HttpResponseRedirect). If you want to return something else from your controller, do not use this decorator. Mixing the decorators Any number of these decorators can be applied to the same controller. The following is posible: @verify(model_inst=ModelValidator(MockModel, get_by='slug')) @require(model_inst='can_view', user='is_authenticated') @verify_post.single(ThingForm, create_thing) @render_with('some_template.html') def controller(request, model_inst): return model_inst.message The principle here is that if a decorator depends on the conversions of another, it should come after it. Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/django-drapes/
CC-MAIN-2017-34
en
refinedweb
Read and write to Amazon S3 using a file-like object Read and write files to S3 using a file-like object. Refer to S3 buckets and keys using full URLs. The underlying mechanism is a lazy read and write using cStringIO as the file emulation. This is an in memory buffer so is not suitable for large files (larger than your memory). As S3 only supports reads and writes of the whole key, the S3 key will be read in its entirety and written on close. Starting from release 1.2 this read and write are deferred until required and the key is only read from if the file is read from or written within and only updated if a write operation has been carried out on the buffer contents. More tests and docs are needed. Requirements boto Usage Basic usage: from s3file import s3open f = s3open("") f.write("Lorem ipsum dolor sit amet...") f.close() with statement: with s3open(path) as remote_file: remote_file.write("blah blah blah") S3 authentication key and secret may be passed into the s3open method or stored in the boto config file.: f = s3open("", key, secret) Other parameters to s3open include: - expiration_days - Sets the number of days that the remote file should be cached by clients. Default is 0, not cached. - private - If True, sets the file to be private. Defaults to False, publicly readable. - content_type - The content_type of the file will be guessed from the URL, but you can explicitly set it by passing a content_type value. - create - New in version 1.1 If False, assume bucket exists and bypass validation. Riskier, but can speed up writing. Defaults to True. Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/python-s3file/
CC-MAIN-2017-34
en
refinedweb
Hi Dennis, We often use a bundle db PM in a cluster and I've never seen the problem you describe. It woul help if you can provide more detailed logging, and which version of Jackrabbit are you using? Best regards, Martijn On Wed, Dec 16, 2009 at 12:18 AM, Dennis van der Laan <d.g.van.der.laan@rug.nl> wrote: > Dennis van der Laan wrote: >> Hi, >> >> I have two identical machines (A and B) running Jackrabbit 1.6.0 on >> Tomcat 6.0. I made an empty folder on each machine, containing only a >> repository.xml file, again both identical (using System properties to >> set the cluster ID). The repository configuration uses a bundle PM >> (Oracle 10g database) and a local filesystem for all components. >> >> When I start A, the repository tables are created in the database and >> all works well. After the repository is initialized, I add some custom >> namespaces and nodetypes, and create a basic folder hierarchy. >> Then, when I start B, I get) >> >> When I stop both A and B, and starting them again, A can startup again >> but B still gives the exception. >> >> I removed all tables from the database and re-created the repository >> folders on both machines and inverted the startup: first I started B and >> then I started A. Now B starts up fine, but A gives the exception. >> >> In the LOCAL_REVISIONS table I can see both cluster instances add their >> revision (revision 12 for the started repository and 0 for the failed >> repository). >> >> What am I doing wrong here? I found an issue involving LockFactory >> exceptions, but they all had to do with starting and stopping multiple >> repositories in the same VM or concurrently on the same machine. >> > I changed the repository configuration from using a bundle database PM > to a simple database PM (from > org.apache.jackrabbit.core.persistence.bundle.OraclePersistenceManager > to org.apache.jackrabbit.core.persistence.db.OraclePersistenceManager). > Now it works! But a bundle PM should have better performance. Is this a > bug? Is anybody else using a bundle database PM in a cluster configuration? > > Thanks, > Dennis > > -- > Dennis van der Laan > >
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200912.mbox/%3Cdc0c11da0912160018y754a7eefw2d22d040520fc6d0@mail.gmail.com%3E
CC-MAIN-2017-34
en
refinedweb
Lambda expression A Lambda expression is nothing but an Anonymous Function, can contain expressions and statements. Lambdaexpressions can be used mostly to create delegates or expression tree types.Lambda expression uses lambda operator => and read as 'goes to' operator. Left side of this operator specifies the input parameters and contains theexpression or statement block at the right side. The basic syntax is: (commaseparated parameters)=> {semicolon terminated statement list;} Where The=> symbol is read “becomes” to indicate that the parameters are transformedinto the actions. A lambda expression shares all its characteristics withanonymous methods. Example: Exp = Exp/10; Now, let see how we can assign the above to a delegate and create an expressiontree: //This needs System.Linq.Expressions using System.Linq.Expressions; delegate int funDelegate(int intMyNum); static void Main(string[] args) { //assignlambda expression to a delegate: funDelegatemyDelegate = Exp => Exp / 12; intnVal = myDelegate(120); Console.WriteLine("Output {0}", nVal); Console.ReadLine(); //Createan expression tree type //Thisneeds System.Linq.Expressions Expression<funDelegate> ExpDel = Exp => Exp / 12; } Output: Output 10 Note: The => operator has the same precedence as assignment (=) and isright-associative. Lambdas are used in method-based LINQ queries as arguments to standard queryoperator methods such as Where. Thanks.. It is great to associate with such a blog.
https://www.mindstick.com/blog/84/lambda-expression-in-c-sharp
CC-MAIN-2017-34
en
refinedweb
CONGRESS OF THE UNITED STATES CONGRESSIONAL BUDGET OFFICE - Peregrine Harper - 2 years ago - Views: Transcription 1 CONGRESS OF THE UNITED STATES CONGRESSIONAL BUDGET OFFICE A CBO PAPER JULY 2005 Effects of the Federal Estate Tax on Farms and Small Businesses Top: Photodisc/GettyImages; Bottom: Ron Nichols/USDA 2 3 A CBO PAPER Effects of the Federal Estate Tax on Farms and Small Businesses July 2005 The Congress of the United States O Congressional Budget Office 4 Note Numbers in the text, tables, and figures of this report may not add up to totals because of rounding. 5 Preface Critics of the federal estate tax argue that it can hinder families who wish to pass on a farm or small business, because heirs must sometimes liquidate the farm or business to pay the tax. This Congressional Budget Office (CBO) paper prepared at the request of the Ranking Democratic Member of the Senate Finance Committee examines the effects of the estate tax on small businesses and family farms, focusing on how it might alter the behavior of farmers and small-business owners during their lives and on the extent to which their estates have enough liquid assets to pay the estate taxes owed. The paper also looks at the impact on those groups of setting the amount of assets exempt from the estate tax at $1.5 million, $2 million, or $3.5 million. In keeping with CBO s mandate to provide objective analysis, this paper makes no recommendations. Robert McClelland, formerly of CBO s Tax Analysis Division, wrote the paper with additional supporting analysis from Ed Harris under the direction of Roberton Williams and Thomas Woodward. Ben Vallis performed some of the computations used in the analysis, and Perry Beider provided useful comments. Christian Spoor edited the paper, and Loretta Lettner proofread it. Denise Jordan-Williams prepared early drafts of the text, tables, and figures. Maureen Costantino produced the cover and prepared the report for publication. Lenny Skutnik produced the printed copies, and Annette Kalicki and Simone Thomas prepared the electronic version for CBO s Web site (). July 2005 Douglas Holtz-Eakin Director 6 7 CONTENTS Summary vii Provisions of the Estate Tax That Affect Farms and Small Businesses 2 What Is a Small Business? 3 Potential Effects of the Estate Tax on the Behavior of Farmers and Business Owners 4 Why Do People Accumulate? 4 Lessons from the Income Tax 6 Affordability of the Estate Tax 8 Characteristics of Estates Filing Returns in 1999 and Estates with Insufficient Liquid to Pay the Estate Tax 12 Effects of Permanently Raising the Exemption Amount 13 Appendix: Translating the Estate Tax into an Income Tax 17 8 vi EFFECTS OF THE FEDERAL ESTATE TAX ON FARMS AND SMALL BUSINESSES Tables 1. Scheduled Changes in Tax Rates and Exemption Amounts for Estate and Gift Taxes Under EGTRRA 2 2. Income Tax Rates Equivalent to a 43 Percent or 14 Percent Estate Tax 6 3. Characteristics of Estates That Filed Estate Tax Returns in 1999 or Estates Filing Estate Tax Returns in 1999 or 2000, by Decedent s Marital Status Common Occupations and Industries of Decedents Whose Estates Filed Estate Tax Returns in Characteristics of Farmers and Small-Business Owners Estates That Filed Estate Tax Returns in Minority Discounts Claimed by Estates Filing Estate Tax Returns in 2000, by Type of Asset Number of Estates Filing Returns and Number with Insufficient Liquidity to Pay the Estate Tax in 2000, Under Various Exemption Levels Income Tax Rates Equivalent to the Estate Tax, Under Various Exemption Levels, for Estates Claiming the QFOBI Deduction in A-1. Income Tax Rates Equivalent to a 43 Percent or 14 Percent Estate Tax, by Rate of Return and Years Until Death 18 Figures 1. Distribution of Gross Value and Estate Tax Liability of Estates Filing Estate Tax Returns in of Estates Filing Estate Tax Returns in 1999 or Boxes 1. Estate Taxes Levied by States 3 2. How the Estate Tax Defines a Family-Owned Business 5 3. Estimating the Number of Estates Belonging to Farmers 8 9 Summary Recent discussion of the federal estate tax has focused in part on how it affects family farms and small businesses particularly the possibility that having to pay the tax might jeopardize those operations. Analysis by the Congressional Budget Office (CBO) and others points to few strong conclusions, both because available evidence is limited and because existing tax data make it difficult to determine which estates are those of farmers or smallbusiness owners. 1. The estate might also have to pay income taxes, but this analysis focuses only on estate taxes.). Possible Effects of the Estate Tax on Entrepreneurship Economic studies have had limited success in identifying how the estate tax may influence the behavior of farmers and small-business owners. Those effects depend on the underlying motives of the individual entrepreneur, which are themselves unclear. At one extreme, if business owners or farmers leave estates only because they die before managing to spend all of their accumulated assets, the existence of the estate tax will have no impact on their entrepreneurial behavior. However, if they intend all along to leave estates and thereby pass on active businesses, the estate tax could affect how much they invest in their farms or businesses. Because the tax reduces the after-tax return on investment, it could lead people to invest less than they would otherwise (or leave them with less money to invest if they held assets in liquid form or bought life insurance to cover future estate tax pay- 10 viii SUMMARY ments). Conversely, because the tax reduces the net size of estates, people might choose to save and invest more to offset it. Unfortunately, research into the estate tax has not reached strong conclusions about the relative strength of such incentives. A large body of research has, however, found that income taxes may discourage entrepreneurial effort. Because the estate tax can be seen as equivalent to an additional income tax, the observed reactions of farmers and business owners to the income tax suggest that the estate tax may also reduce entrepreneurial effort.). According to those definitions, the estates of farmers were smaller than the average estate in 1999 and 2000, and estates claiming the QFOBI deduction were generally larger than average. That situation, combined with the progressivity of the estate tax, meant that the typical effective tax rate for farmers (the share of wealth they paid in estate taxes) was lower than the average for all estates, whereas the typical effective tax rate for estates claiming the QFOBI deduction exceeded that average.. For returns filed in 2000, the threshold for filing was gross assets worth at least $650,000 or $675,000, depending on the year of death less than half the 2005 level of $1.5 million. Had the current filing threshold been in effect in 2000, far fewer estates, especially those of farmers, would have had to file estate tax returns. The scheduled expiration of EGTRRA in 2011 has engendered uncertainty and led to proposals that would permanently extend the higher exemption levels and lower tax rates in EGTRRA. This analysis looked at the effects of freezing the exemption level at three amounts: $1.5 million, $2.0 million, or $3.5 million. Any of those exemption levels, along with a 48 percent tax rate and a large QFOBI deduction, would substantially reduce the number of small businesses and farmers affected by the estate tax. 11 Effects of the Federal Estate Tax on Farms and Small Businesses The United States has had an estate tax since 1916, when the tax was imposed to offset a decline in tariff revenues caused by World War I. 1 Lawmakers have altered the estate tax many times, raising the top statutory rate to as much as 77 percent and increasing or decreasing the amount of assets exempt from taxation. Most recently, the Taxpayer Relief Act of 1997 (TRA-97) and the Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA) modified the estate tax in ways that will cause it to change every year through Under those laws, a unified credit applies to the sum of all taxable gifts made during a taxpayer s lifetime plus the value of assets left at death. 2 In 2005, the credit effectively shelters up to $1.5 million from the unified estate and gift taxes. 3 Only estates worth more than that amount must file an estate tax return, a provision that leaves the vast majority of estates exempt fewer than 2 percent have to file returns. In calculating whether those estates owe estate taxes, various deductions and exemptions are permitted. For example, a surviving spouse can inherit an unlimited amount without paying taxes. That option, combined with the use of a bypass trust, allows married couples to double the amount of wealth that can 1. For a history of the estate tax through 2000, see Joint Committee on Taxation, Description and Analysis of Present Law and Proposals Relating to Federal Estate and Gift Taxation, JCX (March 14, 2001). 2. Taxpayers are currently allowed to give $11,000 annually to each of any number of recipients without paying gift taxes (a threshold that rises by $1,000 for every 10 percent increase in the consumer price index). The unified credit applies to any gifts in excess of the annual limit. 3. Taxable gifts that cumulatively total more than $1 million are subject to gift taxes. At death, estate taxes are levied on the sum of cumulative taxable gifts and the value of the taxable estate. The estate tax liability on that sum is reduced by any gift taxes paid previously. go to their heirs without taxation. 4 bequeathed to qualified charities are deductible from the value of the estate, as are such items as funeral expenses and executors commissions. The resulting net estate is subject to tax rates of 43 percent to 47 percent (depending on its size); if the amount of tax owed exceeds the unified credit, the estate must pay the excess. 5 In recent years, just under half of the estates filing returns have been liable for estate taxes. The amount of assets exempt from the estate tax has been raised and the top tax rate reduced in recent years under TRA-97 and EGTRRA. Those trends are scheduled to continue for the next five years (see Table 1). TRA-97 initially sheltered up to $600,000 from taxation, an amount that was scheduled to rise to $1 million by 2006 before EGTRRA accelerated the increase. Under TRA-97, estate tax rates ranged from 37 percent to 55 percent, although a 5 percent surtax on estates valued between $10 million and $ million phased out the benefit of the unified credit, effectively raising the marginal tax rate (the rate on an additional dollar of wealth) to 60 percent for estates in that range. 4. In essence, a trust is created at the first spouse s death with assets equal to the amount exempt from taxation. The surviving spouse is the beneficiary of the trust, with the heirs becoming the beneficiaries when the surviving spouse dies. Because the size of the trust equals the exemption level, creation of the trust does not trigger the estate tax, and wealth above the exemption amount may be passed on to the spouse tax-free through the unlimited spousal deduction. When the surviving spouse dies, tax is due on the wealth bequeathed to heirs in excess of the exemption level, but none is due on the trust because it is not part of the second spouse s estate. 5. For more details about the estate tax, see Jane G. Gravelle and Steven Maguire, Estate and Gift Taxes: Economic Issues, Report for Congress RL30600 (Congressional Research Service, updated June 24, 2005). 12 2 EFFECTS OF THE FEDERAL ESTATE TAX ON FARMS AND SMALL BUSINESSES Table 1. Scheduled Changes in Tax Rates and Exemption Amounts for Estate and Gift Taxes Under EGTRRA Estate Tax Gift Tax Lowest Tax Rate Highest Tax Rate Exemption Amount Highest Tax Rate Exemption Amount (Percent) (Percent) (Millions of dollars) (Percent) (Millions of dollars) 2002 a a a a b 0 0 n.a After /60 c /60 c 1.0 Source: Congressional Budget Office. Note: EGTRRA = Economic Growth and Tax Relief Reconciliation Act of 2001; n.a. = not applicable. a. Between 2002 and 2005, the credit for estate taxes levied by states was reduced by 25 percentage points each year and replaced by a deduction. Thus in 2005, estates could only deduct estate taxes paid to states. (See Box 1.) b. Under EGTRRA, the estate tax will be repealed in 2010, and the maximum tax rate on gifts will equal the top individual income tax rate, 35 percent. c. Estates valued at $10 million to $ million are subject to a maximum tax rate of 60 percent in order to eliminate the value of the exempt amount of assets. Estates valued at more than $ million are taxed at an average rate of 55 percent. Under EGTRRA, the maximum tax rate was lowered to 50 percent in 2002; it is scheduled to fall to 45 percent by The amount of wealth exempt from taxation rose to $1 million in 2002 and $1.5 million in 2004 and will increase to $2 million in 2006 and $3.5 million in In 2010, the estate tax will be eliminated. The following year, however, with the scheduled expiration of EGTRRA, the estate tax will be reinstated at the levels defined in TRA-97: an effective exemption of $1 million and a maximum tax rate of 55 percent. (EGTRRA also affected the estate taxes levied by many states; for details, see Box 1.) Critics of the estate tax argue that it may pose a special hardship for families trying to pass along a farm or small business. This analysis evaluates the evidence of the tax s effects on those operations, focusing on how it might influence the behavior of farmers and small-business owners during their lives and the extent to which their estates lack enough liquid assets to pay estate taxes. The analysis also looks at how raising the exemption amount would affect the number of estates that lack sufficient liquid assets to cover their estate tax liabilities. Provisions of the Estate Tax That Affect Farms and Small Businesses Lawmakers first made special provisions for small businesses under the estate tax in 1958 when the Small Business Tax Revision Act allowed some estates containing closely held businesses to pay their estate taxes over 10 years. 6 Subsequent laws added other provisions targeted toward estates that include farms or small businesses. B The Tax Reform Act of 1976 allowed estates to value farms and closely held businesses at their current use value rather than their highest and best use value, with the stipulation that heirs keep the property in its current use for at least 15 years. The law also extended to 14 years the period over which estates with closely held business assets could pay estate taxes. 6. A closely held business is defined as either a sole proprietorship or a partnership or corporation in which one-fifth of the business s value is included in determining the gross estate or in which there are 45 or fewer owners. The value of the business is defined in terms of the total capital (for partnerships) or the voting stock (for corporations). 13 EFFECTS OF THE FEDERAL ESTATE TAX ON FARMS AND SMALL BUSINESSES 3 Box 1. Estate Taxes Levied by States In addition to the federal government, many states impose taxes on large estates. Prior to the enactment of the Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA), every state and the District of Columbia levied a tax on estates that was at least equal to the amount of state-level estate taxes allowed as a credit on the federal estate tax return. Most states used that federal credit to determine the size of their estate tax levy: 32 states and the District of Columbia defined their tax levels on the basis of the federal tax credit in effect on the date of a person s death, and five states used the federal credit in effect on a specific date. The other 13 states either assessed inheritance taxes on heirs or charged their own estate tax and used the federal credit as a minimum tax in cases in which the state tax was less than the federal credit. EGTRRA phased out the federal credit for state estate taxes over four years, replacing it with a deduction in Eliminating the credit meant that state estate taxes would disappear for the 32 states and the District of Columbia that tied their tax directly to the federal credit. Seven of those states and the District acted to decouple their tax from the federal credit, redefining the levy to equal the federal credit on a date before the passage of EGTRRA. The other 25 states allowed their estate taxes to phase out with the federal credit and thus are levying no state-level estate tax in B The Economic Recovery Act of 1981 shortened to 10 years the period during which heirs had to continue using farms or closely held businesses to be able to value assets at their current use and increased to $750,000 the maximum reduction from using that valuation; liberalized the conditions under which estates with closely held businesses could pay estates taxes over time; and extended the opportunity to pay taxes over time to certain holding companies. B TRA-97 provided an exclusion of up to $675,000 for qualified family-owned business-interest (QFOBI) assets, in addition to the basic exclusion available to all estates. 7 The current-use provisions in the 1976 law are one method whereby estates can lower their tax liability by discounting (claiming a reduced value of) assets that are subject to the estate tax. Another approach, which is particularly important to family farms and small businesses, 7. EGTRRA implicitly repealed the exclusion for family-owned business interests in 2004 because the amount of the effective exemption in that year $1.5 million exceeded the $1.3 million previously available to small businesses by combining the QFOBI and the general estate tax exemption. EGTRRA continued the provisions allowing special valuation and tax-deferral options for farms and small businesses, however. involves minority discounts. Those discounts reflect the fact that a minority share in an ongoing business operation is generally worth less than the equivalent share of the market value of the whole business, because the majority owners can act in ways that adversely affect the value of the minority owner s share. (For example, if the majority owners were also officers of the company, they could enact policies that would increase their income at the expense of minority owners assets.) Heirs to a family farm or small business often receive minority interests in the operation; in that case, the estate can reduce its tax liability by claiming minority discounts. What Is a Small Business? Examining how the estate tax affects small businesses is hampered by the lack of a clear consensus about what constitutes a small business. The Small Business Administration, for example, defines a small business as one that is independently owned and operated and that meets certain limits on the number of employees and average annual revenue. Those limits vary by industry, however, ranging from 100 to 1,500 employees and from $750,000 to $28.5 million in annual revenue. 8 Similar 8. See Small Business Administration, Size Standards, at app1.sba.gov/faqs/faqindex.cfm?areaid=15. 14 4 EFFECTS OF THE FEDERAL ESTATE TAX ON FARMS AND SMALL BUSINESSES variation exists in the standards used in the tax code: a small business can have gross receipts of no more than $500,000 for calculating certain excise taxes but up to $50 million for some stock sales. 9 Laws governing the federal income tax establish three types of small businesses: S corporations, limited partnerships, and sole proprietorships. S corporations and limited partnerships are generally treated as pass-through entities, meaning that income from the business is taxed at the individual level, not the corporate level. An S corporation may have no more than 35 owners of its stock; no such limit exists for a limited partnership. A sole proprietorship is any taxpayer who has income from a business and files a Schedule C along with his or her federal income tax return. Sole proprietors must pay payroll taxes (both the employer s and employee s shares) on their earnings but may use business and home-office deductions not available to regular wage and salary workers. 10 Laws governing the federal estate tax define two forms of small businesses: family-owned businesses (which are eligible for the QFOBI deduction) and closely held businesses. A family-owned business must satisfy a lengthy set of requirements on ownership and income (see Box 2). A closely held business has no constraints on its size but faces other limits. All sole proprietorships qualify as closely held businesses, but partnerships and corporations must meet one of two requirements: the estate must own at least 20 percent of the business s value or the business must have no more than 45 partners or shareholders. The variety of definitions and forms of small business that exist precludes a comprehensive examination of the effect of the estate tax on small businesses. Instead, this analysis examines the business forms that have been previously studied or for which data are available. For example, when using data from tax returns, the analysis defines a small business as one for which an estate claimed a QFOBI deduction. 9. See Joint Committee on Taxation, Overview of Present Law and Selected Proposals Regarding the Federal Income Taxation of Small Business and Agriculture, JCX-19-0 (March 2001). 10. C corporations are omitted here because they have no limit on their number of shareholders and are not pass-through entities. Another type of business, a limited liability corporation, is defined by state law and may be a partnership or an S corporation. Potential Effects of the Estate Tax on the Behavior of Farmers and Business Owners How farmers and owners of small businesses react to the estate tax is a central consideration in determining its effects. One possibility is that, like others who do not expect their estates to be large enough to be subject to the tax, people in those groups do not alter their behavior in response to the estate tax. Alternatively, like others who expect to owe the tax, they may choose to save more than, less than, or the same as they would have otherwise. In addition, they may have different motives than the rest of the population or face different incentives as a result of the targeted provisions of the estate tax. Little direct evidence exists about the effects of the estate tax on entrepreneurial effort. However, like the income tax, the estate tax may reduce business investment and hiring by farmers and business owners to some degree and thus slow the rate of growth of their enterprises. Why Do People Accumulate? The estate tax potentially reduces the inheritance available to heirs. Whether the tax affects decisions about how much to work and save depends on people s motives. At one extreme, people may save only to meet their own retirement needs and leave estates because they unintentionally fail to spend all of their assets. In that case, estates will not play a role in their planning, so they should act no differently in the face of the estate tax. At the other extreme, people may intend to leave the largest possible estate to their heirs. In that case, by raising the cost of leaving assets to heirs, the estate tax may lead them to work, save, and invest less during their life. Or, by reducing the after-tax size of the inheritances that heirs receive, it may lead such savers to work, save, and invest more to compensate for the loss to taxes. Observed behavior offers mixed evidence about people s motives in regard to their potential estates. On the one hand, the very existence of bequests intentional or otherwise may argue that saving is not driven solely by one s needs during one s lifetime. People can purchase annuities, which give them regular payments until death and leave nothing to their heirs, or reverse mortgages, which provide them with a stream of income in life at the expense of not passing their home equity on to their 15 EFFECTS OF THE FEDERAL ESTATE TAX ON FARMS AND SMALL BUSINESSES 5 Box 2. How the Estate Tax Defines a Family-Owned Business To qualify as a family-owned business and thus be able to claim the qualified family-owned businessinterest (QFOBI) deduction on an estate tax return a business owned at least partly by an estate must be either a sole proprietorship or an entity to which one of the following three conditions applies: B At least 50 percent of the entity is owned by the decedent or members of the decedent s family; B At least 70 percent of the entity is owned by members of two families, and at least 30 percent is owned by the decedent or members of the decedent s family; or B At least 90 percent of the entity is owned by members of three families, and at least 30 percent is owned by the decedent or members of the decedent s family. The business must satisfy other requirements as well: B It cannot have been publicly traded within three years of the decedent s death. B No more than 35 percent of the business s adjusted ordinary gross income for the year of the decedent s death can be income from a personal holding company. B The decedent must have been a citizen or resident of the United States at the date of death, and the business must be located in the United States. B The business interest must be includable in the gross estate. B The interest must have passed to or been acquired by a qualified heir from the decedent. B The adjusted value of the qualified family-owned business interest must exceed 50 percent of the adjusted gross estate. (That value is reduced to the extent that the business holds passive assets or excess cash or marketable securities.) B The decedent or a member of the decedent s family must have owned the business for five of the eight years before the decedent s death. In addition, the decedent s family must have materially participated in the business for five of those eight years. heirs. 11 The infrequency with which people choose those investments (even in light of their costs from adverse selection) suggests that individuals accumulate assets with the intention of leaving bequests. On the other hand, surveys of the wealthy indicate that passing on assets to heirs is not their primary reason for saving. 12 Moreover, people who want to maximize their 11. See Edward J. McCaffery, Grave Robbers: The Moral Case Against the Death Tax, Tax Notes, vol. 85, no. 11 (December 13, 1999), pp See Christopher Carroll, Why Do the Rich Save So Much? in Joel Slemrod, ed., Does Atlas Shrug? The Economic Consequences of Taxing the Rich (New York: Russell Sage and Harvard University Press, 2000), pp bequests should act to minimize the estate and gift taxes they will pay. But analysis has shown that many individuals fail to take obvious steps to reduce those taxes; for example, many people whose estates will be taxed do not use the annual gift tax exemption of $11,000 per recipient per donor Because $11,000 may be passed by each parent to each heir taxfree, two parents leaving an estate to two heirs could give them up to $44,000 per year without taxation. However, parents typically give far less than that maximum. See James Poterba, Estate and Gift Taxes and Incentives for Inter Vivos Giving in the United States, Journal of Public Economics, vol. 79, no. 1 (January 2001), pp ; and Kathleen McGarry, The Cost of Equality: Unequal Bequests and Tax Avoidance, Journal of Public Economics, vol. 79, no. 1 (January 2001), pp 16 6 EFFECTS OF THE FEDERAL ESTATE TAX ON FARMS AND SMALL BUSINESSES Lessons from the Income Tax The estate tax could affect farmers and business owners differently from other people because of the business aspects of their wealth accumulation. In one survey, some small-business owners stated that the high levels of the estate tax were powerful disincentives to invest and hire new employees. 14 Economic studies of the estate tax have not reached strong conclusions about its effects on entrepreneurial behavior. However, estate taxes reduce after-tax returns on investment just as income taxes do, and a large body of research suggests that the income tax discourages entrepreneurial effort to some degree. 15 To cast the burden of the estate tax in a more familiar form, the Congressional Budget Office (CBO) translated the estate tax into its income tax equivalent. That translation involved calculating what income tax rate, if applied annually to an entrepreneur s income for a certain number of years, would result in the same amount of assets after death as an estate tax with a flat 43 percent rate (the lowest applicable rate in 2005 under EGTRRA). Although actual situations would be complicated by issues 14. See Joseph H. Astrachan and Robert Tutterow, The Effect of Estate Taxes on Family Business: Survey Results, Family Business Review, vol. 9, no. 3 (September 1996), pp See Donald Bruce, Effects of the United States Tax System on Transitions into Self-Employment, Labour Economics, vol. 7, no. 5 (2000), pp ; Robert Carroll and others, Personal Income Taxes and the Growth of Small Firms, in James Poterba, ed., Tax Policy and the Economy (Cambridge, Mass.: MIT Press, 2001), pp ; Robert Carroll and others, Entrepreneurs, Income Taxes and Investment in Joel Slemrod, ed., Does Atlas Shrug? The Economic Consequences of Taxing the Rich (New York: Russell Sage and Harvard University Press, 2000), pp ; Robert Carroll and others, Income Taxes and Entrepreneurs Use of Labor, Journal of Labor Economics, vol. 18, no. 2 (2000), p ; Julie B. Cullen and Roger H. Gordon, Taxes and Entrepreneurial Activity: Theory and Evidence in the U.S., Working Paper No (Cambridge, Mass.: National Bureau of Economic Research, June 2002); Robert W. Fairlie and Bruce D. Meyer, Trends in Self-Employment Among White and Black Men: , Journal of Human Resources, vol. 35, no. 4 (2000), pp ; William M. Gentry and R. Glenn Hubbard, Tax Policy and Entry Into Entrepreneurship (draft, June 2004); Douglas Holtz-Eakin, John W. Phillips, and Harvey S. Rosen, Estate Taxes, Life Insurance and Small Business, Review of Economics and Statistics, vol. 83, no. 1 (February 2001), pp ; David Joulfaian and Mark Rider, Differential Taxation and Tax Evasion by Small Business, National Tax Journal, vol. 51, no. 4 (December 1998), pp ; and Herbert J. Schuetze, Taxes, Economic Conditions and Recent Trends in Male Self-Employment: A Canada-U.S. Comparison, Labour Economics, vol. 7, no. 5 (2000), pp Table 2. Income Tax Rates Equivalent to a 43 Percent or 14 Percent Estate Tax (Percent) 43 Percent 14 Percent Estate Tax a Estate Tax b Rate of 20 Years 30 Years 20 Years 30 Years Return Until Until Until Until on Capital Death Death Death Death Source: Congressional Budget Office. Note: Each entry equals the annual income tax rate imposed on capital income that would yield the same total asset value at death as assets subject to an estate tax of either 43 percent or 14 percent (but not subject to income taxes), assuming a given rate of return on capital and a given life expectancy. a. The minimum estate tax rate in b. The typical estate tax that estates would have owed had the tax rates of 2005 been in effect in such as a person s reason for leaving an estate and by uncertainty about when the person will die, CBO made several simplifying assumptions for the analysis: that all income is invested at a fixed rate of return, that all returns are reinvested in the farm or business, and that the owner knows when he or she will die. Applying that translation to predicted estate taxes, as calculated using a simplified version of CBO s estate tax model, provides estimates of the equivalent income tax rates that an entrepreneur faces. (The appendix explains CBO s method in more detail.) In some circumstances, the estate tax is equivalent to a high marginal income tax rate. For example, a 31 percent income tax imposed annually on earnings from an investment that yielded 6 percent a year for 20 years would result in the same after-tax wealth as a 43 percent tax on that investment 20 years from now (see Table 2). Thus, for a person who expects to live 20 years, a 43 percent estate tax is equivalent to a 31 percent income tax (assuming a 6 percent rate of return). 16 Higher rates of return and longer life spans are both associated with lower income tax rates, because deferring 16. By comparison, the top statutory income tax rate is 35 percent. 17 EFFECTS OF THE FEDERAL ESTATE TAX ON FARMS AND SMALL BUSINESSES 7 taxes rather than paying them annually yields benefits. Under an income tax, realized returns from an investment are taxed before they are reinvested, whereas the estate tax only taxes those returns at the end of the owner s life. In essence, returns grow on a pretax basis with respect to the estate tax, yielding a greater after-tax estate than would a tax of the same rate that was applied as returns were reinvested. A greater rate of return increases that gap, so a given estate tax translates into a lower equivalent income tax when rates of return are higher. 17 For example, a life expectancy of 30 years and a rate of return of 6 percent suggest an equivalent income tax of 26 percent (see Table 2). But a 10 percent rate of return about the annual nominal increase in stock indexes since World War II over 30 years suggests an equivalent income tax of 19 percent. Looking at the income tax rates implied by a 43 percent estate tax is appropriate for entrepreneurs whose net worth is already large enough that their estates would incur tax liability if they died immediately, because every additional dollar they saved would be taxed at a marginal rate of 43 percent or more under the estate tax. Many farms and small businesses are currently worth less than $1.5 million, however, and owners of those enterprises would face no estate tax were they to die immediately. If the owner s decision to reinvest in the farm or business determines whether an estate will exceed the filing threshold for the estate tax, then the average tax rate may be a more appropriate comparison than the marginal tax rate. Had the estate tax rates of 2005 been in effect in 2000, the typical estate tax (for those owing tax under current exemptions and rates) would have been about 14 percent of the gross estate. That rate implies much lower income tax rates for every rate of return (see Table 2). For example, a person expecting to live 20 years and earning a 6 percent return faces estate taxes equivalent to an income tax of 9 percent; with a life expectancy of 30 years and a 10 percent rate of return, the estate tax is equivalent to only a 5 percent income tax. A more realistic picture comes from simulating equivalent income tax rates using information on actual estates that filed estate tax returns in 2000 and claimed QFOBI deductions. The question posed in that analysis is, What income tax rate applied to an investment made earlier in 17. The inverse relationship between rate of return and equivalent income tax rate also implies that to the extent that higher rates of return are associated with greater risk, an estate tax encourages risk-taking more than an income tax does. a decedent s life would yield the same after-tax wealth at the time of death as the person s actual estate, net of estate taxes? To simulate that rate, CBO assumed that the person invested an amount at age 45 large enough to grow, by 4 percent annually, to the gross estate reported on the estate tax return. 18 The analysis suggests that under 2000 estate tax law, two-thirds of such estates with gross assets of more than $675,000 (the filing threshold that year) would have owed no estate taxes, so the equivalent income tax for them was zero. On average for all such estates filing returns in 2000, estate taxes were equivalent to a 4 percent income tax applied annually over the simulated investment period. For only those estates with estate tax liability, the average equivalent income tax rate was 11 percent, and the median rate was 9 percent. The estate tax differs from the income tax in that it comes due not at a fixed date but rather at an unknown time in the future. Because the returns and assets of an enterprise vary over time, the amount of estate tax due also varies. 19 That variation could be particularly risky for a farmer or business owner: if the estate does not hold enough liquid assets to pay the estate tax, then heirs could be forced to sell the farm or business. That problem can be ameliorated with life insurance, although predicting what the value of the business will be at the time of the owner s death may be difficult. 20 However, the proceeds from life insurance are themselves subject to estate taxes, unless owners employ devices such as an irrevocable life insurance trust. 21 Alternatively, a farmer or business owner might elect to keep enough liquid assets on hand to pay the estate tax, providing greater 18. With no knowledge of the amount or timing of actual investments, CBO assumed that the person made the full investment at age 45 and reinvested all returns in the farm or business. If death occurred before age 55, the analysis assumed that the investment took place 10 years before death. In all cases, the simulation assumed a 4 percent annual rate of return, roughly the historical average. 19. See James Poterba, The Estate Tax and After-Tax Investment Returns, in Joel Slemrod, ed., Does Atlas Shrug? The Economic Consequences of Taxing the Rich (New York: Russell Sage and Harvard University Press, 2000), pp See Douglas Holtz-Eakin, John W. Phillips, and Harvey S. Rosen, Estate Taxes, Life Insurance and Small Business, Review of Economics and Statistics, vol. 83, no. 1 (February 2001), pp Such devices must be used with caution because a business owner cannot borrow against an irrevocable life insurance trust, even if the survival of the business is at stake. 18 8 EFFECTS OF THE FEDERAL ESTATE TAX ON FARMS AND SMALL BUSINESSES Box 3. Estimating the Number of Estates Belonging to Farmers Along with using the number of estates claiming the qualified family-owned business-interest deduction to indicate small businesses, the Congressional Budget Office used two methods to estimate the number of farmers represented on estate tax returns. The broader measure defined a farmer as anyone who was reported to have worked in the agricultural crop or livestock industry or anyone whose occupation was listed as nonhorticultural farmer, farm worker, farm supervisor, or farm manager. That definition included people not usually considered farmers, such as bookkeepers and secretaries working for dairy farms, investors in farm real estate, and commodity brokers. The narrower measure defined a farmer as anyone who worked in one of those two industries and had one of those four occupations. The two definitions yielded similar samples of estates (see the table at right). Even that narrower definition may be far too broad, however: almost 40 percent of the estates in that sample reported no farm assets. Defining a farmer only as a nonhorticultural farmer working in the agricultural crop or livestock industry would substantially reduce the number of estates but not alter the conclusions of the analysis. Similarly, defining a farmer s estate as one in which farm assets accounted for at least 35 percent of the gross value of the estate would not qualitatively change the conclusions. (That definition would result in a sample size of about 5,500 estates in 2000.) Further, some estates may have listed farm assets in other categories, such as limited partnership assets. Because only the largest estates are required to file returns, the estates considered in this analysis belong to wealthy people in farming industries, not to subsistence farmers or migrant workers. Gross Estates of Farmers in 2000 Total Number of Estates Broad Sample Narrow Sample 5,308 4,641 Gross Value of Estate (Dollars) Average 1,814,000 1,801,000 Median 987, ,000 Standard deviation 19,737,000 20,861,000 Interquartile range a 647, ,000 5th percentile 660, ,000 95th percentile 3,182,000 3,035,000 Source: Congressional Budget Office based on data from the Internal Revenue Service s Statistics of Income files. a. The distance between the 75th percentile and the 25th percentile. flexibility in access to funds. Whether through life insurance premiums or personal saving, paying the estate tax can be translated from a lump-sum payment into a series of expenditures similar to regular income tax payments. 22 Affordability of the Estate Tax Unlike the issue of whether the estate tax influences behavior, which must be examined through surveys and economic modeling, the issue of whether estates can 22. Note that money allocated to paying estate taxes does not leave the economy, so there is little change in economic activity. Life insurance premiums and money deposited in financial institutions are both loaned and invested. afford to pay taxes can be addressed using more-concrete data. The estate tax return that must be filed within nine months of a person s death (if the gross value of the estate exceeds the filing threshold) contains a variety of information: the value of the estate before and after various credits and deductions; the decedent s occupation and industry; and the estate s assets, such as personal residence, business assets, liquid assets, and so forth. CBO used data from estate tax returns filed in 1999 and 2000 (the most recent years for which data were available when the analysis was conducted) to compare the size of estates left by farmers and small-business owners with those of the population at large and to compare estates tax liability with their liquid assets. 19 EFFECTS OF THE FEDERAL ESTATE TAX ON FARMS AND SMALL BUSINESSES 9 Several factors complicate those comparisons. First, the distribution of estates filing estate tax returns is extremely asymmetrical. The average size of an estate filing a return may therefore be a misleading indicator of the overall group, because a small number of very large estates can dramatically raise the average. For that reason, this analysis reports not only averages but also medians (the midpoint of a distribution) and other percentile statistics that shed more light on the distribution of estates. Second, the information about occupation, industry, and assets reported on estate tax returns makes it difficult to identify whether a decedent owned a family farm or small business. Thus, CBO had to make assumptions in classifying estates, and those classifications are only approximate (see Box 3 for more details). For the purposes of this analysis, a small-business estate is defined as one claiming a QFOBI deduction about 1 percent of the estates filing returns in A farm estate is one in which the decedent is identified as a farmer or farm worker of any kind about 4 percent to 5 percent of estates filing returns. Because of those data limitations, CBO s analysis may have omitted some estates that contained small businesses and may have included too few or too many family farms. Characteristics of Estates Filing Returns in 1999 and 2000 The distribution of assets reported on estate tax returns is highly skewed. In all, about 104,000 estates, with an average worth of $1.9 million, filed returns in 1999, and about 108,000 estates, with an average worth of $2.0 million, filed returns in 2000 (see Table 3). Those average values do not represent the typical estate: 80 percent of the estates that filed returns were worth less than the average. 24 The median estate filing a return had a net worth of about $1.0 million in 1999 and in (The filing 23. CBO used that definition rather than sole proprietorship because although it is possible to identify sole proprietors from income tax returns, the same is not the case with estate tax returns. Those returns need not note the presence of a Schedule C in a decedent s final income tax filing, and the required reporting of types of assets in an estate cannot reliably identify all sole proprietors. 24. That asymmetry can be seen another way: the 5th percentile of estates filing returns in 1999 was about $648,000, meaning that 95 percent of estates filing returns were at least that large. If the distribution of assets was symmetrical, the 95th percentile would be about $1.35 million; that is, only 5 percent of estates would exceed that amount. The actual 95th percentile is $4.7 million. Table 3. Characteristics of Estates That Filed Estate Tax Returns in 1999 or 2000 Estates Filing Tax Returns Total Number of Estates 103, ,322 Gross Value of Estate (Dollars) Average 1,899,000 2,024,000 Median 1,027,000 1,092,000 Standard deviation 8,770,000 10,016,000 Interquartile range a 861, ,000 5th percentile 648, ,000 95th percentile 4,700,000 4,924,000 Total Number of Estates Estates Owing Estate Tax b ,869 52,000 Gross Value of Estate (Dollars) Average 2,410,000 2,540,000 Median 1,171,000 1,231,000 Standard deviation 12,087,000 13,787,000 Interquartile range a 1,037,000 1,077,000 5th percentile 726, ,000 95th percentile 6,207,000 6,358,000 Amount of Tax Paid (Dollars) Average 460, ,000 Median 125, ,000 Standard deviation 2,360,000 1,939,000 Interquartile range a 304, ,000 5th percentile 7,000 6,000 95th percentile 1,682,000 1,762,000 Source: Congressional Budget Office based on data from the Internal Revenue Service s Statistics of Income files. Note: Estates are subject to the tax law in effect in the year of death, but they do not have to file estate tax returns until nine months after the date of death. As a result, returns filed in a given year may be subject to different tax law. Returns filed in 1999 or 2000 could claim different effective exemptions, depending on the date of death: $625,000 for 1998, $650,000 for 1999, or $675,000 for a. The distance between the 75th percentile and the 25th percentile. b. CBO included only estates that owed taxes on the estate remaining at death. Estates that had paid gift taxes but did not owe additional estate taxes upon death were excluded. There were fewer than 500 such estates in 2000. 20 10 EFFECTS OF THE FEDERAL ESTATE TAX ON FARMS AND SMALL BUSINESSES Table 4. Estates Filing Estate Tax Returns in 1999 or 2000, by Decedent s Marital Status Estates Filing Tax Returns Estates Owing Estate Tax Never Married 8,151 8,726 5,301 6,060 Married 45,378 48,198 6,078 5,824 Widowed 44,948 46,164 34,535 36,307 Separated, Divorced, or Unknown 5,516 5,234 3,956 3,808 Total 103, ,322 49,869 52,000 Source: Congressional Budget Office based on data from the Internal Revenue Service s Statistics of Income files. thresholds in those years ranged from $625,000 to $675,000, depending on a person s year of death.) More than half of the estates filing returns in 1999 and 2000 had a net value that was too low to owe any estate tax. The most common reason was the unlimited spousal bequest: almost three-quarters of decedents whose estates owed no tax were married. (See Table 4 for information on the marital status of decedents whose estates filed returns.) Fewer than half of estates that filed a return owed any tax, and those estates were generally larger than ones with no tax liability. The average size of those estates was $2.4 million in 1999 and $2.5 million in 2000, with median values about half as large (see Table 3). On average, their tax payments were about $460,000 in 1999 and $469,000 in The median payment was much smaller: about $125,000 in 1999 and $131,000 in The relatively large difference between the average tax paid and the median tax paid reflects the topheavy distribution of estates and the progressivity of the estate tax. Because the estate tax is progressive, larger estates pay a disproportionate share of estate taxes. In 1999, for example, the bottom 20 percent of estates that filed returns accounted for only about 7 percent of the total gross value of estates filing returns (see Figure 1). That bottom 20 percent of estates paid less than 1 percent of total estate taxes collected. Likewise, the top 50 percent of estates accounted for 79 percent of the gross value and paid 96 percent of the taxes. Even within that group, the distribution of wealth and estate taxes was extremely uneven. The 25. In those years, the average tax payment equaled 13 percent of the value of the estate, and median payment equaled 10 percent. richest 10 percent of estates filing returns held 45 percent of the wealth and paid two-thirds of the taxes, and the richest 2 percent of estates (those larger than $8.6 million) owned about 25 percent of the wealth and paid about 40 percent of all estate taxes. 26 (Total estate tax collections were $22.9 billion in 1999 and $24.4 billion in 2000.) 27 As noted above, tax laws allow wide variation in the size of a small business when size limits exist at all which means that there is no inherent reason to presume that the typical estate of a small-business owner that files an estate tax return will be either smaller or larger than the typical estate filing a return. Moreover, although about 6 percent of estate tax returns report farming, forestry, or fishing as the decedent s occupation, and a similar share lists agricultural production as the decedent s industry (see Table 5), nothing in the definitions of those terms limits them to either small family farms or large agribusinesses. In 2000, estates that claimed the QFOBI deduction were larger than a typical estate: their average value was $3.1 million (compared with $2.0 million for all estates filing returns), and their median value was $1.3 million (compared with $1.1 million for all estates). By contrast, people identified as farmers or farm workers left estates that were smaller than a typical estate: an average value of $ It is important to note that those statistics include only the wealth of estates filing returns a small fraction of all personal wealth in the United States. 27. Barry W. Johnson and Jacob M. Mikow, Federal Estate Tax Returns, , Statistics of Income Bulletin (Spring 2002), Figure M, p. 145, available at pdf. WEALTH TRANSFER TAXES WEALTH TRANSFER TAXES How do the estate, gift, and generation-skipping transfer taxes work?... 1 Who pays the estate tax?... 2 How many people pay the estate tax?... 2 How could we reform the estate tax?... CRS Report for Congress Order Code RL33718 CRS Report for Congress Received through the CRS Web Calculating Estate Tax Liability: 2001 to 2011 and Beyond November 3, 2006 Nonna A. Noto Specialist in Public Finance Government Highlights of the 2010 Tax Relief Act On December 7, 200, President Barack Obama signed into law H.R. 4853, the Tax Relief, Unemployment Insurance Reauthorization, and Job Creation Act of 200 (the 200 Tax Relief Act). This massive bill affects Planning your estate Planning your estate A general guide to estate planning Policies issued by: American General Life Insurance Company The United States Life Insurance Company in the City of New York What is estate planning? Estate Tax Overview. Emphasis on Generation Skipping Transfers Estate Tax Overview Emphasis on Generation Skipping Transfers 1 A Brief History - 1916 The Revenue Act of 1916 (39 Stat. 756) created a tax on the transfer of wealth from an estate to its beneficiaries, Charitable Giving and Retirement Assets Charitable Giving and Retirement Assets In this issue: Basics of IRAs Retirement Plan Basics Lifetime Taxation of Distributions from Retirement Accounts Estate Taxation of IRAs and Tax-Deferred Retirement ESTATE PLANNING & TAXATION& & TAXATION Deathbed Opportunities In decoupled states, there are some last-minute planning strategies that can significantly help heirs By Robert C. Pomeroy, of counsel, and Susan L. Abbott, associate, Minimum Distributions & Beneficiary Designations: Planning Opportunities 28 $ $ $ RETIREMENT PLANS The rules regarding distributions and designated beneficiaries are complex, but there are strategies that will help minimize income and estate taxes. Minimum Distributions & Beneficiary RETIREMENT PLANNING FOR THE SMALL BUSINESS RETIREMENT PLANNING FOR THE SMALL BUSINESS PI-1157595 v1 0950000-0102 II. INCOME AND TRANSFER TAX CONSIDERATIONS A. During Participant s Lifetime 1. Prior to Distribution Income tax on earnings on plan Alternative Retirement Financial Plans and Their Features RETIREMENT ACCOUNTS Gary R. Evans, 2006-2013, November 20, 2013. The various retirement investment accounts discussed in this document all offer the potential for healthy longterm returns with substantial, TAX RELIEF ACT UPDATED DECEMBER 29, 2010 2010 TAX RELIEF ACT UPDATED DECEMBER 29, 2010 TAX RELIEF, UNEMPLOYMENT INSURANCE RE-AUTHORIZATION, AND JOB CREATION ACT OF 2010 INTRODUCTION On December 17, 2010, President Obama signed the much-anticipated Report for Congress. Estate and Gift Taxes: Economic Issues. Updated January 31, 2003 Order Code RL30600 Report for Congress Received through the CRS Web Estate and Gift Taxes: Economic Issues Updated January 31, 2003 Jane G. Gravelle Senior Specialist in Economic Policy Government ENTITY CHOICE AND EFFECTIVE TAX RATES ENTITY CHOICE AND EFFECTIVE TAX RATES Prepared by Quantria Strategies, LLC for the National Federation of Independent Business and the S Corporation Association ENTITY CHOICE AND EFFECTIVE TAX RATES CONTENTS RETIREMENT ACCOUNTS. Alternative Retirement Financial Plans and Their Features RETIREMENT ACCOUNTS The various retirement investment accounts discussed in this document all offer the potential for healthy longterm returns with substantial tax advantages that will typically have Zero Estate Tax Strategy Zero Estate Tax Strategy AN PLANNING STRATEGY USING LIFE INSURANCE, A FOUNDATION, AND WEALTH REPLACEMENT TRUST The Prudential Insurance Company of America 0257697 0257697-00003-00 Ed. 07/2015 Exp. 01/20/2017 Personal Income Tax Bulletin 2008-1. IRAs PENNSYLVANIA DEPARTMENT OF REVENUE ISSUED: JANUARY 16, 2008 Section 1. Introduction. 1. FEDERAL TAX PERSPECTIVE. Personal Income Tax Bulletin 2008-1 IRAs When Congress enacted ERISA in 1974 to regulate Retirement Investing: Analyzing the Roth Conversion Option* Retirement Investing: Analyzing the Roth Conversion Option* Robert M. Dammon Tepper School of Bsiness Carnegie Mellon University 12/3/2009 * The author thanks Chester Spatt for valuable discussions. The The 3.8% Medicare Surtax on Investment Income Wealth Strategy Report The 3.8% Medicare Surtax on Investment Income OVERVIEW Beginning in 2013, certain investment income will be subject to an additional 3.8% surtax, enacted as part of the Health Care Estate Planning. And The Second To Die Program. Estate Planning And The Second To Die Program Estate Planning and the Second to Die Program from Indiana Farm Bureau Insurance A source of satisfaction for most married couples is Tax Subsidies for Health Insurance An Issue Brief Tax Subsidies for Health Insurance An Issue Brief Prepared by the Kaiser Family Foundation July 2008 Tax Subsidies for Health Insurance Most workers pay both federal and state taxes for wages paid to them Higher Education Tax Benefits: Brief Overview and Budgetary Effects Higher Education Tax Benefits: Brief Overview and Budgetary Effects Margot L. Crandall-Hollick Analyst in Public Finance Mark P. Keightley Analyst in Public Finance August 24, 2011 CRS Report for Congress Wealthiest Families Know: 2013 & Beyond What the Wealthiest Families Know: 2013 & Beyond Determine How Estate Planning Strategies and Life Insurance May Help You Turn Your Goals into a Wealth Legacy Whether you acquired it or inherited it, wealth Estate planning opportunities with Roth IRA conversions Estate planning opportunities with Roth IRA conversions Vanguard research March 2010 Executive summary. Beginning January 1, 2010, as part of the Tax Increase Prevention and Reconciliation Act of 2005, LIQUIDATING RETIREMENT ASSETS LIQUIDATING RETIREMENT ASSETS IN A TAX-EFFICIENT MANNER By William A. Raabe and Richard B. Toolson When you enter retirement, you retire from work, not from decision-making. Among the more important decisions CHAPTER 1 Introduction to Taxation CHAPTER 1 Introduction to Taxation CHAPTER HIGHLIGHTS A proper analysis of the United States tax system begins with an examination of the tax structure and types of taxes employed in the United States. estate and gift tax planning estate and gift tax planning On January 1, 2013, Congress enacted ATRA. This law created certainty and provides for planning opportunities to reduce tax cost of transferring your assets to your beneficiaries. business owner issues and depreciation deductions business owner issues and depreciation deductions Individuals who are owners of a business, whether as sole proprietors or through a partnership, limited liability company or S corporation, have specific Advanced Markets Combining Estate Planning Techniques A Powerful Strategy Life insurance can help meet many wealth transfer goals. The death benefit could cover estate taxes, for instance, avoiding liquidation of much of the estate to meet the estate tax bill. Even though a Family Business Succession Planning WILLIAM DELMAGE President 22 Hemingway Drive East Providence, RI 02915 (401) 435-4239 103 wmd@wdandassociates.com Family Business Succession Planning Page 2 of 9 Transferring TAX PLANNING FOR CANADIAN FARMERS April 2014 CONTENTS Annual tax planning issues Income tax deferral Incorporating your farming business Long-term planning issues Taxation of capital gains Maximizing your capital gains exemption claims THE ESTATE TAX: MYTHS AND REALITIES 820 First Street NE, Suite 510 Washington, DC 20002 Tel: 202-408-1080 Fax: 202-408-1056 center@cbpp.org Revised October 11, 2007 THE ESTATE TAX: MYTHS AND REALITIES The estate tax has been Business Uses of Life Insurance Select Portfolio Management, Inc. David M. Jones, MBA Wealth Advisor 120 Vantis, Suite 430 Aliso Viejo, CA 92656 949-975-7900 dave.jones@selectportfolio.com Business Uses of Life GUIDE TO OHIO S SCHOOL DISTRICT INCOME TAX Prepared by THE OHIO DEPARTMENT OF TAXATION JUNE 2013 TABLE OF CONTENTS Gui det oohi o s SchoolDi st r i ct I ncometax Updat edjune2013 GUIDE TO OHIO S SCHOOL DISTRICT INCOME TAX Prepared by THE OHIO DEPARTMENT OF TAXATION JUNE 2013 TABLE OF CONTENTS General Filing On Charitable Remainder Trust: A Valuable Financial Tool for the Agricultural Family The Charitable Remainder Trust: A Valuable Financial Tool for the Agricultural Family An Educational Resource From Solid Rock Wealth Management By Christopher Nolt, LUTCF Introduction A charitable remainder A Sole Proprietor Insured Buy-Sell Plan A Sole Proprietor Insured Buy-Sell Plan At a sole proprietor s death, the business is dissolved and all business assets and liabilities become part of the sole proprietor's personal estate. Have you evaluated Advanced Wealth Transfer Strategies Family Limited Partnerships (FLPS) Advanced Wealth Transfer Strategies The American Taxpayer Relief Act of 2012 established a permanent gift and estate tax exemption of $5 million, which is adjusted annually Wealth Transfer Planning Considerations for 2011 and 2012 THE CENTER FOR WEALTH PLANNING Wealth Transfer Planning Considerations for 2011 and 2012 March 2011 The Center for Wealth Planning is part of Credit Suisse s Private Banking USA and does not provide tax Diabetes Partnership of Cleveland s Planned Giving Guide Diabetes Partnership of Cleveland s Planned Giving Guide Diabetes Partnership of Cleveland s Planned Giving Guide TABLE OF CONTENTS Life Insurance Gifts Page 2 Charitable Gifts of IRAs.Page 3 Charitable Transferring Business Assets Transferring Business Assets In the future, you may either want to transfer your business to heirs or sell your business to employees, competitors, or others. Planning for transfer of a family business THE FEDERAL ESTATE AND GIFT TAX: Description, Profile of Taxpayers, and Economic Consequences. David Joulfaian* U.S. Department of the Treasury THE FEDERAL ESTATE AND GIFT TAX: Description, Profile of Taxpayers, and Economic Consequences by David Joulfaian* U.S. Department of the Treasury OTA Paper 80 December 1998 OTA Papers is an occasional Advanced Markets Sales Strategy Estate Liquidity Through Section 6166 and Life Insurance LIQUIDITY FOR THE BUSINESS OWNER S ESTATE For the owner of a closely held business, estate liquidity can be a matter of particular importance. If that business represents a large portion of the estate, Farmland Preservation An Estate Planning Tool Fact Sheet 779 Farmland Preservation An Estate Planning Tool If current trends are left unchecked, Maryland could lose 500,000 acres of farmland, forests, and other open spaces to development over the Sales Strategy Sale to a Grantor Trust (SAGT) Estate planners have been using the Irrevocable Life Insurance Trust (ILIT) for many years, to increase wealth and liquidity outside the taxable estate. 1 However, transfers to ILITs One effective technique Taking Advantage of the New Gift and Estate Tax Law product resource Taking Advantage of the New Gift and Estate Tax Law summary tra 2010 in brief Congressional debate about whether to extend tax cuts put into place during the Bush administration came to 2011 Tax And Financial Planning Tables 2011 Tax and Financial PLanning Tables Investment Planning 2011 Tax And Financial Planning Tables Tax planning is an important component for your overall financial plan. 2011 Tax and Financial PLanning The Impact of Proposed Federal Tax Reform on Farm Businesses The Impact of Proposed Federal Tax Reform on Farm Businesses James M. Williamson and Ron Durst United States Department of Agriculture Economic Research Service Washington, DC Poster prepared for presentation Estate planning strategies using life insurance in a trust Options for handling distributions, rollovers and conversions Estate planning strategies using life insurance in a trust Options for handling distributions, rollovers and conversions Life s better when we re connected Table of contents Find your questions review THE TAX-FREE SAVINGS ACCOUNT THE TAX-FREE SAVINGS ACCOUNT The 2008 federal budget introduced the Tax-Free Savings Account (TFSA) for individuals beginning in 2009. The TFSA allows you to set money aside without paying tax on the income INCORPORATING YOUR BUSINESS INCORPORATING YOUR BUSINESS REFERENCE GUIDE If you are carrying on a business through a sole proprietorship or a partnership, it may at some point be appropriate to use a corporation to carry on the business. VCR PRODUCED BY THE NATIONAL VENTURE CAPITAL ASSOCIATION AND ERNST & YOUNG LLP VENTURE CAPITAL REVIEW ISSUE 17 SPRING 2006 VCR VENTURE CAPITAL REVIEW ISSUE 17 SPRING 2006 PRODUCED BY THE NATIONAL VENTURE CAPITAL ASSOCIATION AND ERNST & YOUNG LLP Using Derivatives to Transfer Carried Interests in Private Equity, LBO and Venture Recapitalization: Estate Freeze Techniques Inspire Capital Management LLC Michael P. McKee, CFP President 1681 Maitland Avenue Maitland, FL 32751 407-331-0076 mckee@inspirecapital.com Recapitalization: Estate Freeze Techniques Comprehensive Split Dollar Advanced Markets Client Guide Comprehensive Split Dollar Crafting a plan to meet your needs. John Hancock Life Insurance Company (U.S.A.) (John Hancock) John Hancock Life Insurance Company New York (John Estate Planning Considerations for Ohio Families Estate Planning Considerations for Ohio Families Section 9 Ohio and Federal Estate Settlement Costs by Jim Polson, David Miller, and Russell Cunningham* Competent estate planning requires consideration National Small Business Network National Small Business Network WRITTEN STATEMENT FOR THE RECORD US SENATE COMMITTEE ON FINANCE U.S. HOUSE OF REPRESENTATIVES COMMITTEE ON WAYS AND MEANS JOINT HEARING ON TAX REFORM AND THE TAX TREATMENT Life Insurance Companies and Products Part B. Life Insurance Companies and Products The current Federal income tax treatment of life insurance companies and their products al.lows investors in such products to obtain a substantially higher CHAPTER 9 BUSINESS INSURANCE CHAPTER 9 BUSINESS INSURANCE Just as individuals need insurance for protection so do businesses. Businesses need insurance to cover potential property losses and liability losses. Life insurance also Charitable Gift Planning Charitable Gift Planning A Practical Guide for the Estate Planner Second Edition Thomas J* Ray, Jr. Section of Real Property, _,..,... r 7 Defending Liberty Probate and Trust Law Pursuing justice CONTENTS National Capital Gift Planning Council National Capital Gift Planning Council Nuts and Bolts Session September 16, 2015 Craig Stevens, Aronson LLC 1 Income Tax and Tax Brackets Calculating Income Tax Gross Income Everything 2016 Tax Planning & Reference Guide 2016 Tax Planning & Reference Guide The 2016 Tax Planning & Reference Guide is designed as a reference and is not intended to function as tax advice. Please consult your professional accounting advisor
http://docplayer.net/1649386-Congress-of-the-united-states-congressional-budget-office.html
CC-MAIN-2017-34
en
refinedweb
Double Nickels on the Dime: An Oral History of the Foundation Grants Program This is a long-ish post summarizing my perspective on the implementation and effects (so far) of CIHR’s Foundation Grant program. I know this program is defensible from some points of view. I’m not trying to speak for anyone but myself. My main points are these: - A grant consolidation program is a good idea under some conditions. But Foundation is not a consolidation program, and we can’t afford it at any scale under the current circumstances. The opportunity costs are too high. - CIHR should stop Foundation, wind down current grants, and re-allocate as much funding as possible to Project Grants, with a target of 20% success rates as the number one priority of the agency. Project provides the broadest support to the most people. It is the best way to invest in the future of Canadian health research. It matters more — and is a better way to fulfill CIHR’s mandate — than Foundation or most strategic programs. If the Project Grants program is not robust and healthy, CIHR is a failure. - If 20% success rates can be maintained, a true consolidation program should be considered, with strict controls on eligibility and budget. Alain the farmer has 10 pigs and 4 buckets of slop per day to feed them. [Bear with me.] He notices two of the pigs — his two favorite pigs, in fact, pigs he knows are of the highest quality — tend to get a bit more slop than the others. They are good at competing for a spot at the trough, and they are good eaters! He’s had them the longest, and they have more meat on them than the others. So he thinks, “If I give those two even more slop, I’ll have the two most excellent pigs in the county, and I’ll make a killing on market day.” So he splits his single pig pen into two. He puts 8 of the pigs in one pen and feeds them 2 slop buckets per day, and puts the 2 excellent pigs in another and gives them each their own bucket. The 2 super-pigs grow faster, but not all that much faster. Often they leave uneaten slop on the floor of their pen. Even a pig can only eat so much, and they can only grow so fast. Meanwhile, the 8 other pigs stop growing. Most of them lose weight. Competition at the trough gets ugly. A few get sick. One of the younger pigs dies. On market day, he does indeed make way more than average on his two super-pigs, who are nicely fattened and beautiful to behold. No one can deny their porcine excellence, as determined by stringent, objective, expert pig review. One of them even wins a blue ribbon and gets its picture in the newspaper with the Minister of Agriculture! But his 8 other pigs (sorry, now 7) are worth far less than average. Overall, it’s a large net loss. The farmer could learn something about zero-sum resource allocation, diminishing returns, and opportunity costs. Or, he could pretend that winning a ribbon was the goal all along. A lot is happening at CIHR, now under new (albeit interim) leadership. Most notably, they have returned to face-to-face panels for peer review. I’m not saying it was easy, but this was, politically speaking, low-hanging fruit. We are going back to a peer review system that pretty much worked. But success rates will reach new lows, and fewer labs will be funded than before the Reforms. Peer review in the Reforms was bad. Some of it was surreal. But the bungled implementation of virtual peer review was not what crippled the system. It was the creation of a funding caste system in which large, long-term Foundation grants are awarded primarily based on seniority and reputation. These grants both enlarge annual budgets and extend funding durations for a small subset of scientists — a subset from which early and mid-career investigators are largely excluded by design. Reallocating a large proportion of open funds to a small group in Foundation has been paid for so far by cutting the number of normal operating grant competitions in half for 3 years. Because time did not stand still while this happened, it has had predictable effects on stability and application pressure. Each year, many more operating grants are ending than CIHR is awarding, something that was never true before the Reforms. Because the vast majority of PIs in the system have one grant, this means labs are being defunded. People are losing their jobs. Past investment is squandered. As we return (finally, maybe?) to two competitions per year, it will necessitate cutting Project success rates in half. Six of one, half dozen of the other — the effect is the same. The opportunity costs are massive, but have been ignored. CIHR initially committed 45% of open program funding to Foundation. This was clearly not something they could afford to do while keeping productive labs open and letting new investigators into the system. It was, in fact, an obscene giveaway that was duly taken advantage of with large — and largely approved — budget requests above an already dubiously generous “baseline” calculation. How the 45% allocation and individual budgeting procedures were approved by those charged with oversight of CIHR’s operations is an incomprehensible failure of due diligence that as far as I know remains unquestioned and unexplained. The various defenses of this at the time were laughable, in a not-funny sort of way. For example, a CIHR executive at the time told me on the phone that they expected ECIs and MCIs to outperform senior scientists in Project, because after all, Foundation was protecting us from having to compete with the scientific crème de la crème as they floated up to unprecedented heights in the funding distribution. At the same time, ironically, CIHR was releasing data showing that, in fact, ECIs and MCIs had generally competed just fine in the OOGP. As it happened (believe it or not), ECIs and MCIs did not outperform more experienced applicants in the chaotic and confusing Project competitions. Unlike in any competition before, quotas and supplemental funding were needed to ensure that ECIs received something approaching — but still substantially less than — the proportion of open program funding they had received in the OOGP. MCIs received no such relief nor sympathy. After all, what more natural time to cull the herd — shouldn’t you have to prove yourself? We’re a meritocracy! Never mind that “proving yourself” under today’s crisis conditions bears zero resemblance to proving yourself under the stable formats and 25–30% success rates of the 2000s. It’s always been hard. So to summarize the Reforms: CIHR took a funding program that was roughly equitable by career stage and split it into two programs that both disfavored early/mid-career applicants, one of them cartoonishly so. All the while, they claimed that life would be good in the Projects, because we were safely protected there from the apex scientists who were now in Foundation, along with half the money. Alternative take: The ladder is being pulled up on two generations of Canadian scientists. In the OOGP, about 140 people had the equivalent of 3 or more concurrent grants in annual funding. CIHR was now going to give out about 140 grants larger than that every year while funding the same number of labs. It was truly magical thinking. In fact, as in all zero-sum wealth concentration phenomena, the creation of local luxury and stability would be paid for with global scarcity and instability. Last year, CIHR finally entered the CHiPs-style 40 car pile-up on the Pacific Coast Highway phase of the Reforms. (This phase was easy to foresee, though it was left off the official transition Gantt charts.) Around this time, it became conventional wisdom that the Foundation program had some “good ideas behind it” but was “unsustainable at its current size.” So: how much could CIHR afford to allocate to a <ahem> “pilot” program like Foundation? The answer 4 years ago was probably something like 10%, assuming they were able to keep budgets under control, which they can’t seem to do. The answer today is unequivocally 0%. This is an agency overextended in every sense and still mired in operational issues. The damage done by the Reforms cannot be addressed by returning to face-to-face panels and fiddling around the edges of budget allocations. Why bring all this up? Why relitigate the Reforms when we have to look ahead? Because it is clear that there is still no will to even temporarily suspend the Foundation program, let alone do the needful and cancel it. The fact that the people charged with getting CIHR back on the right track — and given enormous authority to do so — are still thinking that a program like Foundation has any place at all in what we are all hoping will be CIHR’s recovery should be unsettling. It suggests that they share a core belief of previous leadership: that there is a subset of senior PIs who should be protected at any cost from the current funding climate. Do you “like the idea” of a “Foundation-type program?” I do. Especially when we phrase it in this exquisitely hedged and vague manner. I think we all like the idea, in isolation, in principle. So what? I “like the idea” of a lot of things we can’t afford or that have unintended consequences where harms outweigh benefits. Again and again, the appeal and reasonableness of the “idea” of a Foundation-like program has been weaponized against us and used to justify this specific program at this specific time, which is poorly-conceived and is doing enormous damage. The recent CIHR road show is indistinguishable from the Reforms marketing blitz on this topic. Don’t you think [Famous Scientist] deserves it? Don’t you like excellence? Well, Foundation is about excellence, chum. Case closed. If success rates stabilized in Project at an acceptable level, I would welcome a “Foundation-like” program that would give people who have a sustained track record of high productivity some extra freedom from the grant treadmill. My idea in a nutshell: if you have 3 or more concurrent grants that have been successfully renewed, you can consolidate them. This can only happen when Project success rates are above 20%. And, indeed, that is exactly what should be the primary goal for CIHR: get Project Grant success rates up to and sustained at 20% or higher so that we can have a functional, healthy, future-oriented funding system. Here is how: - Stop the Foundation program. Start winding down current Foundation grants now. Revisit consolidation when Project success rates are 20%. - Put 70% of the CIHR budget into Project. This will require cutting strategic programs and non-operating grant spending significantly. - Do everything possible to #SupporttheReport. Even with the above, 20% success rates in Project will require the full Naylor ask. I know influential people want these grants. Who wouldn’t? I know there is significant support for continuing Foundation. But I would hope there is even more support for the research community all being in this together, for not pulling up the ladder, for not eating our young, and for building research capacity and sustaining research careers. These, by the way, are things that are required by the CIHR Act. They cannot be accomplished with 10% success rates in Project. Prestige-based glamour grants to a small tier of scientists are, funnily enough, not mentioned in the CIHR Act. The CIHR Reforms were an extraordinary failure that has done extraordinary damage. Damage that is now chronic, no matter how you review the grants. It is always tempting to deploy half measures, to split the difference on hard choices. That’s not good enough. Extraordinary measures are needed to restore CIHR. Stopping Foundation isn’t even the extraordinary step — it’s the easiest one on the table. If we can’t get that done, I have little hope for CIHR as an agency that can support a bright future for Canadian health research.
https://medium.com/@MHendr1cks/double-nickels-on-the-dime-an-oral-history-of-the-foundation-grants-program-626549adb2da
CC-MAIN-2017-34
en
refinedweb
Before using the WiringPi GPIO library, you need to include its header file in your programs: #include <wiringPi.h> You may also need to add -I/usr/local/include -L/usr/local/lib -lwiringPi to the compile line of your program depending on the environment you are using. The important one is -lwiringPi You may also need additional #include lines, depending on the modules you are using. Reference/API Pages These two above are the most important features of wiringPi. Below are additional functions and libraries that comprise the main wiringPi release. - Raspberry Pi specific functions - Timing functions - Program priority, timing and threads - Serial library - SPI library - I2C library - Shift library - Software PWM library - Software tone library
http://wiringpi.com/reference/
CC-MAIN-2017-34
en
refinedweb
Java. Scanner Class Keyboard Class. User Interaction . So far when we created a program there was no human interaction Our programs just simply showed one output In order for users to interact with our programs we need to use external classes. External. Java Scanner Class Keyboard Class Class Creates an instance of the class Constructor Name import java.util.Scanner; class test { public static void main (String args[]){ //Create an instance of the Scanner Scanner s = new Scanner(System.in); System.out.print("Enter your name : "); //Since the name is a String the String //has to be used String name = s.next(); System.out.println("How old are you ? "); //The age can be stored in a long long age = s.nextLong(); System.out.println("You are "+name+" and you are "+age+" years old."); } } class test { public static void main (String args[]){ System.out.print("Enter your name : "); String name = Keyboard.readString(); System.out.println("How old are you ? "); long age = Keyboard.readLong(); System.out.println("You are "+name+" and you are "+age+" years old."); } }
http://www.slideserve.com/elda/java
CC-MAIN-2017-13
en
refinedweb
). What is Sense HAT? The Sense HAT is an add-on board for Raspberry Pi, made espacially for the Astro Pi mission to go in Space. The Sense HAT has an 8x8 RGB LED matrix, a five-button joystick and includes the following sensors: - Gyroscope - Accelerometer - Magnetometer - Barometric pressure sensor - Humidity sensors The Sense HAT - image from the official Raspberry Pi website Basically, the Sense HAT is a board that has integrated sensors and a joystick, with an additional 8x8 LED matrix. It is a very fun board to play with and I recommend it if you don’t want (or know, for that matter) to get tangled in wires and in calculating resistances. Assembling the two boards In order to get started, firstly we need to assemble the HAT on the Pi. Image from the official Raspberry Pi website After carefully assembling the two boards (watching not to bend the pins of either boards) and putting the screws and the hexagonal stand-offs in place, you are ready to go. If successful, the end result should look similar to this: You can follow the instructions here to install Windows IoT Core on your Raspberry Pi. Assuming you assembled the board and installed Windows on your Raspberry, we are ready to get started. Creating a Windows Universal App that will run on the Raspberry At this moment there is no official .NET library to work with the Sense HAT. There is an official Python library and you can get started with it here. While there is no official .NET library, there is a library developed by Mattias Larsson which provides exactly the functionality we need. You can find the source code on GitHub and the package on NuGet. If you are a complete beginner with Windows IoT Core and the Raspberry Pi3, I strongly recommend you go through this step-by-step tutorial on how to get started with the installing, configuring and writing your first app on the Raspberry Pi3 with Windows IoT Core: - Step 1 of 4: Get the Tools - Step 2 of 4: Set up your device - Step 3 of 4: Set up Visual Studio - Step 4 of 4: Write your first app After this, you can find multiple examples with various hardware components and software services, from the Windows IoT developer website or from Hackster. Here you can find a very brief introduction to creating UWP apps and a Hello, World example. Creating the app architecture At this point, we can choose between two approaches: - have the entire logic (including the communication with the Sense HAT and the cloud) in the UWP application - create separate (and reusable) class library projects that can be referenced from the UWP application For this project, we will go with the second approach (and you can already review the source code on GitHub). In this case, I went with the following naming conventions for the solution and the projects: - the name of the solution: RPiSenseHatTelemetry - the name of various projects: RPiSenseHatTelemetry.SpecificProjectFunctionality To get started, open Visual Studio and create a new Universal Windows app for Windows 10. There is nothing specific for Raspberry Pi or Sense HAT yet, just a typical UWP app. (In my case, the naming was: RPiSenseHatTelemetry.Uwp). The telemetry collected and analyzed In this very simple example, we will only get the temperature telemetry, with the Celsius value and the timestamp of the measurement. Since we plan on taking more telemetry than temperature (and use these classes in other projects), we will create a project called RPiSenseHatTelemetry.Common that we will reference later. public class TemperatureTelemetry { public string Time { get; set; } public double Temperature { get; set; } } The Sense HAT Communication Project Right now, we need to have specific functionality for communicating with the Sense HAT. In a new class library project (in this case called: RPiSenseHatTelemetry.SenseHatCommunication), add the NuGet package for the Sense HAT: Install-Package Emmellsoft.IoT.RPi.SenseHat. This will add a bunch of files and folders to your project (mainly for demo and testing purposes) that we will not use, all we need is the reference to the Emmellsoft.IoT.RPi.SenseHat dll. The main thing that we will use from this library is exposed through the ISenseHat interface, which you can find in the library’s GitHub repository. The best place to learn how to use this library is the demo section of the repository. There you can find how to create a compass, how to make disco lights from the 8x8 LED matrix or write the temperature on the LED matrix. We will create a new class that will only expose funtionality related with the temperature telemetry, but you can expand this class to do all the things you want. using System; using System.Threading; using System.Threading.Tasks; using Emmellsoft.IoT.Rpi.SenseHat; using RPiSenseHatTelemetry.Common; public class SenseHat : IDisposable { private ISenseHat _senseHat { get; set; } public async Task Activate() { _senseHat = await SenseHatFactory.GetSenseHat().ConfigureAwait(false); _senseHat.Display.Clear(); _senseHat.Display.Update(); } public TemperatureTelemetry GetTemperature() { while (true) { _senseHat.Sensors.HumiditySensor.Update(); if (_senseHat.Sensors.Temperature.HasValue) { return new TemperatureTelemetry() { Time = DateTime.UtcNow.AddHours(3).ToString("yyyy-MM-dd HH:mm:ss.fff"), Temperature = Math.Round(_senseHat.Sensors.Temperature.Value, 2) }; } else new ManualResetEventSlim(false).Wait(TimeSpan.FromSeconds(0.5)); } } public void Dispose() { _senseHat.Dispose(); } } We will obtain the actual object that does the communication with the board through SenseHatFactory and we will use this object throughout our class. In the Activate method, we get the reference to the ISenseHat object through the factory, then we turn off the LED matrix (since we don’t want to have it on at all times when it’s running). We also have a Dispose method, since the _senseHat is disposable. The GetTemperature method is pretty straightforward: we check if the temperature sensor has value. If it does, we create a new TemperatureTelemetry object with the corresponding timestamp and temperature that we return. If the sensor doesn’t have a value, we wait half a second and try again. This is the entire code that deals with the Sense HAT. Displaying the temperature in the UWP app In the SenseHat property in order to get the temperature once every three seconds, that we then display. public sealed partial class MainPage : Page { private SenseHat _senseHat { get; set; } public MainPage() { this.InitializeComponent(); _senseHat = new SenseHat(); this.ActivateSenseHat(); this.Loaded += (sender, e) => { DispatcherTimer timer = new DispatcherTimer(); timer.Tick += async (x, y) => { var temperatureTelemetry = _senseHat.GetTemperature(); this.temperatureTextBlock.Text = "Temperature: " + temperatureTelemetry.Temperature.ToString() + "at " + temperatureTelemetry.Time; }; timer.Interval = TimeSpan.FromSeconds(3); timer.Start(); }; } private async void ActivateSenseHat() { await _senseHat.Activate(); } } After the page has loaded, we use a DispatcherTimer object and every 3 seconds we get a new value from the sensors and display it in the UWP app. Later, in the same place as we displayed in the app, we will send the data to IoT Hub and we will write the value on the HAT LED matrix. Communicating with the Cloud - Azure IoT Hub First of all, we need to create an Azure IoT Hub. To do this, simply follow the instructions here. At this point, you can start using the Device Explorer - a small application that you can use to see the IoT Hubs from Azure and their connected devices. From here, you can add devices that are authorized to send data to the IoT Hub and see the messages that arrive on the IoT Hub in real-time, while also having the ability to send messages to the device. In this case, I created a device called RPi.SenseHat that sends the temperature telemetry to the hub. To communicate with the cloud, I created a separate project - RPiSenseHatTelemetry.CloudCommunication, where I created a class called IoTHubCommunication that deals with sending a receiving messages from the cloud. using System; using System.Text; using System.Threading.Tasks; using Microsoft.Azure.Devices.Client; public class IoTHubConnection : IDisposable { private DeviceClient _deviceClient { get; set; } public IoTHubConnection() { _deviceClient = DeviceClient.CreateFromConnectionString(GetConnectionString(), TransportType.Amqp); } public async Task SendEventAsync(string payload) { await _deviceClient.SendEventAsync(new Message(Encoding.ASCII.GetBytes(payload))); } public async Task<string> ReceiveEventAsync() { while (true) { var receivedMessage = await _deviceClient.ReceiveAsync(); if (receivedMessage != null) { var messageData = Encoding.ASCII.GetString(receivedMessage.GetBytes()); await _deviceClient.CompleteAsync(receivedMessage); return messageData; } await Task.Delay(TimeSpan.FromSeconds(1)); } } private string GetConnectionString() { return "your-connection-string"; } public void Dispose() { _deviceClient.Dispose(); } } Basically, the methods we will use are SendEventAsync, which sends an event asynchronously and ReceiveEventAsync which receives and event asynchronously. You can also use a VS extension to add a class that makes the communication with an IoT Hub you select. Here you can find a tutorial on how to start publising events from a UWP app with Azure IoT Hub. Adding cloud communication to the UWP app To add the cloud communication to the UWP app, simply create a new instance of our newly created class, IoTHubConnection and use the SendEventAsync method in the timer.Tick += async (x, y) => { var temperatureTelemetry = _senseHat.GetTemperature(); this.temperatureTextBlock.Text = "Temperature: " + temperatureTelemetry.Temperature.ToString() + "at " + temperatureTelemetry.Time; await _iotHubConnection.SendEventAsync(JsonConvert.SerializeObject(temperatureTelemetry)); }; Sending the data from IoT Hub to Stream Analytics You can get started on what is Stream Analytics here. This is an introduction on building IoT solutions with Stream Analytics. Right now we have data going from the Raspberry Pi to the IoT Hub, but nothing more happens with that data. At first, we will send the data to an Azure SQL database. This is a step-by-step tutorial on how to create an Azure SQL database. After creating the Stream Analytics service in Azure, we need to add input data, in this case from the IoT Hub: We also need to configure an output, an Azure SQL database: Notice how we also need to configure a table for our data to be stored. This table should have the same structure as our TemperatureTelemetry objects that we send. This is the script I used for creating the table for the temperature telemetry: CREATE TABLE [dbo].[TemperatureTelemetry] ( [Time] DATETIME NULL, [Temperature] FLOAT (53) NOT NULL ); At this point, we need a Stream Analytics query that takes the data from IoT Hub and puts it into our SQL database: SELECT Time, Temperature INTO [sql-database-output] FROM [rpi-sensehat-iot-hub-input] The names sql-database-outputand rpi-sensehat-iot-hub-inputare the names I gave the SQL database as output and the IoT Hub as input, respectively. Testing and running the application Right now, we can run the application remotely from Visual Studio to our Raspberry Pi by finding the IP in the IoT Dashboard application: We can also connect from our browser through the device portal: We can also create a remote desktop connection to our Raspberry: From Visual Studio, we deploy our application to a remote device: After the application is successfully deployed, we will see data coming into the IoT Hub through the Device Explorer, and also in the SQL database we created as output for the Stream Analytics job. And here is the data that arrives in the SQL database: The query above is made through PowerShell, through a custom script I made for querying and making commands to a SQL database without the SQL Server Management Studio. You can find the script here with demo usage. Next steps In a following article, we will configure an additional output for the Stream Analytics job, a Service Bus that will allow us to use the messages real-time in a web (or even mobile) application, with custom alerts. We will also create a command from the IoT Hub based on an alert from the Stream Analytics job that will turn on or off the LED matrix (and will even control an Arduino connected through USB). We will also write the current temperature on the LED matrix. Conclusion We created a very simple UWP app that takes data from the Sense HAT sensors, displays it on the screen, then sends it through Azure IoT Hub, then to a Stream Analytics job that outputs it into an Azure SQL database. Share this post Google+ StumbleUpon
https://radu-matei.github.io/blog/rpi-sensehat-telemetry/
CC-MAIN-2017-13
en
refinedweb
Hi, i need some help with a program that calculates the numeric value of a name. The value of a name is determined by summing up the values of the letters of the name where 'a' is 1, 'b' is 2, 'c' is 3 etc., up to 'z' being 26. For example, the name "Zelle" would have the value 26+5+12+12+5=60. Write a program that calculates the numeric value of a complete name such as "John Marvin Zelle". I can get it to work for one name but for a complete name i am lost. Here is my code so far: import string import math def main(): word=raw_input('Enter your name:') sum=0 word=string.upper(word) s=string.split(word) print s for l in word: sum=sum+ord(l)-64 print'The numeric value of your name:',sum main() any help would be appreciated
https://www.daniweb.com/programming/software-development/threads/382082/python-program-help
CC-MAIN-2017-13
en
refinedweb
#include <polygon.h> #include <polygon.h> Inheritance diagram for aePolygon: Definition at line 35 of file polygon.h. Create a polygon with no name. Naturally you won't be able to find this object from the engine by name. Create a polygon given a name. [virtual] Draw the polygon. This is called by aeEngine::Render(). You shouldn't need to call this yourself. Implements aeObject.
http://aeengine.sourceforge.net/documentation/online/pubapi/classaePolygon.html
CC-MAIN-2017-13
en
refinedweb
Hungarian Notation With C++ I make it a point to in fot Hungarian Notation. Just makes reading the code a little easier. Arethier any norms for VB. I recently started on VB.Net, so I want to get into a habit right away. Has anybody set some rules for themselves? Gunjan Sinha Thursday, June 12, 2003 Hungarian notation is falling off in popularity among the .NET crowd. Somewhere on MSDN, MS has published their opinion of good naming conventions. Mark Hoffman Thursday, June 12, 2003 i used to code with type notation untill i started programming those loosed typed languages like perl, php etc. for me, type notation in those languages is a limitation. abel Thursday, June 12, 2003 It's a mix of mainly Pascal (LikeThis) and some Camel (likeThis). You can find it under "Design Guidelines for Class Library Developers" in the MSDN library. Personally, I prefer it to Hungarian. I also tend to use the universal type names of the FCL in C# (Int32 rather than int), but that's another discussion. Pietro Thursday, June 12, 2003 Personally I'm still a big fan of Hungarian, it's ESPECIALLY important in untyped languages, and we even use it in SQL code which I've found very, very valuable. Joel Spolsky Thursday, June 12, 2003 I was a huge Hungarian advocate, but something about .Net just makes it seem like a waste of time. Actually, I just realized that .Net *does* follow some of the later hungarian recommendations - over the last few years most standards I read indicated that class variables didn't need a prefix (since adding a "c" doesn't really do anything). Well, since everything in .Net is a class... [grin] However, I *do* prefix controls on webforms, since it makes them easier to find in intellisense. Philo Philo Thursday, June 12, 2003 > Personally I'm still a big fan of Hungarian, it's ESPECIALLY > important in untyped languages, and we even use it in > SQL code which I've found very, very valuable. I used to use it all the time, especially when I was using VBScript and Javascript just to keep track of what was supposed to be what. But in a langauge like C# I feel it makes less sense with namespacing and tighter variable scope. Of course, you could just write code that was one long procedure and it would get very confusing, but if you're doing it right you shouldn't really need Hungarian. Plus as a personal preference I think the code looks less "ugly" than with Hungarian. Although I do still use it in forms programming, for example a "lbl" prefix for a label. "... if you're doing it right you shouldn't really need Hungarian..." Unfortunately, if you're rustling someone else's spaghetti code, it can be very handy to be able to *quickly* see that: dwCustNum = dsCustomers.GetI("custno"); ... is an error. As long as there are amateur hacks, there will be Hungarian to help bring order to chaos... Spaghetti Rustler Thursday, June 12, 2003 I personnally follow the umain methodology. Swahili Dilio Thursday, June 12, 2003 But if you can teach them to use hungarian, can't you just teach them not to check in code they haven't compiled? BTW, another reason hungarian is less necessary in .Net - no implicit type conversions. string var1; int var2; [lots of code] var1=var2; //hard to find bug in VB, throws an exception // upon compiling in C# Philo I have been a hardcore Hungarian Apologist for many years, but lately my faith is wavering. I am now test-driving one word names, but I still find the m_ and g_ "namespace" prefixes useful. I also like the 'p' pointer prefixes, especially when dealing with pointers to pointers ('pp'). I am giving up Hungarian mostly because of the inconsistent naming of strings, pointers to strings, and arrays of chars. Which is really correct? In most cases, knowing the actual underlying type of the "string" is unimportant, but when it is then Hungarian's supposed advantage of "visual type checking" breaks down when you code: pszUserName = archUserName; char* pUserName = NULL; char* pszUserName = NULL; char* szUserName = NULL; char achUserName[USERNAME_LENGTH]; char rgchUserName[USERNAME_LENGTH]; char sUserName[USERNAME_LENGTH]; char szUserName[USERNAME_LENGTH]; char pUserName[USERNAME_LENGTH]; char pszUserName[USERNAME_LENGTH]; runtime Thursday, June 12, 2003 oops, I also forgot: TCHAR* pUserName = NULL; TCHAR* pchUserName = NULL; TCHAR* pszUserName = NULL; TCHAR* szUserName = NULL; TCHAR* pUserName = NULL; TCHAR* ptchUserName = NULL; TCHAR* ptszUserName = NULL; TCHAR* tszUserName = NULL; WCHAR* pUserName = NULL; WCHAR* pchUserName = NULL; WCHAR* pszUserName = NULL; WCHAR* szUserName = NULL; WCHAR* pUserName = NULL; WCHAR* pwchUserName = NULL; WCHAR* pwszUserName = NULL; WCHAR* wszUserName = NULL; Referring to the above example: dwCustNum = dsCustomers.GetI("custno"); How do you know that dwCustNum is of type DWORD. dw prefix? Are you sure? That prefix means nothing to the compiler. In a typeless language you can put anything to a variable regadless the prefix. Personally I hate hungarian notation, it pollutes names, making the code harder to read. And as to the spaghetti code, don't you think that you are applying wrong methods to solve the problem, isn't it better in the long run to educate people instead of having a policy to use this notation, which I am sure will fall apart during the maintenance period of the product life time. Passater Thursday, June 12, 2003 I use p for pointers and that's about it. I could see m for members as well. But microsoft goes totally overboard. The code looks nasty. I don't understand the whole point of having the types in your variable name. Isn't that what the compiler is for? (at least in a statically type-checked language which almost all MS code is in) Having a decent code browse feature in an editor or those tooltips lets you know the type. I've seen people use hungarian on functions to indicate the return value, i.e. void vFunction(); int iFunction(); float fFunction(); int* piFunction(); well why not int ifipiFunction( float f, int i, int* pi ) since the arguments are part of the type for a function. It gets ridiculous. Andy Thursday, June 12, 2003 > don't understand the whole point of having the types in > your variable name. Isn't that what the compiler is for? As I pointed out the last time this came up in JOS, there are two completely contradictory philosophical approaches to Hungarian naming conventions. The one I call "the sensible philosophy" is the one actually espoused by Simonyi in his original article: Hungarian prefixes concisely describe semantics and explicitly do not describe storage. Simonyi is very clear on this point. The one I call "the pointless philosophy" is the one espoused by Petzold in "Programming Windows": Hungarian prefixes connote the storage type and do not describe semantics at all. Most arguments about Hungarian are, at their root, based on this fundamental dichotomy. As you correctly note, the pointless philosophy is, in fact, pointless. The compiler does that for you, and all the hungarian does is make the code redundant and hard to maintain. However, the sensible philosophy is very valuable when writing low level C code. Case in point: one day about seven years ago I rewrote the entire VBScript string runtime library -- which had to work on European, Far East and Bi-Di Win16/Win95/WinNT systems -- so that all the Hungarian prefixes correctly described the semantics of every variable. If a variable was a maximum count of characters then it was cchMax. If it was a pointer to a string of not-null-terminated unicode characters then it was a pwch. Etc. By simply renaming all the variables correctly I found SO MANY BUGS. Every place that a cb was assigned to a cch, I knew that there was a Unicode or DBCS bug right there. Hungarian greatly improved the code quality, particularly on DBCS machines. I still use semantic Hungarian prefixes in my C# code, but to a much smaller extent because most of the problems that Hungarian solves were designed out of the language in the first place. I try to never use "storage" Hungarian prefixes in any code, C# or C++. Eric Eric Lippert Thursday, June 12, 2003 That's interesting, I didn't realize that the original intent was for semantics. I do use stuff like that, nFoos/numFoos for the number of Foos, max and min as prefixes. But I don't think that is what most people associate with hungarian now. Most people think it is p for pointer, m for member, s static, i integer, f float, etc. It might be marginally okay if everyone used the same hungarian, but everyone uses a different one so it's not. Like in perl it's built into the language kind of, you can use $ for scalar, @ for lists or associative arrays, I forget. But anyway it's standard, so it sort of works. I think your example is a fine use of a naming convention, but most people wouldn't call it hungarian. I think within each _individual_ program, it is very often necessary to come up with a consistent naming convention. However, most people associate hungarian with a system you adopt for _all_ programs that you write. Incidentally, it would be nice if typedefs in C/C++ really created new types. That is, if I say typedef double Inches; typedef double Pounds; Pounds p = 1.0; Inches i = 2.0; void f( Pounds p ) { ... } p = i; // should give an error, but doesn't f( i ); // also shouldn't work, but does I think this would let the compiler solve certain problems that you would use a naming convention for. I'm inclined to think that, if you want a new type, you should ask for one. If typedef generates a new type, it looses much of its effectiveness. Of course, if you want to take the blue pill, see Danil Thursday, June 12, 2003 > typedef double Inches; > typedef double Pounds; Indeed, that would be nice. An even stronger approach towards solving this problem is the use of unit classes. At one point there was a proposal to add unit classes to ECMAScript, but I don't think anything came of it. (Then again, I haven't read the committee procedings for a while, so it might still be there.) The idea of a unit class is to declare a class which can lexically decorate literals. For example: var distance : inches = 4 inches + 3 cm; var speed : velocity = length / 12 seconds; The type system presumably has enough information about conversions between various units to determine that adding foot-pounds to Newton-metres is OK, but will throw compile errors if you try to add inches to kiloPascals. Eric <Incidentally, it would be nice if typedefs in C/C++ really created new types. That is, if I say> I'm with Danil on this one! If you do this you have a problem. If T is "std::vector<Ty>", what is "typename T::value_type"? How many bloody types do you want?!?! I'd have to think about this a bit more, but my gut reaction is that a whole bunch of template stuff would not be half so useful it typedef introduces new types. (Note, I'm not necessarily talking about using typedef to massage the syntax into something usable, but that too.) You can do kind of what you want using templates incidentally. Template your class on random type T and instantiate multiply with different typedefs. Voila, N incompatible classes that are otherwise identical. Manual instantiation of template classes will solve your "but it all has to go in the header file and what about the code bloat" questions :) Tom Thursday, June 12, 2003 Well, clearly there are potential uses for both "create an ALIAS for an existing type" and "create a NEW TYPE from an existing type". Your example of alising ugly template type names is a good example of the former. Creating typesafe enums from of subsets of the integer type, or traditional OO class extension are good examples of the latter. I predict that as computing power continues to increase we will see more and more ability to put arbitrary constraints on types -- there is no reason why we couldn't implement languages with type constraints like var c : int( c > 0 ); c = -1; // Whoops, type system violation. Of course, such a system requires the compiler to solve the Halting Problem if you want to guarantee type safety at compile time -- but as computing power increases, we can start with check-on-write constraints and then move up to systems which compute dependency graphs to determine when type system violation checks must occur. Eric Hungarian is crap. It has been utter crap ever since C supported function prototypes and type checking arguements. Untyped languages are a different story. Hungarian is less crappy there. Clutch Cargo Thursday, June 12, 2003 At my first programming job, coding C in 1995, Hungarian was quite helpful. However, it's unnecessary now that I'm using Java in 2003, for these reasons: 1) In Java and C++, type mismatches are much less likely to shoot you in the foot than they are in C. 2) IDE's can instantly tell you the type of any variable, and for better IDE's instanctly display any type errors before you compile. 3) Better programming practices, such as shorter functions/methods and refactoring, make it easier to keep track of your variables, removing the need for Hungarian notation. Julian Friday, June 13, 2003 Well, now that I think of it, there are some things you could do with templates. I haven't fully thought of all the implications of making typedef create a new type, but I know in at least some cases it would be nice. typedef is barely better than defining a macro, instead of what its name implies -- defining a type. Another keyword would be cool. But I guess for the subsets of an integer problem, if you could just do something like: template<int lowerbound, int upperbound> class BoundedInt { // constructor, assignment operator, etc. check the bounds // if you need arithmetic, operator+ etc would have to check bounds too } But that's a lot of typing for something that's not too hard to avoid in other ways, depending on the program. Maybe for debug builds this would be useful, and then for release you could typedef it away to just an int, to eliminate the overhead. Andy Friday, June 13, 2003 One of the problems of overlaying any informal naming structure onto variables (whether strongly typed or no), is that it sets up the sin of mistaking the map for the territory. I've come across bugs where an 'h' was misplaced in that it implied it referred to a handle when in fact it semantically was not. As a handle is itself an abstraction outside of the language that was just sin on top of sin. In these kinds of areas we are poorly served by current languages. Simon Lucy Friday, June 13, 2003 "... isn't it better in the long run to educate people instead of having a policy to use this notation..." Ha!! If I could educate people to write better code, I'd be so rich Bill Gates would be my pool boy! Also, I have better things to do than proselytize (sp?)... Spaghetti Rustler Friday, June 13, 2003 The thing that turned me on to using some notation was the fact that you often need a bunch of variables grouped on a particular concept. So, it not only the issue of showing the user the data type, but in fact that using notation gave me a natural grouping of variables. I find that there is less mental effort needed to come up with variable names. More important this is natural grouping also. Lets assume we are dealing with a student table. So, if I need to open a table of students, it is likely that I also need a variable to hold the table name, and also a recordset. The result might be placed into a collection. Without some notation system I have to come up with 3 variable names that really represent the same data, or same concept. A imaginary example could be: Dim rstStudents as recordset Dim strStudents as string Dim colStudents as new collection Dim lngStudentid as long You can see, all of the 3 vars above are “grouped” together. All of the above vars are to deal with a chunk of code that has to do something with a student. This kind of grouping was the first benefit I noticed about using some notation. IMHO, the reason for Hungarian notation falling out of favor is that now with OO programming, all of the above vars would be a member of a object. So, now that “natural” group of vars in reference to “student” is not needed. You might get: Students.RecordSet Students.TableName Students.Collecton In other words, due to more OO, then there is less need as a developer to use some notation that “groups” the above set of vars together. Since the values are grouped in the object, then there is LESS benefit to using some notation. For sure some notation does still help the issue of the data type used, but again with OO you are dealing with a object that has all kinds of stuff, and thus again it makes less sense to try and “type” the object. You can type the members of the object, but it don't give you that grouping I talked about. Albert D. Kallal Edmonton, Alberta Canada kallal@msn.com Albert D. Kallal Friday, June 13, 2003 A number of variables grouped together is called a structure. You don't need any kind of notation to show that. Passater Friday, June 13, 2003 I mean even without having classes you could group variables in the old structural languages like c and pascal. I think VB programmers tend to like Hungarian because it is useful for controls, as well as for what Albert says. It does make naming controls or variables easier. Now lblStudentID is the same as StudentIDLabel and many people prefer the latter naming convention. I can't see that it makes a big difference, but using a notation to show controls and variable types (whether something is a string or an integer is useful to know in VB) seems a good idea for amateur (and maybe not so amateur) programmers. Stephen Jones Friday, June 13, 2003 There are non-struct groupings. For example, textStudentID is the textbox, while iStudentID is the local variable which is the integer representation of the contents of the textbox. You then might use this to create the Student object which has other properties. mb Friday, June 13, 2003 >>A number of variables grouped together is called a structure. You don't need any kind of notation to show that. >> I mean even without having classes you could group variables in the old structural languages like c and Pascal. That is true. I loved working in Pascal, and that is where I got real hooked on User Defined Types (UDT). Remember, this natural grouping is only ONE of the many benefits that some notation system gives you. There are a number of limitations as to when a UDT can be used in VB. You can’t define them in VB class objects for example. Further, unlike Pascal, you can’t define UDT’s local in a sub or function. So, the amount of effort to start defining a new user types every time that you need simple recordset and table name gets to be a bit of work. You now have to come up with a “name” for the UDT type, and THEN also have to define this var “group” using that type name. I was trying to save mental effort here!. There are also some issues of scope of the UDT also. I mean, do you have to define the UDT as global (vb = yes you must)? And, you also often will have to pass individual members of the UDT to other routines anyway. Often that natural grouping I talked about is not worth the extra effort to define a global UDT. Unfortunately, the lack of flexibility in UDT’s is a true weak spot in the VB language. However, since it is a weak spot, this just again favors using some notation system to make up for this fact. Obviously many people are finding that with .net some notation is not worth the extra effort. So, this is kind of a horse for the course kind of thing. It seems that some programming styles and languages seem to benefit more from notation then other languages. So, if I just need two vars in a routine (one for a recordset and a table name) then VB does not sit well from the effort to define a new type for this. However, that is not such a bad idea here either! I am actually open to suggestions on this concept because of my Pascal roots. No question that creating a UDT does give a natural grouping “concept” that I talked about. With OO so much of the conceptual grouping occurs by methods of objects. Another important issue here is that new languages are less strongly typed then they used to be. (casting of data types is rather automatic in VB as compared to Pascal for example). With OO this is even more so as code must handle different types of data objects passed. Hence, modern languages care less about data types now. There is also a bit of a style issue going on here. Some developers don’t care, or bother to use some type of notation. For me, VB is certainly one of those languages that seems to really benefit from some notation. Sure....Peoples mileage will vary on this issue... Albert D. Kallal Edmonton, Alberta Canada kallal@msn.com Albert D. Kallal Saturday, June 14, 2003 Recent Topics Fog Creek Home
http://discuss.fogcreek.com/joelonsoftware2/default.asp?cmd=show&ixPost=50197&ixReplies=31
CC-MAIN-2017-13
en
refinedweb
Almost all uses of the bucket-based hash table have been removed save 3. We could numHashTableImplementations-- if these last 3 were finished off. Whoa, JSAtomList and JSHashTable are rather intertwined. That one is non-trivial. (In reply to comment #1) > Whoa, JSAtomList and JSHashTable are rather intertwined. That one is > non-trivial. No kidding! The JSAtomList starts off as a linked list of JSHashEntry (which is effectively just a generic linked list node). Once the list hits critical mass the JSAtomList table-ifies with explicitly managed stable ordering of the entries during the transition. JSAtomList's implementation manages the hash table's guts to be a multimap with special properties: a lookup that hits causes a node to be moved to the front of the hash chain; hoisted definition nodes are added at the end of the hash chain; any number of shadowed definitions can coexist in a single hash bucket. I'm mulling over a few solutions. Wow. Well, if you are going break an abstraction, might as well break the hell out of it! billm's taking care of the scripFilenameTable use in bug 661903 and jorendorff is self-hosting sharp var support in bug 486643. There's one more minor usage in traceviz that follows the same form as the script filename table. When billm's patch lands we can generalize a CStringHasher. Now we wait. Created attachment 547230 [details] [diff] [review] WIP: rm jshash.h Decided to see if I could remove jshash in the engine. Only took about an hour. JS binary goes down by ~20K. However, there are still some uses in jsd! That's for another day. Created attachment 643259 [details] [diff] [review] Sequester jshash.{h,cpp} in js/jsd/. The removal of sharps made this a lot easier. Although js/src/ barely uses jshash.{h,cpp}, js/jsd/ still does. So I just moved those files into js/jsd/. (I'm not sure if that'll show up in Bugzilla's nicely-formatted patch viewers.) I figure this is a win because I have a notion that js/jsd/ is headed for the scrap-heap -- is that right? And even if it's not, this greatly reduces the visibility of those files. A few things from those moved files were used in js/src/ and js/xpconnect/src/. Here's what I did with them: - I replaced JSHashNumber with js::HashNumber; they're both typedefs of uint32_t. - I moved JS_HashString, which has a single use, into jsstr.{h,cpp}. - I inlined the single use of JS_GOLDEN_RATIO. - And I removed ATOM_HASH because it is dead code. Comment on attachment 643259 [details] [diff] [review] Sequester jshash.{h,cpp} in js/jsd/. Review of attachment 643259 [details] [diff] [review]: ----------------------------------------------------------------- Sweet! I agree with your reasoning on it to jsd. ::: js/src/jsatom.h @@ +100,5 @@ > #endif > + > + static const js::HashNumber goldenRatio = 0x9E3779B9U; > + > + return n * goldenRatio; Even better, I think you could replace the body with: return HashGeneric((void *)JSID_BITS(id)); from mozilla/HashFunctions.h. (This (void *) wouldn't be necessary if you added a AddToHash(uint32_t hash, uint64_t value) overload to HashFunctions.h.) ::: js/src/jsstr.h @@ +18,5 @@ > #include "vm/Unicode.h" > > +/* General-purpose C string hash function. */ > +extern JS_PUBLIC_API(js::HashNumber) > +JS_HashString(const void *key); I think you kept this around for the 1 use in ScriptFilenameHasher. For that, you could use HashString in mozilla/HashFunctions.h instead. With that change, I think you could kill JS_HashString and make it a non-public extern in jsd. Sorry, but this caused Windows bustage, so I had to back it out. e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(68) : error C2491: 'JS_NewHashTable' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(106) : error C2491: 'JS_HashTableDestroy' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(138) : error C2491: 'JS_HashTableRawLookup' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(218) : error C2491: 'JS_HashTableRawAdd' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(248) : error C2491: 'JS_HashTableAdd' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(270) : error C2491: 'JS_HashTableRawRemove' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(288) : error C2491: 'JS_HashTableRemove' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(304) : error C2491: 'JS_HashTableLookup' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(323) : error C2491: 'JS_HashTableEnumerateEntries' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(418) : error C2491: 'JS_HashTableDump' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(430) : error C2491: 'JS_HashString' : definition of dllimport function not allowed e:/builds/moz2_slave/m-in-w32/build/js/jsd/jshash.cpp(442) : error C2491: 'JS_CompareValues' : definition of dllimport function not allowed \o/
https://bugzilla.mozilla.org/show_bug.cgi?id=647367
CC-MAIN-2017-13
en
refinedweb
lp(7) lp(7) Series 800 Only NAME lp - line printer SYNOPSIS #include <<<<sys/lprio.h>>>> Remarks: This manual entry applies only to a certain group of printers. For Series 800 systems, it applies to printers controlled by the device driver lpr2. It does not apply to any printers on Series 700 systems. DESCRIPTION This section describes capabilities provided by many line printers supported by various versions of the HP-UX operating system. A line printer is a character special device that may optionally have an interpretation applied to the data. If the character special device file has been created with the raw option (see the HP-UX System Administrator manuals for information about creating device files with the raw option), data is sent to the printer in raw mode (as, for example, when handling a graphics printing operation). In raw mode, no interpretation is done on the data to be printed, and no page formatting is performed. Data bytes are simply sent to the printer and printed exactly as received. If the device file does not contain the raw option, data can still be sent to the printer in raw mode. Raw mode is set and cleared by the LPRSET request. If the line printer device file does not contain the raw option, data is interpreted according to rules discussed below. The driver understands the concept of a printer page in that it has a page length (in lines), line length (in characters), and offset from the left margin (in characters). The default line length, indent, lines per page, open and close page eject, and handling of backspace are set to defaults determined when the printer is opened and recognized by the system the first time. If the printer is not recognized, the default line length is 132 characters, indent is 4 characters, lines per page is 66, one page is ejected on close and none on open, and backspace is handled for a character printer. The following rules describe the interpretation of the data stream: + A form feed causes a page eject and resets the line counter to zero. + Multiple consecutive form-feeds are treated as a single form- feed. + The new-line character is mapped into a carriage-return/line- feed sequence, and if an offset is specified a number of Hewlett-Packard Company - 1 - HP-UX Release 11i: November 2000 lp(7) lp(7) Series 800 Only blanks are inserted after the carriage-return/line-feed sequence. + A new-line that extends over the end of a page is turned into a form-feed. + Tab characters are expanded into the appropriate number of blanks (tab stops are assumed to occur every eight character positions as offset by the current indent value). + Backspaces are interpreted to yield the appropriate overstrike either for a character printer or a line printer. + Lines longer than the line length minus the indent (i.e., 128 characters, using the above defaults) are truncated. + Carriage-return characters cause the line to be overstruck. + When it is opened or closed, a suitable number of page ejects is generated. Two ioctl(2) requests are available to control the lines per page, characters per line, indent, handling of backspaces, and number of pages to be ejected at open and close times. At either open or close time, if no page eject is requested the paper will not be moved. For opens, line and page counting will start assuming a top-of-form condition. The ioctl requests have the following form: #include <<<<sys/lprio.h>>>> int ioctl(int fildes, int request, struct lprio *arg); The possible values of request are: LPRGET Get the current printer status information and store in the lprio structure to which arg points. LPRSET Set the current printer status information from the structure to which arg points. The lprio structure used in the LPRGET and LPRSET requests is defined in <sys/lprio.h>, and includes the following members: short int ind; /* indent */ short int col; /* columns per page */ short int line; /* lines per page */ short int bksp; /* backspace handling flag */ short int open_ej; /* pages to eject on open */ short int close_ej; /* pages to eject on close */ Hewlett-Packard Company - 2 - HP-UX Release 11i: November 2000 lp(7) lp(7) Series 800 Only short int raw_mode; /* raw mode flag */ These are remembered across opens, so the indent, page width, and page length can be set with an external program. If the col field is set to zero, the defaults are restored at the next open. If the backspace handling flag is 0, a character printer is assumed and backspaces are passed through the driver unchanged. If the flag is a 1, a line printer is assumed, and sufficient print operations are generated to generate the appropriate overstruck characters. If the raw mode flag is 0, data sent to the printer is formatted according to indent, columns per page, lines per page, backspace handling, and pages to eject on open and close. If the raw mode flag is 1, data sent to the printer is not formatted. If the raw mode flag is changed from 1 to 0 (raw mode is turned off) and the format settings (indent, columns per page, etc.) have not been modified, the data is formatted according to the prior format settings. AUTHOR lp was developed by HP and AT&T. FILES /dev/lp default or standard printer used by some HP-UX commands; /dev/[r]lp* special files for printers SEE ALSO lp(1), slp(1), ioctl(2), cent(7), intro(7). Hewlett-Packard Company - 3 - HP-UX Release 11i: November 2000
http://modman.unixdev.net/?sektion=7&page=lp&manpath=HP-UX-11.11
CC-MAIN-2017-13
en
refinedweb
I'm running on a Backtrack 4 Pre-Final hard drive install and upgraded to ubuntu 9.04 The only problem I'm having is when I run: I recieve the following error:I recieve the following error:Code: sslstrip -a -k -f Any thoughts?Any thoughts?Code: Traceback (most recent call last): File "/usr/bin/sslstrip", line 33, in <module> from sslstrip.DataShuffler import DataShuffler ImportError: No module named sslstrip.DataShuffler
http://www.backtrack-linux.org/forums/printthread.php?t=22449&pp=10&page=1
CC-MAIN-2017-13
en
refinedweb
On Tue, 21.12.10 12:05, Scott James Remnant (scott@netsplit.com) wrote:> > PID namespaces primarily provide an independent PID numbering scheme for> > a subset of processes, i.e. so that identical may PIDs refer to different> > processes depending on the namespace they are running in. As a side> > effect this also provides init-like behaviour for processes that aren't> > the original PID 1 of the operating system. For systemd we are only> > interested in this side effect, but are not interested at all in the> > renumbering of processes, and in fact would even really dislike if it> > happened. That's why PR_SET_ANCHOR is useful: it gives us init-like> > behaviour without renaming all processes.> >> Right, but I don't get why you need this behavior to supervise either> system or user processes. You already get all the functionality you> need to track processes via either cgroups or the proc connector (or a> combination of both).Well, we want a clean way to get access to the full siginfo_t of theSIGCHLD for the main process of a service. the proc connector is awfuland cgroups does not pass siginfo_t's back to userspace, hence thecleanest way to get this done properly and beautifully is to make thesession systemd a mini-init via PR_SET_ANCHOR, because then the per-usersystemd's and the per-system systemd can use the exact same code tohandle process managment.> So is this really just about making ps look pretty, as Kay says?That's a side effect, but for me it's mostly about getting a simple wayto get the SIGCHLDs, focussed on the children of the session manager andwith minimal wakeups.Lennart-- Lennart Poettering - Red Hat, Inc.
http://lkml.org/lkml/2010/12/23/189
CC-MAIN-2017-13
en
refinedweb
Flex Builder 3 introduces an agile approach for integrating Flex clients with Web Services, the Web Service Introspection Wizard (WSIW). The wizard lets you specify a service WSDL and auto generates the AS stubs for the service. Zee Yang wrote a great tutorial about this subject demonstrating this tool. Looking at his example, you can understand how simple consuming WS with Flex has become. The wizard creates a strongly typed stub for each service, and each operation is called by using a specific method. once the call is back you need to listen to a specific event in order to tell whether the call result was successful or not. As you can see, if you are developing large enterprise SOA applications like I do, you have to write lots of code in order to cover each service. In my daily work I use Flex WS enabled clients, I also use Cairngorm as a framework for my code. Following are several of the benefits of using Caringrom together with the WSIW: - You don't need to repeat your code - for instance, you can use a single error handler for all ws operations. - Your code is layered and ordered, for each tasks there's a certain type of class operating on different code layer - One example can be the commands that handle the application flow. - Your code is modular - in the future you may replace SOAP WS with a different integration layer (like REST), using Caringrom this task is simple. To demonstrate the benefits of this methodology, I included an example. Click on the following link here to find the source code. The sample project was kept short and simple for your convenience, I left out many things from the source, for instance, in my daily work I use delegates in order to encapsulate service calls handling. Also note that this specific sample can only be uses with Flash Player 9. The service I'm using is The Global Weather Service, it lets you get the weather in different cities all over the world. The service has 2 operations, one lets you get a list of cities the service covers fr a given country, the other retrieves the weather forecast for a given city. 1. Import the WSDL The first thing you need to do in order to consume the WS is to generate local stubs from it's WSDL. Go to Data > Import Web Service (WSDL) > choose the src folder of your project and press next > Type the WSDL you re about to import (in my case it's) and press "finish" 2. Register the service The local stubs were generated in a default location. Take a look at the BaseGlobalWeather class, in this sample this is the only class that is referenced. The next step would be to make the class available via the service locater. create a business\Service.mxml document and use the ServiceLocator in order to register the new imported service: xml <!-- declare the global weather service, provide the base service class --> <globalWeatherService:BaseGlobalWeather </cairngorm:ServiceLocator> The globalWeatherService will now be available via the ServiceLocator. 3. Create the commands For each WS operation you will need to create a separate command, use a Base command as the base class for the commands. The tasks the base command needs to perform are: - Declare and set the service and model. - Create global onResult and onFault handlers for the service operations 1 package sample.command 2 { 13 14 public class BaseGlobalWeatherCommand implements Responder, Command 15 { 16 protected var service:BaseGlobalWeather 17 protected var model:GlobalWeatherModelLocator; 18 19 public function BaseGlobalWeatherCommand() 20 { 21 //get the ws instance 22 service = BaseGlobalWeather(ServiceLocator.getInstance() 23 .getService("globalWeatherService")); 24 model = GlobalWeatherModelLocator.getInstance(); 25 } 26 33 /** 34 * Generic on result, all results eventualy goes here 35 */ 36 public function onResult(event:*=null):void 37 { 38 //the service returned a response, update the model 39 model.workflowstate = -1; 40 } 41 42 /** 43 * Generic fault handler all fault responses goes here 44 */ 45 public function onFault(event:*=null):void 46 { 47 //the service returned a fault response, update the model and wanr the user 48 model.workflowstate = -1; 49 Alert.show("An error has accourd:"+event.toString()); 50 } 51 } 52 } Next, write a command per each WS operation, here are a number of lines from the GetCitiesByCountryCommand: 1 package sample.command 2 { 10 public class GetCitiesByCountryCommand extends BaseGlobalWeatherCommand 11 { 12 public override function execute(event:CairngormEvent):void 13 { 14 var e:GetCitiesByCountryEvent = GetCitiesByCountryEvent(event); 15 var token:AsyncToken = service.getCitiesByCountry(e.countryName); 16 token.addEventListener("result",onResult); 17 token.addEventListener("fault",onFault); 18 model.workflowstate = GlobalWeatherModelLocator.WAITING_WAITING_FOR_SERVICE_RESPONSE; 19 } 20 21 public override function onResult(event:*=null):void 22 { 23 //handle specific command taks here 24 model.cities = event.result; 25 super.onResult(event); 26 } 27 } 28 } Notice how my onResult method handles the results in the context of the specific command (line 24), it sets the result of the operation to the relevant member of the model (cities) it then moves on and let the base class handle the results globally, the base class will notify the rest of the application that the application state has changed(line 39 of BaseGloblWeatherCommand): 38 //the service returned a response, update the model 39 model.workflowstate = -1;39 model.workflowstate = -1; As you can see, by using the commands and the WSIW, you can write applications that talk to web services and maintain a simple and efficient architecture. 10 comments: Very nice. Thank you. Hi there, Thanks so much for this article , but i have a little question for all of you ppl Have you ever experienced problems applying this?. I have problems with ambigous references between Cairngorm and the Flex Framework in mx.rpc classes like WSDLOperation and others Hi Drazz, I don't recall having such problems. Is this happening with the code from this post or is it happening with a different project? feel free to link or post you code here so I can take a look. explain how data can be retrieve from webservice and store it in datagrid using cairngorm...... i am getting an error type in webservice project using cairngorm plz explain [Data binding will not be able to detect assignments to "DischargeArray".] @ashfak, sorry for not replying to your earlier question, I guess you already figured that out by yourself. Regarding the message you now get, this looks like a warning, not an error message. Flex is simply telling you it will not detect binding, it's most likely that you are missing the [Bindable] annotation above the declaration for "DischargeArray" [Bindable] public var DischargeArray:Array Thanks a lot Lior, Earlierone i got it. hi Lior another one question how to use if and else condition in dropdown list to choose one item and get data from webservice related to that Great article. Have you an update for Flash Builder 4.5.1 Data Service Wizard - or a link to doucumentation? The generated code is quite different from FB3 Data Wizard. @reggolb thanks, I did not switch to 4.5.1 yet but once I get a chance I might write an update to this tutorial. I'm happy to hear this feature was updated, there were many issues with previous versions.
http://www.flexonjava.net/2008/12/integrating-cairngorm-with-fb3-data.html
CC-MAIN-2017-13
en
refinedweb
In a prior post, Jorge Morales described a number of techniques for how one could reduce build times for a Java based application when using OpenShift. Since then there have been a number of releases of the upstream OpenShift Origin project. In the 1.1.2 release of Origin a new feature was added to builds called an Image Source, which can also be useful in helping to reduce build times by offloading repetitive build steps to a separate build process. This mechanism can for example be used to pre build assets which wouldn’t change often, and then have them automatically made available within the application image when it is being built. To illustrate how this works, I am going to use an example from the Python world, using some experimental S2I builders for Python I have been working on. I will be using the All-In-One VM we make available for running OpenShift Origin on your laptop or desktop PC. Deploying a Python CMS The example I am going to start with is the deployment of a CMS system called Wagtail. This web application is implemented using the popular Django web framework for Python. Normally Wagtail would require a database to be configured for storage of data. As I am more concerned with the build process here rather than seeing the site running, I am going to skip the database setup for now. To create the initial deployment for our Wagtail CMS site, we need to create a project, import the Docker image for the S2I builder I am going to use and then create the actual application. $ oc new-project image-source Now using project "image-source" on server "". You can add applications to this project with the 'new-app' command. For example, try: $ oc new-app centos/ruby-22-centos7~ to build a new hello-world application in Ruby. $ oc import-image grahamdumpleton/warp0-debian8-python27 --confirm The import completed successfully. Name: warp0-debian8-python27 Created: Less than a second ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2016-03-02T00:14:37Z Docker Pull Spec: 172.30.118.161:5000/image-source/warp0-debian8-python27 Tag Spec Created PullSpec Image latest grahamdumpleton/warp0-debian8-python27 Less than a second ago grahamdumpleton/warp0-debian8-python27@sha256:ae947cc679d2c1... <same> $ oc new-app warp0-debian8-python27~ -->-demo-site:latest" * This image will be deployed in deployment config "wagtail-demo-site" * Port 8080/tcp will be load balanced by service "wagtail-demo-site" * Other containers can access this service through the hostname "wagtail-demo-site" --> Creating resources with label app=wagtail-demo-site ... imagestream "wagtail-demo-site" created buildconfig "wagtail-demo-site" created deploymentconfig "wagtail-demo-site" created service "wagtail-demo-site" created --> Success Build scheduled for "wagtail-demo-site", use 'oc logs' to track its progress. Run 'oc status' to view your app. The initial build and deployment of the Wagtail site will take a little while for a few reasons. The first is that because we didn’t already have the S2I builder loaded into our OpenShift cluster, it needs to download it from the Docker Hub registry where it resides. Because I live down in Australia where our Internet is only marginally better than using two tin cans joined by a piece of wet string, this can take some time. The next most time consuming part of the process is one which actually needs to be run every time we do a build. That is that we need to download all the Python packages that the Wagtail CMS application requires. This includes Wagtail itself, Django, as well as database clients, image manipulation software and so on. Many of the packages it requires are pure Python code and so it is just a matter of downloading the Python code and installing it. In other cases, such as with the database client and image manipulation software, it contains C extension modules which need to be first compiled into a dynamically loadable object library. The delay points are therefore the time taken to download the packages from the Python package index, followed by actually code compilation times. A final source of an extra delay for the initial deploy is the pushing up of the image to the nodes in the OpenShift cluster so that the application can then be started. This takes a little bit of extra time on the first deploy as all the layers of the base image for this S2I builder will not be present on each node. Subsequent deploys will not see this delay unless the S2I builder image itself were updated. When finally done, for me down here in this Internet deprived land we call OZ, that takes a total time of just under 15 minutes. This included around 5 minutes to pull down the S2I builder the first time and about 5 minutes to push the final image out to the OpenShift nodes the first time. The actual build of the Wagtail application itself, consisting of the pulling down and compilation of the required Python packages, therefore took about 5 minutes. Because we are using an S2I builder, which downloads the application code from the Git repository, and downloads any Python packages, compiling and installing them, all in one step, we have no way of speeding things up by using separate layers in Docker. Well we could, but it would mean needing to create a custom version of the S2I builder which had preinstalled into the base image all the Python packages we required. Although technically possible, this would not be the preferred option. Using a Python Wheelhouse If we were using Docker directly, an alternative one can use with Python is what is called a Wheelhouse. What this entails is downloading and pre building all the Python packages we require to produce what are called Python wheels. These are stored in a directory called a ‘wheelhouse’. When we now go to build our Python application, when installing all the packages we want, we would point the Python ‘pip’ program used to install the packages at our directory of wheels we pre built for the packages. What ‘pip’ will then do is that rather than download the packages and build them again, it will use our pre built wheels instead. We are therefore able to skip all that time taken to download and compile everything, resulting in a reduction of the time taken to build the Docker image. Integrating the use of a wheelhouse directory into a build process when using Docker directly can be quite fiddly and involves a number of steps. Using the capabilities of OpenShift, we can however make that a very simple process. All we need is an S2I builder for Python which is setup to be able to use a wheelhouse directory, as well as a way of constructing the wheelhouse directory in the first place. Having that, we can then use the ‘Image Source’ feature of OpenShift to combine the two. As it happens the S2I builder I have been using here has both these capabilities, so lets see how that can work. So we already have our Wagtail CMS application running with the name ‘wagtail-demo-site’. The next step is to create that wheelhouse. To do this we are going to use oc new-build with the same S2I builder and Git repository as we used before, but we are going to set an environment variable to have the S2I builder create a wheelhouse instead of preparing the image for our application. $ oc new-build warp0-debian8-python27~ --env WARPDRIVE_BUILD_TARGET=wheelhouse --name wagtail-wheelhouse -->-wheelhouse:latest" --> Creating resources with label build=wagtail-wheelhouse ... imagestream "wagtail-wheelhouse" created buildconfig "wagtail-wheelhouse" created --> Success Build configuration "wagtail-wheelhouse" created and build triggered. Run 'oc logs -f bc/wagtail-wheelhouse' to stream the build progress. Since we have already downloaded the S2I builder when initially deploying the application, and because we aren’t deploying anything, just building an image, this should take about 5 minutes. This is equivalent to what we saw for installing the packages as part of the application build. Right now the wheelhouse build and the application build are separate. The next step is to link these together so that the application build can use the by products of what is created by the wheelhouse build. To do this we are going to edit the build configuration for the application. To see the current build configuration from the command line, you can run oc get bc wagtail-demo-site -o yaml. We are only going to be concerned with a part of that configuration, so I am only quoting the source and strategy sections. source: git: uri: secrets: [] type: Git strategy: sourceStrategy: from: kind: ImageStreamTag name: warp0-debian8-python27:latest namespace: image-source type: Source The main change we are going to make is to enable the Image Source feature. To do this we are going to change the source section. This can be done using oc edit bc wagtail-demo-site. We are going to change the section to read: source: git: uri: images: - from: kind: ImageStreamTag name: wagtail-wheelhouse:latest namespace: image-source paths: - destinationDir: .warpdrive/wheelhouse sourcePath: /opt/warpdrive/.warpdrive/wheelhouse/. secrets: [] type: Git What we have added is the images sub section. Here we have linked the application image to our wheelhouse image called wagtail-wheelhouse. We have also under paths described where the pre built files are located that we want to have copied from the wheelhouse image into our application image. These being in the directory /opt/warpdrive/.warpdrive/wheelhouse/. and that we want them copied into the directory .warpdrive/wheelhouse relative to our application code directory. A second change we make, although this is actually optional, is that since we have pre-built all the packages we know are needed by ‘pip’, it need not actually bother checking with the Python Package Index (PyPi) at all. We can therefore say to trust that the package versions in the wheelhouse are exactly what we need. This we can do by setting an environment variable in the sourceStrategy sub section. strategy: sourceStrategy: env: - name: WARPDRIVE_PIP_NO_INDEX value: "1" from: kind: ImageStreamTag name: warp0-debian8-python27:latest namespace: image-source type: Source Having made these changes we can now trigger a rebuild and see whether things have improved. Tracking build times As to tracking building times, the best visual way of doing that is by using the build view in the web interface of OpenShift. Using this, what we find as a our end result is the following. Ignoring our initial build, which as explained will take longer due to needing to first download the S2I builder and distribute it to nodes, our build time for the application turned out to be a bit under 5 minutes. We would have expected this built time to always be about that for every application code change we made, even though we hadn’t changed what packages needed to be installed. When we introduced the wheelhouse image and linked our application build to it so that the pre built packages could be used, the build time for the application has now dropped down to about a minute and a half. Hardly enough time to go get a fresh cup of coffee. Wheelhouse build time We have successfully managed to offload the more time consuming parts of the application image build off to the wheelhouse image. Because the wheelhouse is only concerned with pre building any required Python packages it doesn’t need to be rebuilt every time an application change is made. You only need to trigger a rebuild of it when you want to change what packages are to be built, or what versions of the packages. Having to rebuild the wheelhouse would therefore generally be a rare event. Even so, there is actually a way we can reduce how long it takes to be rebuilt as well. This is by using an optional feature of S2I builds called incremental builds. With support for incremental builds already implemented in the special S2I builder for Python I am using, to enable incremental builds all we need to do is edit the build configuration for the wheelhouse and enable it. In this case we are going to amend the sourceStrategy sub section and add the incremental setting and give it the value true. strategy: sourceStrategy: env: - name: WARPDRIVE_BUILD_TARGET value: wheelhouse from: kind: ImageStreamTag name: warp0-debian8-python27:latest namespace: image-source incremental: true type: Source By doing this, what will now happen is that when the wheelhouse is being rebuilt, a copy of the ‘wheelhouse’ directory of the prior build will first be copied over from the prior version of the wheelhouse image. Similar with how the application build time was sped up, ‘pip’ will realise that it already has pre-built versions of the packages it is interested in and skip rebuilding them. It would only need to go out and download a package if it was a new package that had been added, or the version required had been changed. The end result is that by using both the Image Source feature of builds and the incremental builds, we have not only reduced how long it takes to build our application image, we have reduced how long it would take to rebuild our wheelhouse image that contains our pre-built packages. Experimental S2I builder As indicated above, this has all been done using an experimental S2I Python builder, it is not the default S2I Python builder that comes with OpenShift. The main point of this post hasn’t been to promote this experimental builder, but to highlight the Image Source feature of builds in OpenShift and provide an example of how it might be used. The experimental builder only exists at this point as a means for me personally to experiment with better ways of handling Python builds with OpenShift. What I learn from this is being fed back to the OpenShift developers so they can determine what direction the default S2I Python builder will take. If you are interested in the experiments I am doing with my own S2I Python builder, and how that can fit into a broader system for making Python web application deployments easier, I would suggest keeping an eye on my personal blog site. I have recently written two blogs posts about some of my work that may be of interest. - Building a better user experience for deploying Python web applications. - Speeding up Docker build times for Python applications. You can drop me any comments if you have feedback about that separate project via Twitter (@GrahamDumpleton).
https://blog.openshift.com/using-image-source-reduce-build-times/
CC-MAIN-2017-13
en
refinedweb
Catalyst::Plugin::Navigation - A navigation plugin for Catalyst use Catalyst(qw/ Navigation /); # When navigation needed. my $menu = $c->navigation->get_navigation($c, {level => 0}); ... # When defining an action. sub new_action : Local Menu('Menu Title') MenuTitle('Menu Mouse Over Title') MenuParent('#Menu Parent') MenuArgs('$stash_arg') { # Do action items. ... } The Catalyst::Plugin::Navigation plugin provides a way to define the navigation elements and hierarchy within the Catalyst controllers. By defining the menu structure from the controller attributes a controller can then ask for any level of menu and be presented with the current chain to the active page as well as all other visable menus from the hirearchy. Instead of having to define the menu structure and navigation elements and links in an external source this can be done from the infomation available from the controllers themselves. When using the Catalyst::Plugin::Navigation plugin the following methods are added to the base Catalyst object. Returns the CatalystX::NavigationMenu object that relates to the existing menu structure defined through the controller attributes. See the L(CatalystX::MenuNavigation) man page for more details. The following attributes are understood by the Catalyst::Plugin::Navigation plugin. The Menu() attribute is the only required attribute. Without this attribute the action element will not be included in the navigation tree. This provides the label to be used for the menu link. This is the actual text of the href. This item is required in order to have the action appear in the menu navigation. Provides the path to the parent item. If the parent doesn't exist then it will be created in the tree structure so that the child can be accessed even if the parent is never defined. For more information on the Path value that can be passed see the PATHS section below. Provides informaiton on what to use to populate arguments and URI placeholders for the current action. If the current action is chained or requires arguments then these are used to populate the URI accordingly. The arguments are passed in the order they appear in the attribute list. More than one MenuArgs attribute can be attached to a single action. If the argument is preceeded by a $ symbol then the name of the argument is pulled from the stash variable. Otherwise the argument is included as plain text. For example the following entry MenuArgs('$stash_value') will call out and get the stash value for the keyword 'stash_value' ($c->stash->{'stash_value'}). URL arguments are also handled with the MenuArgs() attribute. These are defined by preceeding the argument with the @ symbol. The same rules above apply, so the argument @$var will use the var value from the stash as a URL argument and @var will use the literal string var as the URL argument. In order for the menu item to be included in the navigation display the condition provided must evaluate to a true value. The argument ('Cond') value passed in is evaluated in an eval, allowing complex conditions to be executed. More than one condition can be passed as an attribute, in which case all conditions must evaluate to true. Defines the order in which the menu elements shoudl be displayed. If you would like your menu items to show up in a particular order you can define that order using the MenuOrder attribute. In the event that more than one action has the same order value then they are sorted alphabetically by their Menu label value. If you are using the Authentication::Roles plugin then you can define which roles must be provided in order to display the given action in the navigation tree. If more than one MenuRoles are included in the attributes list all those roles must be found. If you want to show the menu item depending on one of several roles then you can separate those roles with a | character. So the following attribute: MenuRoles('role1|role2|role3') will allow the action to be included in the navigation tree if the logged in user has a role of either role1, role2 or role3. Provides the value to use for the title attribute of the a link. The Catalyst::Pugin::Navigation plugin defines the navigation menu structure using a path system. This allows you to define a complex path to reach a particular action. There are a few ways to define path elements. In most instances you will just want to use the path to the controller item as the path to an action (ie; controller_name/action_name). In some instances you may want to provide a parent that is just a place holder for a label. In this case you can prepend the path value with a # symbol. This is used to define a label instead of an action. If you provide a path of #Parent#Child/controller/entry, then the current action will be found in this chain: #Parent #Child /controller/entry new_entry When items are added into the navigation tree their defined by their namespace and action name. This defines the private path to the entry. So future elements can be added into the tree under an item by referring to the path of the entry it shoudl appear under. There should be no need to include multiple path details in the MenuPath variable unless you are defining Labels to be used. A Label can occur anywhere in the navigation entry. So both of these paths are valid: #Label/path/to/action or /path/to/action/#Label. CatalystX::NavigationMenu, CatalystX::NavigationMenuItem Derek Wueppelmann <derek@roaringpenguin.com> This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~dwueppel/Catalyst-Plugin-Navigation-1.002/lib/Catalyst/Plugin/Navigation.pm
CC-MAIN-2017-13
en
refinedweb
Dynamic specifcations, was deprecated, (N3051). A new languge feature, noexcept, was introduced to describe the important case of knowing when a function could guarantee to not throw any exceptions. Looking ahead to C++17, there is a desire to incorporate exception specifications into the type system, N4533. This solves a number of awkward corners that arise from exception specifications not being part of the type of a function, but still does nothing for the case of deprecated dynamic exception specifications, so the actual language does not get simpler, and we must still document and teach the awkwards corners that arise from dynamic exception specifications outside the type system. The recommendation of this paper is to remove dynamic exception specifications from the language. However, the syntax of the throw() specification should be retained, but no longer as a dyanmic. The wording changes in this initial proposal are deliberately minimal in an effort to ensure the least risk of accidental change. However, rather than using the language of sets of compatible exception specifications (where there are now only two such sets, the empty set and the set of all types) it would be possible to write a simpler, clearer form describing instead functions that permit exceptions, and functions that permit no exceptions. While such a specification would be preferred, it is also beyond the drafting skills of the proposal author. The redrafting goes slightly beyond minimal by eliminating use of the pseudo-type "any". This change, while improving clarity, also avoids confusion with the standard library type any from the Library Fundamentals TS in the event that it becomes part of a future standard library. languge model (exceptions are key to understanding constructors, destructors, and RAII) is an impediment that must be explained. It remains embarassing to explain to new developers that there are features of the language that conforming compilers are required to support, yet should never be used. Exception specifications in general occupy an awkward corner of the grammer, where they do not affect the type system, yet critically affect how a virtual function can be overridden, or which functions can bind to certain function pointer variables. As noted above, N4533 would go fuction propage, unexcepted callback in a manner that is guatanteed to fail before the subsequent call to terminate. This paper proposes treating such (still deprecate) idiosynracies of this platform. One remaining task is to survey popular open source libraries and see what level of usage, if any, remains in large, easily accessible codebases. 1 The following rules describe the scope of names declared in classes.1) The potential scope of a name declared in a class consists not only of the declarative region following the name’s point of declaration, but also of all function bodies, default arguments, an exception-specification, in the brace-or-equal-initializer of a non-static data member (9.2), or in the definition of a class member outside of the definition of X, following the member's declarator-id31, shall be declared in one of the following ways: 31) That is, an unqualified name that occurs, for instance, in a type in the parameter-declaration-clause or in the exception-specification. ] 2 A class is considered a completely-defined object type (3.9) (or complete type) at the closing } of the class-specifier. Within the class member-specification, the class is regarded as complete within function bodies, default arguments, using-declarations introducing inheriting constructors (12.9), exception-specifications, and brace-or-equal-initializers for non-static data members (including such things in nested classes). Otherwise it is regarded as incomplete within its own class member-specification. 3 [ Note: A single name can denote several function members provided their types are sufficiently different (Clause13). — end note ] 4. 2. (4.7) — In a dynamic-exception-specification (15.4); the pattern is a type-id. 11 [Note: For purposes of name lookup, default arguments and exception-specifications of function templates and default arguments and exception-specifications of member functions of class templates are considered definitions(14.5). — end note ] 3. [ Note: Within a template declaration, a local class: 15 The exception-specification of a function template specialization is not instantiated along with the function declaration; it is instantiated when needed (15.4).. ] 7 exception-specification is instantiated, at which point a program is ill-formed if the substitution results in an invalid type or expression. — end note ] 1 The exception specification of a function is a (possibly empty) set of types, indicating that the function might exit via an exception that matches a handler of one of the types in the set; the (conceptual) set of all types is used to denote that the function might exit via an exception of arbitrary type. If the set is empty, the function is said to have a non-throwing exception specification. The exception specification is either defined explicitly by using an exception-specification as a suffix of a function declaration's declarator (8.3.5) or implicitly. exception-specification: (5.20) that is contextually converted to bool (Clause 4). A ( token that follows noexcept is part of the noexcept-specification and does not commence an initializer (8.5).: void f()— end example ] throw(int); // OK void (*fp)() throw (int); // OK void g(void pfa() throw(int)); // OK typedef int (*pf)() throw(int); // ill-formed. 3 The exception-specification noexcept or noexcept(constant-expression), where the constant-expression yields true, denotes an exception specification that is the empty set. The exception-specification noexcept(constant-expression), where the constant-expression yields false, or the absence of an exception-specification in a function declarator other than that for a destructor (12.4) or a deallocation function (3.7.4.2) denotes an exception specification that is the set of all types. 4 Two exception-specifications are compatible if the sets of types they denote are the same. 5 If any declaration of a function has an exception-specification that is not a noexcept-specification allowing all exceptions, all declarations, including the definition and any explicit specialization, of that function shall have a compatible exception-specification. an exception-specification may be specified, but is not required. If an exception-specification is specified in an explicit instantiation directive, it shall be compatible with the exception-specifications of other declarations of that function. A diagnostic is required only if the exception-specifications are not compatible within a single translation unit. 6 If a virtual function has an exception specification, all declarations, including the definition, of any function that overrides that virtual function in any derived class shall only allow exceptions that are allowed by the exception specification of the base class virtual function, unless the overriding function is defined as deleted. [ Example: struct B { virtual void f()The declaration of D::f is ill-formed because it allows all exceptions, whereas B::f allows throw (int, double); virtual void g(); }; struct D: B { void f(); // ill-formed void g() throw (int); // OK }; class A { /∗...∗/ }; void (*pf1)(); // no exception specification void (*pf2)()— end example ] throw(A); void f() { pf1 = pf2; // OK: pf1 is less restrictive pf2 = pf1; // error: pf2 is more restrictive } 7 In such an assignment or initialization, exception-specifications on return types and parameter types shall be compatible. In other assignments or initializations, exception-specifications shall be compatible. 8 An exception-specification can include the same type more than once and can include classes that are related by inheritance, even though doing so is redundant. [ Note: An exception-specification can also include the class std::bad_exception (18.8.2). — end note ] 9 A function is said to allow an exception of type E if its exception specification contains a type T for which a handler of type T would be a match (15.3) for an exception of type E. A function is said to allow all exceptions if its exception specification is the set of all types. 10 Whenever an exception of type E is thrown and the search for a handler (15.3) encounters the outermost block of a function with an exception specification that does not allow E, then, (10.1) — if the function definition has a dynamic-exception-specification, the function std::unexpected() is called (15.5.2), (10.2) — otherwise, the function std::terminate() is called (15.5.1). [ Example: class X { }; class Y { }; class Z: public X { }; class W { }; void f() throw (X, Y) { int n = 0; if (n) throw X(); // OK if (n) throw Z(); // also OK throw W(); // will call std::unexpected() } — end example ] [Note: A function can have multiple declarations with different non-throwing exception-specifications; for this purpose, the one on the function definition is used. — end note ] 11 An implementation shall not reject an expression merely because when executed it throws or might throw an exception that the containing function does not allow. [ Example: extern void f() throw(the call to f is well-formed even though when called, f might throw exception X, Y); void g() throw(X){ f(); // OK } 12 [ Note: An exception specification is not considered part of a function’s type; see 8.3.5. — end note ] 13 A potential exception of a given context is either a type that might be thrown as an exception or a pseudo-type, denoted by "any", that represents the situation where an exception of an arbitrary type might be thrown. A subexpression e1 of an expression e is an immediate subexpression if there is no subexpression e2 of e such that e1 is a subexpression of e2. 14 The set of potential exceptions of a function, function pointer, or member function pointer f is defined as follows: (14.1) — If the exception specification of f is the set of all types, the set consists of the pseudo-type "any". (14.2) — Otherwise, the set consists of every type in the exception specification of f.: (15.1) — If e is a function call (5.2.2): (15.1.1) — If its postfix-expression is a (possibly parenthesized) id-expression (5.1.1), class member access (5.2.5), or pointer-to-member operation (5.5) whose cast-expression is an id-expression, S is the set of potential exceptions of the entity selected by the contained id-expression (after overload resolution, if applicable). (15.1.2) — Otherwise, S contains the pseudo-type "any". (15.2) — If e implicitly invokes a function (such as an overloaded operator, an allocation function in a new-expression, or a destructor if e is a full-expression (1.9)), S is the set of potential exceptions of the function. (15.3) — if e is a throw-expression (5.17), S consists of the type of the exception object that would be initialized by the operand, if present, or the pseudo-type "any" otherwise. (15.4) — if e is a dynamic_cast expression that casts to a reference type and requires a run-time check (5.2.7), S consists of the type std::bad_cast. (15.5) — if e is a typeid expression applied to a glvalue expression whose type is a polymorphic class type (5.2.8), S consists of the type std::bad_typeid. (15.6) — if e is a new-expression with a non-constant expression in the noptr-new-declarator (5.3.4), S consists of the type std::bad_array_new_length. [ Example: Given the following declarations void f()the set of potential exceptions for some sample expressions is: throw(int); void g(); struct A { A(); }; struct B { B() noexcept; }; struct D { D() throw (double); }; (15. 7) — for f(), the set consists of int; (15. 8) — for g(), the set consists of "any"; (15. 9) — for new A, the set consists of "any"; (15. 10) — for B(), the set is empty; (15. 11) — for new D, the set consists of "any" and double. — end example ] 16 Given a member function f of some class X, where f is an inheriting constructor (12.9) or an implicitly-declared special member function, the set of potential exceptions of the implicitly-declared member function f consists of all the members from the following sets: (16.1) — if f is a constructor, (16.1.1) — the sets of potential exceptions of the constructor invocations (16.1.1.1) — for X's non-variant non-static data members, (16.1.1.2) — for X's direct base classes, and (16.1.1.3) — if X is non-abstract (10.4), for X's virtual base classes, (including default argument expressions used in such invocations) as selected by overload resolution for the implicit definition of f (12.1). [ Note: Even though destructors for fully-constructed subobjects are invoked when an exception is thrown during the execution of a constructor (15.2), their exception specifications do not contribute to the exception specification of the constructor, because an exception thrown from such a destructor could never escape the constructor (15.1, 15.5.1). — end note] (16.1.2) — the sets of potential exceptions of the initialization of non-static data members from brace-or-equal-initializers that are not ignored (12.6.2); (16.2) — if f is an assignment operator, the sets of potential exceptions of the assignment operator invocations for X's non-variant non-static data members and for X's direct base classes (including default argument expressions used in such invocations), as selected by overload resolution for the implicit definition of f (12.8); (16.3) — if f is a destructor, the sets of potential exceptions of the destructor invocations for X's non-variant non-static data members and for X's virtual and direct base classes. 17 An inheriting constructor (12.9) and an implicitly-declared special member function (Clause 12) are considered to have an implicit exception specification, as follows, where S is the set of potential exceptions of the implicitly-declared member function: (17.1) — if S contains the pseudo-type "any", the implicit exception specification is the set of all types; (17.2) — otherwise, the implicit exception specification contains all the types in S. [ Note: An instantiation of an inheriting constructor template has an implied exception specification as if it were a non-template inheriting constructor. — end note ] [ Example: ] 18 A deallocation function (3.7.4.2) with no explicit exception-specification has an exception specification that is the empty set. 19 An exception-specification is considered to be needed when: (19.1) — in an expression, the function is the unique lookup result or the selected member of a set of overloaded functions (3.4, 13.3, 13.4); (19.2) — the function is odr-used (3.2) or, if it appears in an unevaluated operand, would be odr-used if the expression were potentially-evaluated; (19.3) — the exception-specification is compared to that of another declaration (e.g., an explicit specialization or an overriding virtual function); (19.4) — the function is defined; or (19.5) —.The exception-specification of a defaulted special member function is evaluated as described above only when needed; similarly, the exception-specification of a specialization of a function template or member function of a class template is instantiated only when needed. 20 In a dynamic-exception-specification, a type-id followed by an ellipsis is a pack expansion (14.5.3). 21 [ Note: The use of dynamic-exception-specifications is deprecated (see Annex D). — end note ] 1 The function s std::terminate() (15.5.1) and std::unexpected() (15.5.2) are used by the exception handling mechanism for coping with errors related to the exception handling mechanism itself. The function std::current_exception() (18.8.5) and the class std::nested_exception (18.8.6) can be used by a program to capture the currently handled exception..2) exits via an exception, or (1.6) — when destruction of an object with static or thread storage duration exits via an exception (3.6.8.1), or (1.11) — when the function std::nested_exception::rethrow_nested is called for an object that has captured no exception (18.8.6), or (1.12) — when execution of the initial function of a thread exits via an exception (30.3.1.2), or (1.13) — when the destructor or the copy assignment operator is invoked on an object of type std::thread that refers to a joinable thread (30.3.1.3, 30.3.1.4), or (1.14) — when a call to a wait(), wait_until(), or wait_for() function on a condition variable (30.5.1, 30.5.2) fails to meet a postcondition. — end note ](). 1 If a function with a dynamic-exception-specification exits via an exception of a type that is not allowed by its exception specification, the function std::unexpected() is called (D.8) immediately after completing the stack unwinding for the former function. 2 [ Note: By default, std::unexpected() calls std::terminate(), but a program can install its own handler function (D.8.2)191 except when such a function calls a program-supplied function that throws an exception.192 4 Destructor operations defined in the C++ standard library shall not throw exceptions. Every destructor in the C++ standard library shall behave as if it had a non-throwing exception specification. Any other functions efined in the C++ standard library that do not have an exception-specification may throw implementation-defined exceptions unless otherwise specified.193 An implementation may strengthen this implicit exception-specification by adding an explicit one. 194 194) That is, an implementation may provide an explicit exception-specification that defines the subset of "any" exceptions thrown by that function. This implies that the implementation may list implementation-defined types in such an exception-specification. 1 The header <exception> defines several types and functions related to the handling of exceptions in a C++ program. namespace std { class exception; class bad_exception; class nested_exception; typedef void (*unexpected_handler)(); unexpected_handler get_unexpected() noexcept; unexpected_handler set_unexpected(unexpected_handler f) noexcept; [[noreturn]] void unexpected();typedef void (*terminate_handler)(); terminate_handler get_terminate() noexcept; terminate_handler set_terminate(terminate_handler f) noexcept; [[noreturn]] void terminate() noexcept; int uncaught_exceptions() noexcept; // D. 9, uncaught_exception (deprecated) bool uncaught_exception() noexcept; typedef unspecified exception_ptr;); } 1 The class bad_exception defines the type of object s thrown as described in (15.5.2). 4 Member function swap() shall have a noexcept-specification which is equivalent to noexcept(true). (2.40) — Throw specifications on a single function declaration [256]. 1 The use of dynamic-exception-specifications is deprecated. typedef void (*unexpected_handler)();(). unexpected_handler set_unexpected(unexpected_handler f) noexcept; 1 Effects: Establishes the function designated by f as the current unexpected_handler. 2 Remark: It is unspecified whether a null pointer value designates the default unexpected_handler. 3 Returns: The previous unexpected_handler. unexpected_handler get_unexpected() noexcept; 1 Returns: The current unexpected_handler. [ Note: This may be a null pointer value. — end note ] [[noreturn]] void unexpected(); 1 Remarks: Called by the implementation when a function exits via an exception not allowed by its exception-specification (15.5.2), in effect after evaluating the throw-expression (D.8.1). May also be called directly by the program. 2 Effects: Calls the current unexpected_handler function. [ Note: A default unexpected_handler is always considered a callable handler in this context. — end note ]
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0003r0.html
CC-MAIN-2017-13
en
refinedweb
Knowing: - A single property handler gets created per file. Thus… - A property handler must be the authority on the layout of the file stream it is handed. - It is not possible to “augment” the set of properties on a file. - The property handler must glean its properties from the file stream itself. Thus… - The property system, in general, cannot store properties on every file. Not all files allow storing properties inside them. - Not all properties can get written to all file types. Bummer. - The caller is not guaranteed to ask the property handler for properties. This means… - To be considerate, the property handler should intialize quickly, delaying heavy work until it is actually needed. - The property handler is dealing with files. This means… - For large files, if you have control over the file layout, consider making it stream-friendly. - Reserve extra space in the property section. - Clump things together so that the handler doesn’t have to read the whole disk to piece together data. - Property handlers are an extensibility point of the file system namespace. Thus… - Other namespaces may have different extensibility mechanisms (or none at all). - Other namespaces may be able to delegate to the file system namespace when it comes to properties. (The search results do this) - Other namespaces may choose to reuse the same extensibility mechanism. (ZIP folders do this). - Property handlers just provide values. Thus… - Although they provide the data, the windows shell controls the way it gets displayed. - It is not directly possible to customize the way the data is visualized(1) - When I say “property handlers” make the shell do this or that, I really mean that the shell will do “this or that” in the presense of a property handler that provides the right set of data.. -Ben Karas (1) Property descriptions contain hints about how to display a value (e.g. using a stars or a text box). Property descriptions are system-wide and therefore a handler only can set the hints for properties it introduced to the system. So… if you wanted to have properties stored in an alternate data stream or in a seperate file, you wouldn’t be able to write a property handler for it, right? Hiya Tim! No, it is not possible using the recommended APIs. There are workarounds you could (ab)use to accomplish this, but they are intended for legacy applications and not for using secondary storages. I hope to talk to this exact point at a later date. Viewed as a data flow component , a property handler has a single file stream input and outputs a one Shell extensions operating on streams are supposed to enable them to work on things that are non-files, such as files inside a Zip. Unfortunately, Vista’s built-in thumbnail handlers (for BMP, JPEG, etc) don’t seem to work on files inside Zips. Or maybe it’s just me… 🙂
https://blogs.msdn.microsoft.com/benkaras/2007/01/28/understanding-the-role-of-property-handlers/
CC-MAIN-2017-13
en
refinedweb
I am running into some issue with scraping data. If I hardcode value for key "lbo race" in the code below it is able to scrape the data but if I try to set key "lbo race" to a variable which is being read in it doesn't seem to scrape the data correctly. I tried to put a time to slow it down but that doesn't seem to be the issue. Would I use threading to solve this problem? Thanks! import urllib.parse import urllib.request import csv import time def parseTable(html): #Each "row" of the HTML table will be a list, and the items #in that list will be the TD data items. ourTable = [] #We keep these set to NONE when not actively building a #row of data or a data item. ourTD = None #Stores one table data item ourTR = None #List to store each of the TD items in. #State we keep track of inTable = False inTR = False inTD = False #Start looking for a start tag at the beginning! tagStart = html.find("<", 0) while( tagStart != -1): tagEnd = html.find(">", tagStart) if tagEnd == -1: #We are done, return the data! return ourTable tagText = html[tagStart+1:tagEnd] #only look at the text immediately following the < tagList = tagText.split() tag = tagList[0] tag = tag.lower() #Watch out for TABLE (start/stop) tags! if tag == "table": #We entered the table! inTable = True if tag == "/table": #We exited a table. inTable = False #Detect/Handle Table Rows (TR's) if tag == "tr": inTR = True ourTR = [] #Started a new Table Row! #If we are at the end of a row, add the data we collected #so far to the main list of table data. if tag == "/tr": inTR = False ourTable.append(ourTR) ourTR = None #We are starting a Data item! if tag== "td" or tag== "th": inTD = True ourTD = "" #Start with an empty item! #We are ending a data item! if tag == "/td" or tag=="/th": inTD = False if ourTD != None and ourTR != None: cleanedTD = ourTD.strip() #Remove extra spaces ourTR.append( ourTD.strip() ) ourTD = None #Look for the NEXT start tag. Anything between the current #end tag and the next Start Tag is potential data! tagStart = html.find("<", tagEnd+1) #If we are in a Table, and in a Row and also in a TD, # Save anything that's not a tag! (between tags) # #Note that this may happen multiple times if the table #data has tags inside of it! #e.g. <td>some <b>bold</b> text</td> # #Because of this, we need to be sure to put a space between each #item that may have tags separating them. We remove any extra #spaces (above) before we append the ourTD data to the ourTR list. if inTable and inTR and inTD: ourTD = ourTD + html[tagEnd+1:tagStart] + " " #print("td:", ourTD) #for debugging #If we end the while loop looking for the next start tag, we #are done, return ourTable of data. return(ourTable) url = "" files = open('1992DemocraticPrimaryElection.txt', 'r') values = {'election' : "1992 Democratic Primary Election", 'lboRace' : "", 'btnSubmit' : "Submit"} for line in files: linenew = line linenew = linenew.replace(' ','') linenew = linenew.replace('\n','') linenew = linenew.replace('"', '') file = open('1992DemocraticPrimaryElection.'+linenew+'.csv', 'w') for k, v in values.items(): values['lboRace'] = line print(k, v) data = urllib.parse.urlencode(values) data = data.encode('ascii') req = urllib.request.Request(url, data) response = urllib.request.urlopen(req) html_bytes = response.read() html = str(html_bytes) dataTable = parseTable(html) writer = csv.writer(file) for item in dataTable: writer.writerow(item) file.close() files.close()
https://www.daniweb.com/programming/software-development/threads/419349/data-scraping-using-urllib-with-multiple-option-select-param
CC-MAIN-2017-13
en
refinedweb
PyUblas solves one main difficulty of developing hybrid numerical codes in Python and C++: It integrates two major linear algebra libraries across the two languages, namely numpy and Boost.Ublas. In Python, you are working with native numpy arrays, whereas in C++, PyUblas lets you work with matrix and vector types immediately derived from and closely integrated with Ublas. And best of all: There’s no copying at the language boundary. PyUblas is built using and meant to be used with Boost Python. PyUblas also has its own web page. Ok, here’s a simple sample extension: #include <pyublas/numpy.hpp> pyublas::numpy_vector<double> doublify(pyublas::numpy_vector<double> x) { return 2*x; } BOOST_PYTHON_MODULE(sample_ext) { boost::python::def("doublify", doublify); } and some Python that uses it: import numpy import sample_ext import pyublas # not explicitly used--but makes converters available vec = numpy.ones((5,), dtype=float) print vec print sample_ext.doublify(vec) and this is what gets printed: [ 1. 1. 1. 1. 1.] [ 2. 2. 2. 2. 2.]
https://documen.tician.de/pyublas/
CC-MAIN-2021-04
en
refinedweb
NAME Perform an operation on VMOs mapped into this VMAR. SYNOPSIS #include <zircon/syscalls.h> zx_status_t zx_vmar_op_range(zx_handle_t handle, uint32_t op, zx_vaddr_t address, size_t size, void* buffer, size_t buffer_size); DESCRIPTION zx_vmo_op_range() performs operation op on VMOs mapped in the range address to address+size. address and size must fall entirely within this VMAR, and must meet the alignment requirements specified for op by zx_vmo_op_range(). buffer and buffer_size are currently unused, and must be empty The supported operations are: ZX_VMO_OP_DECOMMIT - Deprecated. Use ZX_VMAR_OP_DECOMMIT instead. ZX_VMAR_OP_DECOMMIT - Requires the ZX_RIGHT_WRITE right, and applies only to writable mappings. ZX_VMAR_OP_MAP_RANGE - Populates entries in the CPU page tables (or architectural equivalent) for committed pages in the given range. Entries for uncommitted pages in the range are not populated. Fails if entries already exist for any page in the range (this may change in the future). The operation's semantics are otherwise as described by zx_vmo_op_range(). RIGHTS If op is ZX_VMO_OP_DECOMMIT, affected mappings must be writable. RETURN VALUE zx_vmar_op_range() returns ZX_OK on success. In the event of failure, a negative error value is returned. ERRORS ZX_ERR_ACCESS_DENIED handle, or one of the affected VMO mappings, does not have sufficient rights to perform the operation. ZX_ERR_BAD_HANDLE handle is not a valid handle. ZX_ERR_BAD_STATE handle is not a live VMAR, or the range specified by address and size spans un-mapped pages. ZX_ERR_INVALID_ARGS buffer is non-null, or buffer_size is non-zero, op is not a valid operation, size is zero, or address was not page-aligned. ZX_ERR_NOT_SUPPORTED op was not ZX_VMO_OP_DECOMMIT, or one or more mapped VMOs do not support the requested op. ZX_ERR_OUT_OF_RANGE The range specified by address and size is not wholy within the VM address range specified by handle. ZX_ERR_WRONG_TYPE handle is not a VMAR handle.
https://fuchsia.dev/fuchsia-src/reference/syscalls/vmar_op_range
CC-MAIN-2021-04
en
refinedweb
MBSTOWCS(3) Linux Programmer's Manual MBSTOWCS(3) mbstowcs - convert a multibyte string to a wide-character string #include <stdlib.h> size_t mbstowcs(wchar_t *dest, const char *src, size_t n); If. The mbstowcs() function returns the number of wide characters that make up the converted part of the wide-character string, not including the terminating null wide character. If an invalid multibyte sequence was encountered, (size_t) -1 is returned.mbstowcs() │ Thread safety │ MT-Safe │ └───────────┴───────────────┴─────────┘ POSIX.1-2001, POSIX.1-2008, C99. The behavior of mbstowcs() depends on the LC_CTYPE category of the current locale. The function mbsrtowcs(3) provides a better interface to the same functionality.); } mblen(3), mbsrtowcs(3), mbtowc(3), wcstombs(3), wctomb(3) This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. GNU 2020-11-01 MBSTOWCS(3) Pages that refer to this page: MB_CUR_MAX(3), mbsrtowcs(3), mbtowc(3), wcstombs(3), wctomb(3), wprintf(3), locale(7)
https://man7.org/linux/man-pages/man3/mbstowcs.3.html
CC-MAIN-2021-04
en
refinedweb
Class WBootstrapTheme public class WBootstrapTheme extends WTheme This theme implements support for building a JWt web application that uses Twitter Bootstrap as a theme for its (layout and) styling. The theme comes with CSS from Bootstrap version 2.2.2 or JWt distribution, but since the Twitter Bootstrap CSS API is a popular API for custom themes, you can easily replace the CSS with custom-built CSS (by reimplementing getStyleSheets()). Although this theme facilitates the use of Twitter Bootstrap with JWt, it is still important to understand how Bootstrap expects markup to be, especially related to layout using its grid system, for which we refer to the official bootstrap documentation, see - See Also: WApplication.setTheme(WTheme theme) Nested Class Summary Nested classes/interfaces inherited from class eu.webtoolkit.jwt.WObject WObject.FormData Constructor Summary Method Summary Methods inherited from class eu.webtoolkit.jwt.WTheme applyValidationStyle, getResourcesUrl, serveCss WBootstrapThemeConstructor. WBootstrapThemepublic WBootstrapTheme()Constructor. Calls this((WObject)null) Method Details setResponsivepublic void setResponsive(boolean enabled)Enables responsive features. Responsive features can be enabled only at application startup. For bootstrap 3, you need to use the progressive bootstrap feature of JWt as it requires setting HTML meta flags. Responsive features are disabled by default. isResponsivepublic boolean isResponsive()Returns whether responsive features are enabled. - See Also: setResponsive(boolean enabled) setVersionpublic void setVersion(WBootstrapTheme.Version version)Sets the bootstrap version. The default bootstrap version is 2 (but this may change in the future and thus we recommend setting the version). Since Twitter Bootstrap breaks its API with a major version change, the version has a big impact on how how the markup is done for various widgets. Note that the two Bootstrap versions have a different license: Apache 2.0 for Bootstrap version 2.2.2, and MIT for version 3.1. See these licenses for details. getVersionpublic WBootstrapTheme.Version getVersion()Returns the bootstrap version. setFormControlStyleEnabledpublic void setFormControlStyleEnabled(boolean enabled)Enables form-control on all applicable form widgets. This is relevant only for bootstrap 3. By applying "form-control" on form widgets, they will become block level elements that take the size of the parent (which is in bootstrap's philosphy a grid layout). The default value is true. getNamepublic java.lang.String getName()Returns a theme name. Returns a unique name for the theme. This name is used by the default implementation of getResourcesUrl()to compute a location for the theme's resources. getStyleSheetspublic java.util.List<W. getDisabledClasspublic java.lang.String getDisabledClass()Returns a generic CSS class name for a disabled element. - Specified by: getDisabledClassin class WTheme getActiveClasspublic java.lang.String getActiveClass()Returns a generic CSS class name for an active element. - Specified by: getActiveClassin class WTheme utilityCssClasspublic java.lang.String utilityCssClass(int utilityCssClassRole)Returns, java.util.EnumSet<ValidationStyleFlag> styles)Applies a style that indicates the result of validation. - Specified by: applyValidationStylein class WTheme canBorderBoxElementpublic boolean canBorderBoxElement(DomElement element) - Specified by: canBorderBoxElementin class WTheme
https://webtoolkit.eu/jwt/jwt3/doc/javadoc/eu/webtoolkit/jwt/WBootstrapTheme.html
CC-MAIN-2021-04
en
refinedweb
I am facing a problem with the exception handling in Java, following is my code. I am facing compiler error when I try to execute my code. The error is as below: exception MojException is never thrown in body of corresponding try statement Below is my code: public class MyTest { public static void main(String[] args) throws MyMojException { // TODO Auto-generated method stub for(int m=1;m<args.length;m++){ try{ Integer.parseInt(args[m-1]); } catch(MyMojException e){ throw new MyMojException("The Bledne dane"); } try{ WierszTrojkataPascala w = new WierszTrojkataPascala(Integer.parseInt(args[0])); System.out.println(args[m]+" : "+w.wspolczynnik(Integer.parseInt(args[m]))); } catch(MojException e){ throw new MojException(args[m]+" "+e.getMessage()); } } } } And here is the code for MyMojException: public class MyMojException extends Exception{ MyMojException(String str){ super(str); Can anyone suggest solution on my issue? The catch-block in your try statement needs to catch exactly the exception that the code inside your try {}-block can throw or the super class of that as below try { //do something that throws ExceptionA, e.g. throw new ExceptionA("I am the Exception Alpha!"); catch(ExceptionA e) { //do something to handle the exception, e.g. System.out.println("Message: " + e.getMessage()); But what you are trying to do is as following : throw new ExceptionB("I am the Exception Bravo!"); This will definitely lead to the compiler error, as java knows that you are trying to catch an exception that will never be thrown. So you will get: exception ExceptionA is never thrown in body of corresponding try statement.
https://kodlogs.com/34351/exception-filenotfoundexception-is-never-thrown-in-body-of-corresponding-try-statement
CC-MAIN-2021-04
en
refinedweb
FLOCK(2) Linux Programmer's Manual FLOCK(2) flock - apply or remove an advisory lock on an open file #include <sys/file.h> int flock(int fd, int operation); Apply or remove an advisory lock on the open file specified by fd. The argumentblocking request, include LOCK_NB (by ORing) with any of the above operations. A single file may not simultaneously have both shared and exclusive locks. Locks created by flock() are associated with an open file descriptor. A process may hold onlyBADF fd is not.) flock() places advisory locks only; given suitable permissions on a file, a process is free to ignore the use of flock() and perform I/O on the file. flock() and fcntl(2) locks have different semantics with respect to forked processes and dup(2). On systems that implement flock() using fcntl(2), the semantics of flock() will be different from those described in this manual page. Converting a lock (shared to exclusive, or vice versa) is not guaranteed to be atomic: the existing lock is first removed, and then a new lock is established. Between these two steps, a pending lock request by another process may be granted, with the result that the conversion either blocks, or fails if LOCK_NB was specified. (This is the original BSD behavior, and occurs on many other implementations.) NFS details fcntl(2) byte-range locks on the entire file. This means that fcntl(2) and flock() locks do interact with one another over NFS. It also means that in order to place an exclusive lock, the file must be opened for writing.), lslocks(8) Documentation/filesystems/locks.txt in the Linux kernel source tree (Documentation/locks.txt in older kernels) This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2017-09-15 FLOCK(2) Pages that refer to this page: flock(1), chown(2), fcntl(2), fork(2), getrlimit(2), syscalls(2), dbopen(3), flockfile(3), lockf(3), nfs(5), proc(5), tmpfiles.d(5), signal(7), cryptsetup(8), fsck(8), lslocks(8), vipw(8@@util-linux)
https://man7.org/linux/man-pages/man2/flock.2.html
CC-MAIN-2021-04
en
refinedweb
from flask import Flask from flask import request import subprocess import shlex import urllib.parse app = Flask(__name__) @app.route("/run/",methods = ['POST', 'GET']) def execute(): command = 'no command' print("============") command = (request.data).decode("utf-8") print(command) if request.method == 'POST': print('Started executing command') command = shlex.split(command) process = subprocess.Popen(command, stdout = subprocess.PIPE) print("Run successfully") output, err = process.communicate() return output return "not executed" if __name__ == "__main__": app.run() Read next GAME MODE 2020: Building a Game in One Month Jason C. McDonald - VSCode: Setting line lengths in the Black Python code formatter Adam Lombard (he/him) - VSCode: Using Black to automatically format Python Adam Lombard (he/him) - Documenting a Django API with OpenAPI and Dataclasses Errietta Erry Kostala - Discussion Please no one use this code unless you specifically are looking to set up a honeypot to see what havoc can be created. Social experiment maybe? At best you'll get your machine destroyed by remote commands running with perms of the web server (which could be pretty wide reaching). At worst your machine will become a zombie for use in more nefarious schemes. I have to ask: what is a legit reason for doing this? Just seems like a really, really bad idea. Borders on negligent to post this as a how-to article with warnings and explanation. Newbies beware please. Exactly @thebouv this is not secure one and not recommendable code. But this is just an example. For some reason, while we are building application which runs only behind the VPN and VPC that time we can use it. Thanks
https://practicaldev-herokuapp-com.global.ssl.fastly.net/mu/python-flask-app-to-run-shell-script-from-web-service-32e
CC-MAIN-2021-04
en
refinedweb
After reading this article, you can also implement a redux. The code corresponding to this article is as follows: …It is suggested that first cloneCode, and then read this article against the code. 1. ReduxWhat is it? ReduxYes JavaScriptThe state container provides predictable state management. ReduxIn addition to and ReactBesides being used together, other interface libraries are also supported. ReduxThe body is small and small, only 2KB. Here we need to be clear: ReduxAnd ReactBetween, there is no strong binding relationship. This article aims to understand and implement a ReduxBut it will not involve react-redux(It is enough to understand one knowledge point at a time. react-reduxWill appear in the next article). 2. Realize one from scratch Redux Let’s forget ReduxThe concept of, starting with an example, using create-react-appCreate a project: toredux. Code directory: myredux/to-reduxChina. will public/index.htmlIn bodyAmend to read as follows: <div id="app"> <div id="header"> 前端宇宙 </div> <div id="main"> <div id="content">大家好,我是前端宇宙作者刘小夕</div> <button class="change-theme" id="to-blue">Blue</button> <button class="change-theme" id="to-pink">Pink</button> </div> </div> The function we want to achieve is shown in the above figure. When clicking the button, we can modify the color of the font of the entire application. Modify src/index.jsThe following (code: to-redux/src/index1.js): let state = { color: 'blue' } //渲染应用 function renderApp() { renderHeader(); renderContent(); } //渲染 title 部分 function renderHeader() { const header = document.getElementById('header'); header.style.color = state.color; } //渲染内容部分 function renderContent() { const content = document.getElementById('content'); content.style.color = state.color; } renderApp(); //点击按钮,更改字体颜色 document.getElementById('to-blue').onclick = function () { state.color = 'rgb(0, 51, 254)'; renderApp(); } document.getElementById('to-pink').onclick = function () { state.color = 'rgb(247, 109, 132)'; renderApp(); } This application is very simple, but it has a problem: stateIs a shared state, but anyone can modify it, once we arbitrarily modify this state, it can lead to errors, for example, in renderHeaderInside, set state = {}, easy to cause unexpected errors. However, most of the time, we do need to share the status, so we can consider setting some thresholds. For example, we have agreed that we cannot directly modify the global status, and we must modify it through a certain route. To this end, we define a changeStateFunction, which is responsible for modifying the global state. //在 index.js 中继续追加代码 function changeState(action) { switch(action.type) { case 'CHANGE_COLOR': return { ...state, color: action.color } default: return state; } } We agreed that only through changeStateTo modify the state, it accepts a parameter actionA that contains typeThe ordinary object of the field, typeThe field is used to identify your type of operation (i.e. how to modify the status). We want to click on the button to change the font color of the entire application. //在 index.js 中继续追加代码 document.getElementById('to-blue').onclick = function() { let state = changeState({ type: 'CHANGE_COLOR', color: 'rgb(0, 51, 254)' }); //状态修改完之后,需要重新渲染页面 renderApp(state); } document.getElementById('to-pink').onclick = function() { let state = changeState({ type: 'CHANGE_COLOR', color: 'rgb(247, 109, 132)' }); renderApp(state); } Pull away from store Although we have now agreed on how to modify the status, however stateIs a global variable, we can easily modify it, so we can consider turning it into a local variable and defining it inside a function ( createStoreHowever, it needs to be used externally. stateSo we need to provide a method getState()In order that we may createStoreGet state. function createStore (state) { const getState = () => state; return { getState } } Now, we can pass store.getState()Method to get the state (what needs to be explained here is that, stateIt is usually an object, so this object can be directly modified externally, but if it is deeply copied stateReturn, then it must not be modified externally, given that reduxSource code is directly returned state, here we also do not make a deep copy, after all, cost performance). It is not enough to just obtain the state. We also need to have methods to modify the state. Now the state is a private variable. We must also put the methods to modify the state into the createStore, and expose it to external use. function createStore (state) { const getState = () => state; const changeState = () => { //...changeState 中的 code } return { getState, changeState } } Now, index.jsThe code in becomes the following ( to-redux/src/index2.js): function createStore() { let state = { color: 'blue' } const getState = () => state; function changeState(action) { switch (action.type) { case 'CHANGE_COLOR': state = { ...state, color: action.color } return state; default: return state; } } return { getState, changeState } }.changeState({ type: 'CHANGE_COLOR', color: 'rgb(0, 51, 254)' }); renderApp(store.getState()); } document.getElementById('to-pink').onclick = function () { store.changeState({ type: 'CHANGE_COLOR', color: 'rgb(247, 109, 132)' }); renderApp(store.getState()); } const store = createStore(); renderApp(store.getState()); Although, we are pulling away now createStoreMethod, but obviously this method is not universal at all. stateAnd changeStateMethods are defined in createStoreChina. In this case, other applications cannot reuse this mode. changeStateThe logic of is supposed to be defined externally, because the logic of modifying the state must be different for each application. We stripped this part of the logic to the outside and renamed it reducer(suppress ask why call reducerThe reason for asking is to make peace with reduxKeep consistent). reducerWhat is it, to put it bluntly, is based on actionTo calculate the new state. Because it is not in createStoreInternally defined, not directly accessible stateSo we need to pass the current state to it as a parameter. As follows: function reducer(state, action) { switch(action.type) { case 'CHANGE_COLOR': return { ...state, color: action.color } default: return state; } } CreateStore evolution function createStore(reducer) { let state = { color: 'blue' } const getState = () => state; //将此处的 changeState 更名为 `dispatch` const dispatch = (action) => { //reducer 接收老状态和action,返回一个新状态 state = reducer(state, action); } return { getState, dispatch } } Different applications stateIt must be different, we will stateThe value of is defined in the createStoreThe interior must be unreasonable. function createStore(reducer) { let state; const getState = () => state; const dispatch = (action) => { //reducer(state, action) 返回一个新状态 state = reducer(state, action); } return { getState, dispatch } } Attention, everyone reducerThe definition of is to directly return to the old state when encountering unrecognized actions. Now, we use this to return to the initial state. If you want to stateThere is an initial state, actually very simple, we will be the initial stateThe initialization value of is as reducerThe default value of the parameter for the createStoreTo distribute one in reducerIf you don’t understand, you can do it. such getStateOn the first call, you can get the default value of the state. CreateStore evolution version 2.0 function createStore(reducer) { let state; const getState = () => state; //每当 `dispatch` 一个动作的时候,我们需要调用 `reducer` 以返回一个新状态 const dispatch = (action) => { //reducer(state, action) 返回一个新状态 state = reducer(state, action); } //你要是有个 action 的 type 的值是 `@@redux/__INIT__${Math.random()}`,我敬你是个狠人 dispatch({ type: `@@redux/__INIT__${Math.random()}` }); return { getState, dispatch } } Now this createStoreIt can be used everywhere, but do you feel that every time dispatchAfter that, all manually renderApp()It seems stupid. In the current application, it is called twice. If it needs to be modified 1000 times stateDo you call it 1,000 times manually? renderApp()? Can you simplify it? Called automatically every time the data changes. renderApp(). Of course, we can’t renderApp()Write in createStore()The dispatchBecause in other applications, the function name may not be called renderApp()And may not only trigger renderApp(). It can be introduced here Publish subscription modeTo notify all subscribers when the state changes. CreateStore evolution version 3.0 function createStore(reducer) { let state; let listeners = []; const getState = () => state; //subscribe 每次调用,都会返回一个取消订阅的方法 const subscribe = (ln) => { listeners.push(ln); //订阅之后,也要允许取消订阅。 //难道我订了某本杂志之后,就不允许我退订吗?可怕~ const unsubscribe = () => { listeners = listeners.filter(listener => ln !== listener); } return unsubscribe; }; const dispatch = (action) => { //reducer(state, action) 返回一个新状态 state = reducer(state, action); listeners.forEach(ln => ln()); } //你要是有个 action 的 type 的值正好和 `@@redux/__INIT__${Math.random()}` 相等,我敬你是个狠人 dispatch({ type: `@@redux/__INIT__${Math.random()}` }); return { getState, dispatch, subscribe } } At this point, one of the simplest reduxIt’s already been created, createStoreYes reduxThe core of. Let’s use this condensed version reduxRewrite our code, index.jsThe contents of the file are updated as follows ( to-redux/src/index.js): function createStore() { //code(自行将上面createStore的代码拷贝至此处) } const initialState = { color: 'blue' } function reducer(state = initialState, action) { switch (action.type) { case 'CHANGE_COLOR': return { ...state, color: action.color } default: return state; } } const store = createStore(reducer);.dispatch({ type: 'CHANGE_COLOR', color: 'rgb(0, 51, 254)' }); } document.getElementById('to-pink').onclick = function () { store.dispatch({ type: 'CHANGE_COLOR', color: 'rgb(247, 109, 132)' }); } renderApp(store.getState()); //每次state发生改变时,都重新渲染 store.subscribe(() => renderApp(store.getState())); If we want to finish clicking now PinkAfter that, the font color cannot be modified, so we can also unsubscribe: const unsub = store.subscribe(() => renderApp(store.getState())); document.getElementById('to-pink').onclick = function () { //code... unsub(); //取消订阅 } By the way: reducerIt is a pure function (if you don’t know the concept of pure function, consult the data yourself), it receives the previous stateAnd actionAnd returns a new state. Don’t ask why. actionThere must be typeField, this is just a convention ( reduxThis is how it was designed) Legacy: Why reducerBe sure to return a new one state, rather than directly modify stateWhat? Welcome to leave your answer in the comment area. In front of us we deduced step by step reduxNow let’s review the core code of reduxThe design idea of: Reduxdesign philosophy ReduxThe entire application state ( state) to a place (usually we call it store) - When we need to modify the status, we must distribute ( dispatch) One action( actionIt’s one with typeThe object of the field) - Special state processing function reducerReceive old stateAnd actionAnd returns a new state - via subscribeSet up subscriptions and notify all subscribers each time a distribution action is made. We now have a basic version reduxHowever, it still cannot meet our needs. Our usual business development is not as simple as the example written above, so there will be a problem: reducerFunctions can be very long because actionThere will be many types of. This is definitely not conducive to code writing and reading. Imagine that there are 100 kinds of businesses in your business. actionNeed to deal with, write this one hundred cases in a. reducerNot only is the writing disgusting, but the colleagues who maintain the code later also want to kill people. Therefore, we had better write it separately. reducer, and then to reducerTo merge. Please welcome our combineReducers(and) reduxThe name of the library remains the same) combineReducers First of all, we need to be clear: combineReducersJust a tool function, as we said earlier, it will be multiple reducerMerge into one reducer. combineReducersWhat is returned is reducerThat is to say, it is a higher order function. We will still use an example to illustrate, although reduxNot necessarily with reactCooperation, but in view of its relationship with reactCooperation is most suitable, here, in order to reactCode for example: This time, in addition to the above display, we added a counter function (using ReactRefactoring = = > to-redux2): //现在我们的 state 结构如下: let state = { theme: { color: 'blue' }, counter: { number: 0 } } Obviously, the modification theme and the counter can be separated by different reducerTo deal with it is a better choice. store/reducers/counter.js State responsible for handling counters. import { INCRENENT, DECREMENT } from '../action-types'; export default counter(state = {number: 0}, action) { switch (action.type) { case INCRENENT: return { ...state, number: state.number + action.number } case DECREMENT: return { ...state, number: state.number - action.number } default: return state; } } store/reducers/theme.js State, which is responsible for modifying the theme color. import { CHANGE_COLOR } from '../action-types'; export default function theme(state = {color: 'blue'}, action) { switch (action.type) { case CHANGE_COLOR: return { ...state, color: action.color } default: return state; } } Each .. reducerOnly responsible for managing the overall situation statePart of its responsibility. Each .. reducerThe stateThe parameters are different and correspond to the part it manages. stateData. import counter from './counter'; import theme from './theme'; export default function appReducer(state={}, action) { return { theme: theme(state.theme, action), counter: counter(state.counter, action) } } appReducerAfter the merger reducerBut when reducerMore often, this writing is also cumbersome, so we write a tool function to generate this appReducer, we named this tool function combineReducers. Let’s try writing this tool function combineReducers: Ideas: combineReducersReturn reducer combineReducersThere are multiple references to reducerThe object of the composition - Each .. reducerOnly deal with the global stateI am responsible for my own part //reducers 是一个对象,属性值是每一个拆分的 reducer export default function combineReducers(reducers) { return function combination(state={}, action) { //reducer 的返回值是新的 state let newState = {}; for(var key in reducers) { newState[key] = reducers[key](state[key], action); } return newState; } } Son reducerWill be responsible for return stateThe default value of. For example, in this example, createStoreDispatch({type: @@redux/__INIT__${Math.random()}}), and passed to the createStoreThe truth is combineReducers(reducers)Function returned combination. According to state=reducer(state,action), newState.theme=theme(undefined, action), newState.counter=counter(undefined, action), counterAnd themeTwo children reducerReturn separately newState.themeAnd newState.counterThe initial value of the. Use this combineReducersCan be rewritten store/reducers/index.js import counter from './counter'; import theme from './theme'; import { combineReducers } from '../redux'; //明显简洁了许多~ export default combineReducers({ counter, theme }); We wrote it combineReducersAlthough it seems to be able to meet our needs, it has one disadvantage: it returns a new one every time. stateObject, which causes meaningless re-rendering when the data does not change. Therefore, we can judge the data and return the original data when there is no change in the data. stateJust. Combinatorial combineReducers evolutionary edition //代码中省略了一些判断,默认传递的参数均是符合要求的,有兴趣可以查看源码中对参数合法性的判断及处理 export default function combineReducers(reducers) { return function combination(state={}, action) { let nextState = {}; let hasChanged = false; //状态是否改变 for(let key in reducers) { const previousStateForKey = state[key]; const nextStateForKey = reducers[key](previousStateForKey, action); nextState[key] = nextStateForKey; //只有所有的 nextStateForKey 均与 previousStateForKey 相等时,hasChanged 的值才是 false hasChanged = hasChanged || nextStateForKey !== previousStateForKey; } //state 没有改变时,返回原对象 return hasChanged ? nextState : state; } } applyMiddleware Official documentsAbout applyMiddlewareThe explanation is very clear, the following contents also refer to the contents of the official documents: Logging Consider a small problem, if we want to print it out in the console before every state change. stateSo how do we do it? The simplest is: //... <button onClick={() => { console.log(store.getState()); store.dispatch(actions.add(2)); }}>+</button> //... Of course, this method is definitely not desirable. If we distribute it 100 times in our code, we cannot write it 100 times. Since it is printing when the status changes state, that is to say, in dispatchPreviously printed state, then we can rewrite store.dispatchMethod, print before distribution stateJust. let store = createStore(reducer); const next = store.dispatch; //next 的命令是为了和中间件的源码一致 store.dispatch = action => { console.log(store.getState()); next(action); } Crash information Suppose we don’t just need to print state, you also need to print out the error message when there is an exception in distribution. const next = store.dispatch; //next 的命名是为了和中间件的源码一致 store.dispatch = action => { try{ console.log(store.getState()); next(action); } catct(err) { console.error(err); } } However, if we have other requirements, we need to constantly revise them. store.dispatchMethod, resulting in this part of the code is difficult to maintain. So we need separation loggerMiddlewareAnd exceptionMiddleware. let store = createStore(reducer); const next = store.dispatch; //next 的命名是为了和中间件的源码一致 const loggerMiddleware = action => { console.log(store.getState()); next(action); } const exceptionMiddleware = action => { try{ loggerMiddleware(action); }catch(err) { console.error(err); } } store.dispatch = exceptionMiddleware; We know, a lot middlewareAll provided by a third party, then storeMust be passed as a parameter to middleware, further rewrite: const loggerMiddleware = store => action => { const next = store.dispatch; console.log(store.getState()); next(action); } const exceptionMiddleware = store => action => { try{ loggerMiddleware(store)(action); }catch(err) { console.error(err); } } //使用 store.dispatch = exceptionMiddleware(store)(action); There is still a small problem. exceptionMiddlewarehit the target loggerMiddlewareIt is written to death, which is certainly unreasonable. We hope that this is a parameter, so it is flexible to use. There is no reason only exceptionMiddlewareNeed to be flexible, regardless of loggerMiddlewareTo be further rewritten as follows: const loggerMiddleware = store => next => action => { console.log(store.getState()); return next(action); } const exceptionMiddleware = store => next => action => { try{ return next(action); }catch(err) { console.error(err); } } //使用 const next = store.dispatch; const logger = loggerMiddleware(store); store.dispatch = exceptionMiddleware(store)(logger(next)); Now, we already have GM middlewareThe writing format of the. middlewareReceived one next()The dispatchFunction and returns a dispatchFunction, the returned function will be taken as the next middlewareThe However, there is a small problem. When there is a lot of middleware, the code for using middleware becomes very complicated. To this end, reduxOne is provided applyMiddlewareThe tool function of. As we can see from the above, what we need to change in the end is actually dispatchSo we need to rewrite it store, return modified dispatchAfter the method store. Therefore, we can clarify the following points: applyMiddlewareThe return value is store applyMiddlewareIt must be accepted middlewareAs a parameter applyMiddlewareTo accept {dispatch, getState}As a participant, however reduxSource code into the reference is createStoreAnd createStoreI think it is not necessary to create a new one outside storeAfter all, created externally storeIn addition to being passed as a parameter into a function, it has no other effect, so it is better to put the createStoreAnd createStoreParameters to be used are passed in. //applyMiddleWare 返回 store. const applyMiddleware = middleware => createStore => (...args) => { let store = createStore(...args); let middle = loggerMiddleware(store); let dispatch = middle(store.dispatch); //新的dispatch方法 //返回一个新的store---重写了dispatch方法 return { ...store, dispatch } } The above is one middlewareBut we know that, middlewareIt may be one or more, and we mainly need to solve more than one middlewareThe problem of, further rewrite. //applyMiddleware 返回 store. const applyMiddleware = (...middlewares) => createStore => (...args) => { let store = createStore(...args); let dispatch; const middlewareAPI = { getState: store.getstate, dispatch: (...args) => dispatch(...args) } //传递修改后的 dispatch let middles = middlewares.map(middleware => middleware(middlewareAPI)); //现在我们有多个 middleware,需要多次增强 dispatch dispatch = middles.reduceRight((prev, current) => current(prev), store.dispatch); return { ...store, dispatch } } I don’t know if everyone understands the above middles.reduceRight, the following detailed explanation for everyone: /*三个中间件*/ let logger1 = ({dispatch, getState}) => dispatch => action => { console.log('111'); dispatch(action); console.log('444'); } let logger2 = ({ dispatch, getState }) => dispatch => action => { console.log('222'); dispatch(action); console.log('555') } let logger3 = ({ dispatch, getState }) => dispatch => action => { console.log('333'); dispatch(action); console.log('666'); } let middle1 = logger1({ dispatch, getState }); let middle2 = logger2({ dispatch, getState }); let middle3 = logger3({ dispatch, getState }); //applyMiddleware(logger1,logger2,logger3)(createStore)(reducer) //如果直接替换 store.dispatch = middle1(middle2(middle3(store.dispatch))); Observe the above middle1(middle2(middle3(store.dispatch)))If we put middle1, middle2, middle3As each item of the array, if you are familiar with the API of the array, you can think of reduceIf you are not familiar with it reduce, you can viewMDN document. //applyMiddleware(logger1,logger3,logger3)(createStore)(reducer) //reduceRight 从右到左执行 middles.reduceRight((prev, current) => current(prev), store.dispatch); //第一次 prev: store.dispatch current: middle3 //第二次 prev: middle3(store.dispatch) current: middle2 //第三次 prev: middle2(middle3(store.dispatch)) current: middle1 //结果 middle1(middle2(middle3(store.dispatch))) Read it reduxThe source code of the students, may know that the source code is to provide a. composeFunction, and composeNot used in function reduceRight, but instead uses the reduceSo the code is slightly different. But the analysis process is still the same. compose.js export default function compose(...funcs) { //如果没有中间件 if (funcs.length === 0) { return arg => arg } //中间件长度为1 if (funcs.length === 1) { return funcs[0] } return funcs.reduce((prev, current) => (...args) => prev(current(...args))); } About reduceIt is suggested that the writing style be like the one above. reduceRightSimilarly, conduct an analysis Use composeTool function rewrite applyMiddleware. const applyMiddleware = (...middlewares) => createStore => (...args) => { let store = createStore(...args); let dispatch; const middlewareAPI = { getState: store.getstate, dispatch: (...args) => dispatch(...args) } let middles = middlewares.map(middleware => middleware(middlewareAPI)); dispatch = compose(...middles)(store.dispatch); return { ...store, dispatch } } bindActionCreators reduxIt also provides us with bindActionCreatorsTool function, this tool function code is very simple, we seldom use it directly in the code. react-reduxWill be used in. Here, briefly explain: //通常我们会这样编写我们的 actionCreator import { INCRENENT, DECREMENT } from '../action-types'; const counter = { add(number) { return { type: INCRENENT, number } }, minus(number) { return { type: DECREMENT, number } } } export default counter; At the time of distribution, we need to write like this: import counter from 'xx/xx'; import store from 'xx/xx'; store.dispatch(counter.add()); Of course, we can also write our actionCreator as follows: function add(number) { return { type: INCRENENT, number } } When distributing, need to write like this: store.dispatch(add(number)); The above codes have one thing in common, that is, they are all store.dispatchSend out an action. So we can consider writing a function that will store.dispatchAnd actionCreatorBind them together. function bindActionCreator(actionCreator, dispatch) { return (...args) => dispatch(actionCreator(...args)); } function bindActionCreators(actionCreator, dispatch) { //actionCreators 可以是一个普通函数或者是一个对象 if(typeof actionCreator === 'function') { //如果是函数,返回一个函数,调用时,dispatch 这个函数的返回值 bindActionCreator(actionCreator, dispatch); }else if(typeof actionCreator === 'object') { //如果是一个对象,那么对象的每一项都要都要返回 bindActionCreator const boundActionCreators = {} for(let key in actionCreator) { boundActionCreators[key] = bindActionCreator(actionCreator[key], dispatch); } return boundActionCreators; } } In use: let counter = bindActionCreators(counter, store.dispatch); //派发时 counter.add(); counter.minus(); It doesn’t seem that there is too much simplification here, but it will be analyzed later. react-reduxWhen, will explain why this tool function is needed. At this point, my reduxIt has been basically written. And reduxCompared with the source code of, there are still some differences, such as createStoreProvided replaceReducerMethods, and createStoreThe second and third parameters of the are not mentioned, so you can understand them by looking at the code a little bit, and they will not be expanded here. Reference link - React.js small book - Redux chinese document - Fully understand redux (implementing one redux from zero)
https://ddcode.net/2019/11/05/react-series-handwriting-redux-from-scratch/
CC-MAIN-2021-04
en
refinedweb
TripleO Routed Networks Deployment (Spine-and-Leaf Clos)¶ TripleO uses shared L2 networks today, so each node is attached to the provisioning network, and any other networks are also shared. This significantly reduces the complexity required to deploy on bare metal, since DHCP and PXE booting are simply done over a shared broadcast domain. This also makes the network switch configuration easy, since there is only a need to configure VLANs and ports, but no added complexity from dynamic routing between all switches. This design has limitations, however, and becomes unwieldy beyond a certain scale. As the number of nodes increases, the background traffic from Broadcast, Unicast, and Multicast (BUM) traffic also increases. This design also requires all top-of-rack switches to trunk the VLANs back to the core switches, which centralizes the layer 3 gateway, usually on a single core switch. That creates a bottleneck which is not present in Clos architecture. This spec serves as a detailed description of the overall problem set, and applies to the master blueprint. The sub-blueprints for the various implementation items also have their own associated spec. Problem Description¶ Where possible, modern high-performance datacenter networks typically use routed networking to increase scalability and reduce failure domains. Using routed networks makes it possible to optimize a Clos (also known as “spine-and-leaf”) architecture for scalability: ,=========. ,=========. | spine 1 |__ __| spine 2 | '==|\=====\_ \__________________/ _/=====/|==' | \_ \___ / \ ___/ _/ | ^ | \___ / \ _______ / \ ___/ | |-- Dynamic routing (BGP, OSPF, | / \ / \ / \ | v EIGRP) ,------. ,------ ,------. ,------. |leaf 1|....|leaf 2| |leaf 3|....|leaf 4| ======== Layer 2/3 boundary '------' '------' '------' '------' | | | | | | | | |-[serv-A1]=-| |-[serv-B1]=-| |-[serv-A2]=-| |-[serv-B2]=-| |-[serv-A3]=-| |-[serv-B3]=-| Rack A Rack B In the above diagram, each server is connected via an Ethernet bond to both top-of-rack leaf switches, which are clustered and configured as a virtual switch chassis. Each leaf switch is attached to each spine switch. Within each rack, all servers share a layer 2 domain. The subnets are local to the rack, and the default gateway is the top-of-rack virtual switch pair. Dynamic routing between the leaf switches and the spine switches permits East-West traffic between the racks. This is just one example of a routed network architecture. The layer 3 routing could also be done only on the spine switches, or there may even be distribution level switches that sit in between the top-of-rack switches and the routed core. The distinguishing feature that we are trying to enable is segregating local systems within a layer 2 domain, with routing between domains. In a shared layer-2 architecture, the spine switches typically have to act in an active/passive mode to act as the L3 gateway for the single shared VLAN. All leaf switches must be attached to the active switch, and the limit on North-South bandwidth is the connection to the active switch, so there is an upper bound on the scalability. The Clos topology is favored because it provides horizontal scalability. Additional spine switches can be added to increase East-West and North-South bandwidth. Equal-cost multipath routing between switches ensures that all links are utlized simultaneously. If all ports are full on the spine switches, an additional tier can be added to connect additional spines, each with their own set of leaf switches, providing hyperscale expandability. Each network device may be taken out of service for maintenance without the entire network being offline. This topology also allows the switches to be configured without physical loops or Spanning Tree, since the redundant links are either delivered via bonding or via multiple layer 3 uplink paths with equal metrics. Some advantages of using this architecture with separate subnets per rack are: Reduced domain for broadcast, unknown unicast, and multicast (BUM) traffic. Reduced failure domain. Geographical separation. Association between IP address and rack location. Better cross-vendor support for multipath forwarding using equal-cost multipath forwarding (ECMP) via L3 routing, instead of proprietary “fabric”. This topology is significantly different from the shared-everything approach that TripleO takes today. Problem Descriptions¶ As this is a complex topic, it will be easier to break the problems down into their constituent parts, based on which part of TripleO they affect: Problem #1: TripleO uses DHCP/PXE on the Undercloud provisioning net (ctlplane). Neutron on the undercloud does not yet support DHCP relays and multiple L2 subnets, since it does DHCP/PXE directly on the provisioning network. Possible Solutions, Ideas, or Approaches: Modify Ironic and/or Neutron to support multiple DHCP ranges in the dnsmasq configuration, use DHCP relay running on top-of-rack switches which receives DHCP requests and forwards them to dnsmasq on the Undercloud. There is a patch in progress to support that 11. Modify Neutron to support DHCP relay. There is a patch in progress to support that 10. Currently, if one adds a subnet to a network, Neutron DHCP agent will pick up the changes and configure separate subnets correctly in dnsmasq. For instance, after adding a second subnet to the ctlplane network, here is the resulting startup command for Neutron’s instance of dnsmasq: dnsmasq --no-hosts --no-resolv --strict-order --except-interface=lo \ --pid-file=/var/lib/neutron/dhcp/aae53442-204e-4c8e-8a84-55baaeb496cf/pid \ --dhcp-hostsfile=/var/lib/neutron/dhcp/aae53442-204e-4c8e-8a84-55baaeb496cf/host \ --addn-hosts=/var/lib/neutron/dhcp/aae53442-204e-4c8e-8a84-55baaeb496cf/addn_hosts \ --dhcp-optsfile=/var/lib/neutron/dhcp/aae53442-204e-4c8e-8a84-55baaeb496cf/opts \ --dhcp-leasefile=/var/lib/neutron/dhcp/aae53442-204e-4c8e-8a84-55baaeb496cf/leases \ --dhcp-match=set:ipxe,175 --bind-interfaces --interface=tap4ccef953-e0 \ --dhcp-range=set:tag0,172.19.0.0,static,86400s \ --dhcp-range=set:tag1,172.20.0.0,static,86400s \ --dhcp-option-force=option:mtu,1500 --dhcp-lease-max=512 \ --conf-file=/etc/dnsmasq-ironic.conf --domain=openstacklocal The router information gets put into the dhcp-optsfile, here are the contents of /var/lib/neutron/dhcp/aae53442-204e-4c8e-8a84-55baaeb496cf/opts: tag:tag0,option:classless-static-route,172.20.0.0/24,0.0.0.0,0.0.0.0/0,172.19.0.254 tag:tag0,249,172.20.0.0/24,0.0.0.0,0.0.0.0/0,172.19.0.254 tag:tag0,option:router,172.19.0.254 tag:tag1,option:classless-static-route,169.254.169.254/32,172.20.0.1,172.19.0.0/24,0.0.0.0,0.0.0.0/0,172.20.0.254 tag:tag1,249,169.254.169.254/32,172.20.0.1,172.19.0.0/24,0.0.0.0,0.0.0.0/0,172.20.0.254 tag:tag1,option:router,172.20.0.254 The above options file will result in separate routers being handed out to separate IP subnets. Furthermore, Neutron appears to “do the right thing” with regard to routes for other subnets on the same network. We can see that the option “classless-static-route” is given, with pointers to both the default route and the other subnet(s) on the same Neutron network. In order to modify Ironic-Inspector to use multiple subnets, we will need to extend instack-undercloud to support network segments. There is a patch in review to support segments in instack undercloud 0. Potential Workaround One possibility is to use an alternate method to DHCP/PXE boot, such as using DHCP configuration directly on the router, or to configure a host on the remote network which provides DHCP and PXE URLs, then provides routes back to the ironic-conductor and metadata server as part of the DHCP response. It is not always feasible for groups doing testing or development to configure DHCP relay on the switches. For proof-of-concept implementations of spine-and-leaf, we may want to configure all provisioning networks to be trunked back to the Undercloud. This would allow the Undercloud to provide DHCP for all networks without special switch configuration. In this case, the Undercloud would act as a router between subnets/VLANs. This should be considered a small-scale solution, as this is not as scalable as DHCP relay. The configuration file for dnsmasq is the same whether all subnets are local or remote, but dnsmasq may have to listen on multiple interfaces (today it only listens on br-ctlplane). The dnsmasq process currently runs with --bind-interface=tap-XXX, but the process will need to be run with either binding to multiple interfaces, or with --except-interface=lo and multiple interfaces bound to the namespace. For proof-of-concept deployments, as well as testing environments, it might make sense to run a DHCP relay on the Undercloud, and trunk all provisioning VLANs back to the Undercloud. This would allow dnsmasq to listen on the tap interface, and DHCP requests would be forwarded to the tap interface. The downside of this approach is that the Undercloud would need to have IP addresses on each of the trunked interfaces. Another option is to configure dedicated hosts or VMs to be used as DHCP relay and router for subnets on multiple VLANs, all of which would be trunked to the relay/router host, thus acting exactly like routing switches. Problem #2: Neutron’s model for a segmented network that spans multiple L2 domains uses the segment object to allow multiple subnets to be assigned to the same network. This functionality needs to be integrated into the Undercloud. Possible Solutions, Ideas, or Approaches: Implement Neutron segments on the undercloud. The spec for Neutron routed network segments 1 provides a schema that we can use to model a routed network. By implementing support for network segments, we can provide assign Ironic nodes to networks on routed subnets. This allows us to continue to use Neutron for IP address management, as ports are assigned by Neutron and tracked in the Neutron database on the Undercloud. See approach #1 below. Multiple Neutron networks (1 set per rack), to model all L2 segments. By using a different set of networks in each rack, this provides us with the flexibility to use different network architectures on a per-rack basis. Each rack could have its own set of networks, and we would no longer have to provide all networks in all racks. Additionally, a split-datacenter architecture would naturally have a different set of networks in each site, so this approach makes sense. This is detailed in approach #2 below. Multiple subnets per Neutron network. This is probably the best approach for provisioning, since Neutron is already able to handle DHCP relay with multiple subnets as part of the same network. Additionally, this allows a clean separation between local subnets associated with provisioning, and networks which are used in the overcloud, such as External networks in two different datacenters). This is covered in more detail in approach #3 below. Use another system for IPAM, instead of Neutron. Although we could use a database, flat file, or some other method to keep track of IP addresses, Neutron as an IPAM back-end provides many integration benefits. Neutron integrates DHCP, hardware switch port configuration (through the use of plugins), integration in Ironic, and other features such as IPv6 support. This has been deemed to be infeasible due to the level of effort required in replacing both Neutron and the Neutron DHCP server (dnsmasq). Approaches to Problem #2: Approach 1 (Implement Neutron segments on the Undercloud): The Neutron segments model provides a schema in Neutron that allows us to model the routed network. Using multiple subnets provides the flexibility we need without creating exponentially more resources. We would create the same provisioning network that we do today, but use multiple segments associated to different routed subnets. The disadvantage to this approach is that it makes it impossible to represent network VLANs with more than one IP subnet (Neutron technically supports more than one subnet per port). Currently TripleO only supports a single subnet per isolated network, so this should not be an issue. Approach 2 (Multiple Neutron networks (1 set per rack), to model all L2 segments): We will be using multiple networks to represent isolated networks in multiple L2 domains. One sticking point is that although Neutron will configure multiple routes for multiple subnets within a given network, we need to be able to both configure static IPs and routes, and be able to scale the network by adding additional subnets after initial deployment. Since we control addresses and routes on the host nodes using a combination of Heat templates and os-net-config, it is possible to use static routes to supernets to provide L2 adjacency. This approach only works for non-provisioning networks, since we rely on Neutron DHCP servers providing routes to adjacent subnets for the provisioning network. Example: Suppose 2 subnets are provided for the Internal API network: 172.19.1.0/24 and 172.19.2.0/24. We want all Internal API traffic to traverse the Internal API VLANs on both the controller and a remote compute node. The Internal API network uses different VLANs for the two nodes, so we need the routes on the hosts to point toward the Internal API gateway instead of the default gateway. This can be provided by a supernet route to 172.19.x.x pointing to the local gateway on each subnet (e.g. 172.19.1.1 and 172.19.2.1 on the respective subnets). This could be represented in os-net-config with the following: - type: interface name: nic3 addresses: - ip_netmask: {get_param: InternalApiIpSubnet} routes: - ip_netmask: {get_param: InternalApiSupernet} next_hop: {get_param: InternalApiRouter} Where InternalApiIpSubnet is the IP address on the local subnet, InternalApiSupernet is ‘172.19.0.0/16’, and InternalApiRouter is either 172.19.1.1 or 172.19.2.1 depending on which local subnet the host belongs to. The end result of this is that each host has a set of IP addresses and routes that isolate traffic by function. In order for the return traffic to also be isolated by function, similar routes must exist on both hosts, pointing to the local gateway on the local subnet for the larger supernet that contains all Internal API subnets. The downside of this is that we must require proper supernetting, and this may lead to larger blocks of IP addresses being used to provide ample space for scaling growth. For instance, in the example above an entire /16 network is set aside for up to 255 local subnets for the Internal API network. This could be changed into a more reasonable space, such as /18, if the number of local subnets will not exceed 64, etc. This will be less of an issue with native IPv6 than with IPv4, where scarcity is much more likely. Approach 3 (Multiple subnets per Neutron network): The approach we will use for the provisioning network will be to use multiple subnets per network, using Neutron segments. This will allow us to take advantage of Neutron’s ability to support multiple networks with DHCP relay. The DHCP server will supply the necessary routes via DHCP until the nodes are configured with a static IP post-deployment. Problem #3: Ironic introspection DHCP doesn’t yet support DHCP relay This makes it difficult to do introspection when the hosts are not on the same L2 domain as the controllers. Patches are either merged or in review to support DHCP relay. Possible Solutions, Ideas, or Approaches: A patch to support a dnsmasq PXE filter driver has been merged. This will allow us to support selective DHCP when using DHCP relay (where the packet is not coming from the MAC of the host but rather the MAC of the switch) 12. A patch has been merged to puppet-ironic to support multiple DHCP subnets for Ironic Inspector 13. A patch is in review to add support for multiple subnets for the provisioning network in the instack-undercloud scripts 14. For more information about solutions, please refer to the tripleo-routed-networks-ironic-inspector blueprint 5 and spec 6. Problem #4: The IP addresses on the provisioning network need to be static IPs for production. Possible Solutions, Ideas, or Approaches: Dan Prince wrote a patch 9 in Newton to convert the ctlplane network addresses to static addresses post-deployment. This will need to be refactored to support multiple provisioning subnets across routers. Solution Implementation This work is done and merged for the legacy use case. During the initial deployment, the nodes receive their IP address via DHCP, but during Heat deployment the os-net-config script is called, which writes static configuration files for the NICs with static IPs. This work will need to be refactored to support assigning IPs from the appropriate subnet, but the work will be part of the TripleO Heat Template refactoring listed in Problems #6, and #7 below. For the deployment model where the IPs are specified (ips-from-pool-all.yaml), we need to develop a model where the Control Plane IP can be specified on multiple deployment subnets. This may happen in a later cycle than the initial work being done to enable routed networks in TripleO. For more information, reference the tripleo-predictable-ctlplane-ips blueprint 7 and spec 8. Problem #5: Heat Support For Routed Networks The Neutron routed networks extensions were only added in recent releases, and there was a dependency on TripleO Heat Templates. Possible Solutions, Ideas or Approaches: Add the required objects to Heat. At minimum, we will probably have to add OS::Neutron::Segment, which represents layer 2 segments, the OS::Neutron::Networkwill be updated to support the l2-adjacencyattribute, OS::Neutron::Subnetand OS::Neutron:portwould be extended to support the segment_idattribute. Solution Implementation: Heat now supports the OS::Neutron::Segment resource. For example: heat_template_version: 2015-04-30 ... resources: ... the_resource: type: OS::Neutron::Segment properties: description: String name: String network: String network_type: String physical_network: String segmentation_id: Integer This work has been completed in Heat with this review 15. Problem #6: Static IP assignment: Choosing static IPs from the correct subnet Some roles, such as Compute, can likely be placed in any subnet, but we will need to keep certain roles co-located within the same set of L2 domains. For instance, whatever role is providing Neutron services will need all controllers in the same L2 domain for VRRP to work properly. The network interfaces will be configured using templates that create configuration files for os-net-config. The IP addresses that are written to each node’s configuration will need to be on the correct subnet for each host. In order for Heat to assign ports from the correct subnets, we will need to have a host-to-subnets mapping. Possible Solutions, Ideas or Approaches: The simplest implementation of this would probably be a mapping of role/index to a set of subnets, so that it is known to Heat that Controller-1 is in subnet set X and Compute-3 is in subnet set Y. We could associate particular subnets with roles, and then use one role per L2 domain (such as per-rack). The roles and templates should be refactored to allow for dynamic IP assignment within subnets associated with the role. We may wish to evaluate the possibility of storing the routed subnets in Neutron using the routed networks extensions that are still under development. This would provide additional flexibility, but is probably not required to implement separate subnets in each rack. A scalable long-term solution is to map which subnet the host is on during introspection. If we can identify the correct subnet for each interface, then we can correlate that with IP addresses from the correct allocation pool. This would have the advantage of not requiring a static mapping of role to node to subnet. In order to do this, additional integration would be required between Ironic and Neutron (to make Ironic aware of multiple subnets per network, and to add the ability to make that association during introspection). Solution Impelementation: Solutions 1 and 2 above have been implemented in the “composable roles” series of patches 16. The initial implementation uses separate Neutron networks for different L2 domains. These templates are responsible for assigning the isolated VLANs used for data plane and overcloud control planes, but does not address the provisioning network. Future work may refactor the non-provisioning networks to use segments, but for now non-provisioning networks must use different networks for different roles. Ironic autodiscovery may allow us to determine the subnet where each node is located without manual entry. More work is required to automate this process. Problem #7: Isolated Networking Requires Static Routes to Ensure Correct VLAN is Used In order to continue using the Isolated Networks model, routes will need to be in place on each node, to steer traffic to the correct VLAN interfaces. The routes are written when os-net-config first runs, but may change. We can’t just rely on the specific routes to other subnets, since the number of subnets will increase or decrease as racks are added or taken away. Rather than try to deal with constantly changing routes, we should use static routes that will not need to change, to avoid disruption on a running system. Possible Solutions, Ideas or Approaches: Require that supernets are used for various network groups. For instance, all the Internal API subnets would be part of a supernet, for instance 172.17.0.0/16 could be used, and broken up into many smaller subnets, such as /24. This would simplify the routes, since only a single route for 172.17.0.0/16 would be required pointing to the local router on the 172.17.x.0/24 network. Modify os-net-config so that routes can be updated without bouncing interfaces, and then run os-net-config on all nodes when scaling occurs. A review for this functionality was considered and abandeded 3. The patch was determined to have the potential to lead to instability. os-net-config configures static routes for each interface. If we can keep the routing simple (one route per functional network), then we would be able to isolate traffic onto functional VLANs like we do today. It would be a change to the existing workflow to have os-net-config run on updates as well as deployment, but if this were a non-impacting event (the interfaces didn’t have to be bounced), that would probably be OK. At a later time, the possibility of using dynamic routing should be considered, since it reduces the possibility of user error and is better suited to centralized management. SDN solutions are one way to provide this, or other approaches may be considered, such as setting up OVS tunnels. Proposed Change¶ The proposed changes are discussed below. Overview¶ In order to provide spine-and-leaf networking for deployments, several changes will have to be made to TripleO: Support for DHCP relay in Ironic and Neutron DHCP servers. Implemented in patch 15 and the patch series 17. Refactoring of TripleO Heat Templates network isolation to support multiple subnets per isolated network, as well as per-subnet and supernet routes. The bulk of this work is done in the patch series 16 and in patch 10. Changes to Infra CI to support testing. Documentation updates. Alternatives¶ The approach outlined here is very prescriptive, in that the networks must be known ahead of time, and the IP addresses must be selected from the appropriate pool. This is due to the reliance on static IP addresses provided by Heat. One alternative approach is to use DHCP servers to assign IP addresses on all hosts on all interfaces. This would simplify configuration within the Heat templates and environment files. Unfortunately, this was the original approach of TripleO, and it was deemed insufficient by end-users, who wanted stability of IP addresses, and didn’t want to have an external dependency on DHCP. Another approach is to use the DHCP server functionality in the network switch infrastructure in order to PXE boot systems, then assign static IP addresses after the PXE boot is done via DHCP. This approach only solves for part of the requirement: the net booting. It does not solve the desire to have static IP addresses on each network. This could be achieved by having static IP addresses in some sort of per-node map. However, this approach is not as scalable as programatically determining the IPs, since it only applies to a fixed number of hosts. We want to retain the ability of using Neutron as an IP address management (IPAM) back-end, ideally. Another approach which was considered was simply trunking all networks back to the Undercloud, so that dnsmasq could respond to DHCP requests directly, rather than requiring a DHCP relay. Unfortunately, this has already been identified as being unacceptable by some large operators, who have network architectures that make heavy use of L2 segregation via routers. This also won’t work well in situations where there is geographical separation between the VLANs, such as in split-site deployments. Security Impact¶ One of the major differences between spine-and-leaf and standard isolated networking is that the various subnets are connected by routers, rather than being completely isolated. This means that without proper ACLs on the routers, networks which should be private may be opened up to outside traffic. This should be addressed in the documentation, and it should be stressed that ACLs should be in place to prevent unwanted network traffic. For instance, the Internal API network is sensitive in that the database and message queue services run on that network. It is supposed to be isolated from outside connections. This can be achieved fairly easily if supernets are used, so that if all Internal API subnets are a part of the 172.19.0.0/16 supernet, an ACL rule will allow only traffic between Internal API IPs (this is a simplified example that could be applied to any Internal API VLAN, or as a global ACL): allow traffic from 172.19.0.0/16 to 172.19.0.0/16 deny traffic from * to 172.19.0.0/16 Other End User Impact¶ Deploying with spine-and-leaf will require additional parameters to provide the routing information and multiple subnets required. This will have to be documented. Furthermore, the validation scripts may need to be updated to ensure that the configuration is validated, and that there is proper connectivity between overcloud hosts. Performance Impact¶ Much of the traffic that is today made over layer 2 will be traversing layer 3 routing borders in this design. That adds some minimal latency and overhead, although in practice the difference may not be noticeable. One important consideration is that the routers must not be too overcommitted on their uplinks, and the routers must be monitored to ensure that they are not acting as a bottleneck, especially if complex access control lists are used. Other Deployer Impact¶ A spine-and-leaf deployment will be more difficult to troubleshoot than a deployment that simply uses a set of VLANs. The deployer may need to have more network expertise, or a dedicated network engineer may be needed to troubleshoot in some cases. Developer Impact¶ Spine-and-leaf is not easily tested in virt environments. This should be possible, but due to the complexity of setting up libvirt bridges and routes, we may want to provide a simulation of spine-and-leaf for use in virtual environments. This may involve building multiple libvirt bridges and routing between them on the Undercloud, or it may involve using a DHCP relay on the virt-host as well as routing on the virt-host to simulate a full routing switch. A plan for development and testing will need to be developed, since not every developer can be expected to have a routed environment to work in. It may take some time to develop a routed virtual environment, so initial work will be done on bare metal. Implementation¶ Assignee(s)¶ - Primary assignee: Dan Sneddon <dsneddon@redhat.com> Approver(s)¶ - Primary approver: Emilien Macchi <emacchi@redhat.com> Work Items¶ Add static IP assignment to Control Plane [DONE] Modify Ironic Inspector dnsmasq.confgeneration to allow export of multiple DHCP ranges, as described in Problem #1 and Problem #3. Evaluate the Routed Networks work in Neutron, to determine if it is required for spine-and-leaf, as described in Problem #2. Add OS::Neutron::Segment and l2-adjacency support to Heat, as described in Problem #5. This may or may not be a dependency for spine-and-leaf, based on the results of work item #3. Modify the Ironic-Inspector service to record the host-to-subnet mappings, perhaps during introspection, to address Problem #6. Add parameters to Isolated Networking model in Heat to support supernet routes for individual subnets, as described in Problem #7. Modify Isolated Networking model in Heat to support multiple subnets, as described in Problem #8. Add support for setting routes to supernets in os-net-config NIC templates, as described in the proposed solution to Problem #2. Implement support for iptables on the Controller, in order to mitigate the APIs potentially being reachable via remote routes. Alternatively, document the mitigation procedure using ACLs on the routers. Document the testing procedures. Modify the documentation in tripleo-docs to cover the spine-and-leaf case. Implementation Details¶ Workflow: Operator configures DHCP networks and IP address ranges Operator imports baremetal instackenv.json When introspection or deployment is run, the DHCP server receives the DHCP request from the baremetal host via DHCP relay If the node has not been introspected, reply with an IP address from the introspection pool* and the inspector PXE boot image If the node already has been introspected, then the server assumes this is a deployment attempt, and replies with the Neutron port IP address and the overcloud-full deployment image The Heat templates are processed which generate os-net-config templates, and os-net-config is run to assign static IPs from the correct subnets, as well as routes to other subnets via the router gateway addresses. The introspection pool will be different for each provisioning subnet. When using spine-and-leaf, the DHCP server will need to provide an introspection IP address on the appropriate subnet, depending on the information contained in the DHCP relay packet that is forwarded by the segment router. dnsmasq will automatically match the gateway address (GIADDR) of the router that forwarded the request to the subnet where the DHCP request was received, and will respond with an IP and gateway appropriate for that subnet. The above workflow for the DHCP server should allow for provisioning IPs on multiple subnets. Dependencies¶ There may be a dependency on the Neutron Routed Networks. This won’t be clear until a full evaluation is done on whether we can represent spine-and-leaf using only multiple subnets per network. There will be a dependency on routing switches that perform DHCP relay service for production spine-and-leaf deployments. Testing¶ In order to properly test this framework, we will need to establish at least one CI test that deploys spine-and-leaf. As discussed in this spec, it isn’t necessary to have a full routed bare metal environment in order to test this functionality, although there is some work to get it working in virtual environments such as OVB. For bare metal testing, it is sufficient to trunk all VLANs back to the Undercloud, then run DHCP proxy on the Undercloud to receive all the requests and forward them to br-ctlplane, where dnsmasq listens. This will provide a substitute for routers running DHCP relay. For Neutron DHCP, some modifications to the iptables rule may be required to ensure that all DHCP requests from the overcloud nodes are received by the DHCP proxy and/or the Neutron dnsmasq process running in the dhcp-agent namespace. Documentation Impact¶ The procedure for setting up a dev environment will need to be documented, and a work item mentions this requirement. The TripleO docs will need to be updated to include detailed instructions for deploying in a spine-and-leaf environment, including the environment setup. Covering specific vendor implementations of switch configurations is outside this scope, but a specific overview of required configuration options should be included, such as enabling DHCP relay (or “helper-address” as it is also known) and setting the Undercloud as a server to receive DHCP requests. The updates to TripleO docs will also have to include a detailed discussion of choices to be made about IP addressing before a deployment. If supernets are to be used for network isolation, then a good plan for IP addressing will be required to ensure scalability in the future. References¶ - 0 Review: TripleO Heat Templates: Tripleo routed networks ironic inspector, and Undercloud - 1 Spec: Routed Networks for Neutron - 3 Review: Modify os-net-config to make changes without bouncing interface - 5 Blueprint: Modify TripleO Ironic Inspector to PXE Boot Via DHCP Relay - 6 Spec: Modify TripleO Ironic Inspector to PXE Boot Via DHCP Relay - 7 Blueprint: User-specifiable Control Plane IP on TripleO Routed Isolated Networks - 8 Spec: User-specifiable Control Plane IP on TripleO Routed Isolated Networks - 9 Review: Configure ctlplane network with a static IP - 10(1,2) Review: Neutron: Make “on-link” routes for subnets optional - 11 Review: Ironic Inspector: Make “on-link” routes for subnets optional - 12 Review: Ironic Inspector: Introducing a dnsmasq PXE filter driver - 13 Review: Multiple DHCP Subnets for Ironic Inspector - 14 Review: Instack Undercloud: Add support for multiple inspection subnets - 15(1,2) Review: DHCP Agent: Separate local from non-local subnets - 16(1,2) Review Series: topic:bp/composable-networks - 17 Review Series: project:openstack/networking-baremetal
https://specs.openstack.org/openstack/tripleo-specs/specs/queens/tripleo-routed-networks-deployment.html
CC-MAIN-2021-04
en
refinedweb
- Product - Customers - Solutions Unified service tagging ties Datadog telemetry together through the use of three reserved tags: env, service, and version. With these three tags you can: Unified service tagging requires setup of the Datadog Agent. Unified service tagging requires a tracer version that supports new configurations of the reserved tags. More information can be found per language in the setup instructions. Unified service tagging requires knowledge of configuring tags. If you are unsure of how to configure tags, read the Getting Started with Tagging and Assigning Tags documentation before proceeding to configuration. To begin configuration of unified service tagging, choose your environment: In containerized environments, env, service, and version are set through the service’s environment variables or labels (for example, Kubernetes deployment and pod labels, Docker container labels). The Datadog Agent detects this tagging configuration and applies it to the data it collects from containers. To setup unified service tagging in a containerized environment: Enable Autodiscovery. This allows the Datadog Agent to automatically identify services running on a specific container and gathers data from those services to map environment variables to the env, service, and version tags. If you are using Docker, make sure the Agent can access your container’s Docker socket. This allows the Agent detect the environment variables and map them to the standard tags. Configure your environment based on either full configuration or partial configuration detailed below. To get the full range of unified service tagging when using Kubernetes, add environment variables to both the deployment object level and the pod template spec level: apiVersion: apps/v1 kind: Deployment metadata: labels: tags.datadoghq.com/env: "<ENV>" tags.datadoghq.com/service: "<SERVICE>" tags.datadoghq.com/version: "<VERSION>" ... template: metadata: labels: tags.datadoghq.com/env: "<ENV>" tags.datadoghq.com/service: "<SERVICE>" tags.datadoghq.com/version: "'] To configure pod-level metrics, add the following standard labels ( tags.datadoghq.com) to the pod spec of a Deployment, StatefulSet, or Job: template: metadata: labels: tags.datadoghq.com/env: "<ENV>" tags.datadoghq.com/service: "<SERVICE>" tags.datadoghq.com/version: "<VERSION>" These labels cover pod-level Kubernetes CPU, memory, network, and disk metrics, and can be used for injecting DD_ENV, DD_SERVICE, and DD_VERSION into your service’s container through Kubernetes’s downward API. If you have multiple containers per pod, you can specify standard labels by container: tags.datadoghq.com/<container-name>.env tags.datadoghq.com/<container-name>.service tags.datadoghq.com/<container-name>.version To configure Kubernetes State Metrics: Set join_standard_tags to true in your configuration file. Add the same standard labels to the collection of labels for the parent resource (e.g., Deployment). apiVersion: apps/v1 kind: Deployment metadata: labels: tags.datadoghq.com/env: "<ENV>" tags.datadoghq.com/service: "<SERVICE>" tags.datadoghq.com/version: "<VERSION>" spec: template: metadata: labels: tags.datadoghq.com/env: "<ENV>" tags.datadoghq.com/service: "<SERVICE>" tags.datadoghq.com/version: "<VERSION>" To configure APM Tracer and StatsD client environment variables, use the Kubernetes’s downward API in the format'] Set the DD_ENV, DD_SERVICE, and DD_VERSION environment variables and corresponding Docker labels for your container to your get the full range of unified service tagging. The values for service and version can be provided in the Dockerfile: ENV DD_SERVICE <SERVICE> ENV DD_VERSION <VERSION> LABEL com.datadoghq.tags.service="<SERVICE>" LABEL com.datadoghq.tags.version="<VERSION>" Since env is likely determined at deploy time, you can inject the environment variable and label later: docker run -e DD_ENV=<ENV> -l com.datadoghq.tags.env=<ENV> ... You may also prefer to set everything at deploy time: docker run -e DD_ENV="<ENV>" \ -e DD_SERVICE="<SERVICE>" \ -e DD_VERSION="<VERSION>" \ -l com.datadoghq.tags.env="<ENV>" \ -l com.datadoghq.tags.service="<SERVICE>" \ -l com.datadoghq.tags.version="<VERSION>" \ ... If your service has no need for the Datadog environment variables (for example, third party software like Redis, PostgreSQL, NGINX, and applications not traced by APM) you can just use the Docker labels: com.datadoghq.tags.env com.datadoghq.tags.service com.datadoghq.tags.version As explained in the full configuration, these labels can be set in a Dockerfile or as arguments for launching the container. Set the DD_ENV, DD_SERVICE, and DD_VERSION environment variables and corresponding Docker labels in your container’s runtime environment to your get the full range of unified service tagging. For instance, you can set all of this configuration in one place through your ECS task definition: "environment": [ { "name": "DD_ENV", "value": "<ENV>" }, { "name": "DD_SERVICE", "value": "<SERVICE>" }, { "name": "DD_VERSION", "value": "<VERSION>" } "dockerLabels": { "com.datadoghq.tags.env": "<ENV>", "com.datadoghq.tags.service": "<SERVICE>", "com.datadoghq.tags.version": "<VERSION>" } ] If your service has no need for the Datadog environment variables (for example, third party software like Redis, PostgreSQL, NGINX, and applications not traced by APM) you can just use the Docker labels in your ECS task definition: "dockerLabels": { "com.datadoghq.tags.env": "<ENV>", "com.datadoghq.tags.service": "<SERVICE>", "com.datadoghq.tags.version": "<VERSION>" } Depending on how you build and deploy your services' binaries or executables, you may have several options available for setting environment variables. Since you may run one or more services per host, it is recommended that these environment variables be scoped to a single process. To form a single point of configuration for all telemetry emitted directly from your service’s runtime for traces, logs, and StatsD metrics, you can either: Export the environment variables in the command for your executable: DD_ENV=<env> DD_SERVICE=<service> DD_VERSION=<version> /bin/my-service Or use Chef, Ansible, or another orchestration tool to populate a service’s systemd or initd configuration file with the DD environment variables. That way when the service process is started it will have access to those variables. When configuring your traces for unified service tagging: Configure the APM Tracer with DD_ENV to keep the definition of env closer to the application that is generating the traces. This method allows the env tag to be sourced automatically from a tag in the span metadata. Configure spans with DD_VERSION to add version to all spans that fall under the service that belongs to the tracer (generally DD_SERVICE). This means that if your service creates spans with the name of an external service, those spans will not receive version as a tag. As long as version is present in spans, it will be added to trace metrics generated from those spans. The version can be added manually in-code or automatically by the APM Tracer. When configured, at the very least these will be used by the APM and Dogstatsd clients to tag trace data and StatsD metrics with env, service, and version. If enabled, the APM tracer will also inject the values of these variables into your logs. Note: There can only be one service per span. Trace metrics generally have a single service as well. However, if you have a different service defined in your hosts' tags, that configured service tag will show up on all trace metrics emitted from that host. If you’re using connected logs and traces, enable automatic logs injection if supported for your APM Tracer. The APM Tracer will then automatically inject env, service, and version into your logs, thereby eliminating manual configuration for those fields elsewhere. Note: The PHP Tracer does not support configuration of unified service tagging for logs. Tags are added in an append-only fashion for custom statsd metrics. For example, if you have two different values for env, the metrics will be tagged with both environments. There is no order in which one tag will override another of the same name. If your service has access to DD_ENV, DD_SERVICE, and DD_VERSION, then the DogStatsD client will automatically add the corresponding tags to your custom metrics. Note: The Datadog DogStatsD clients for .NET and PHP do not yet support this functionality. env and service can also be added to your infrastructure metrics. The tagging configuration for service metrics lives closer to the Agent in non-containerized contexts. Given that this configuration does not change for each invocation of a service’s process, adding version to the configuration is not recommended. Set the following configuration in the Agent’s main configuration file: env: <ENV> tags: - service:<SERVICE> This setup guarantees consistent tagging of env and service for all data emitted by the Agent. Set the following configuration in the Agent’s main configuration file: env: <ENV> To get unique service tags on CPU, memory, and disk I/O metrics at the process level, you can configure a process check: init_config: instances: - name: web-app search_string: ["/bin/web-app"] exact_match: false service: web-app - name: nginx search_string: ["nginx"] exact_match: false service: nginx-web-app Note: If you already have a service tag set globally in your Agent’s main configuration file, the process metrics will be tagged with two services. Since this can cause confusion with interpreting the metrics, it is recommended to configure the service tag only in the configuration of the process check. Depending on how you build and deploy your AWS Lambda-based serverless applications, you may have several options available for applying the env, service and version tags to metrics, traces and logs. Note: These tags are specified through AWS resource tags instead of environment variables. Specifically, the DD_ENV, DD_SERVICE and DD_VERSION environment variables are not supported. Tag your Lambda functions using the tags option: # serverless.yml service: service-name provider: name: aws # to apply the tags to all functions tags: env: "<ENV>" service: "<SERVICE>" version: "<VERSION>" functions: hello: # this function will inherit the service level tags config above handler: handler.hello world: # this function will overwrite the tags handler: handler.users tags: env: "<ENV>" service: "<SERVICE>" version: "<VERSION>" If you have installed the Datadog serverless plugin, the plugin automatically tags the Lambda functions with the service and env tags using the service and stage values from the serverless application definition, unless a service or env tag already exists. Tag your Lambda functions using the Tags option: AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Resources: MyLambdaFunction: Type: AWS::Serverless::Function Properties: Tags: env: "<ENV>" service: "<SERVICE>" version: "<VERSION>" If you have installed the Datadog serverless macro, you can also specify a service and env tag as parameters: Transform: - AWS::Serverless-2016-10-31 - Name: DatadogServerless Parameters: service: "<SERVICE>" env: "<ENV>" Tag your app, stack or individual Lambda functions using the Tags class. If you have installed the Datadog serverless macro, you can also specify a service and env tag as parameters: import * as cdk from "@aws-cdk/core"; class CdkStack extends cdk.Stack { constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) { super(scope, id, props); this.addTransform("DatadogServerless"); new cdk.CfnMapping(this, "Datadog", { mapping: { Parameters: { service: "<SERVICE>", env: "<ENV>", }, }, }); } } Apply the env, service and version tags following the AWS instructions for Tagging Lambda Functions. Ensure the DdFetchLambdaTags option is set to true on the CloudFormation stack for your Datadog Forwarder. This option defaults to true since version 3.19.0. Additional helpful documentation, links, and articles:
https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging/?lang_pref=en
CC-MAIN-2021-04
en
refinedweb