Feb 12 21:57:39.028569 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 21:57:39.028589 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 21:57:39.028599 kernel: BIOS-provided physical RAM map:
Feb 12 21:57:39.028605 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 21:57:39.028611 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 21:57:39.028617 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 21:57:39.028627 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 12 21:57:39.028633 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 12 21:57:39.028639 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 12 21:57:39.028646 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 21:57:39.028652 kernel: NX (Execute Disable) protection: active
Feb 12 21:57:39.028658 kernel: SMBIOS 2.7 present.
Feb 12 21:57:39.028665 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 12 21:57:39.028671 kernel: Hypervisor detected: KVM
Feb 12 21:57:39.028682 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 21:57:39.028689 kernel: kvm-clock: cpu 0, msr 50faa001, primary cpu clock
Feb 12 21:57:39.028696 kernel: kvm-clock: using sched offset of 7200591698 cycles
Feb 12 21:57:39.028703 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 21:57:39.028710 kernel: tsc: Detected 2499.992 MHz processor
Feb 12 21:57:39.028718 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 21:57:39.028727 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 21:57:39.028734 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 12 21:57:39.028742 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 12 21:57:39.028831 kernel: Using GB pages for direct mapping
Feb 12 21:57:39.028841 kernel: ACPI: Early table checksum verification disabled
Feb 12 21:57:39.028849 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 12 21:57:39.028856 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 12 21:57:39.028863 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 12 21:57:39.028870 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 12 21:57:39.028880 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 12 21:57:39.028887 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 12 21:57:39.028895 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 12 21:57:39.028901 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 12 21:57:39.028909 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 12 21:57:39.028916 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 12 21:57:39.028923 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 12 21:57:39.028930 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 12 21:57:39.028939 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 12 21:57:39.028946 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 12 21:57:39.028954 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 12 21:57:39.028964 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 12 21:57:39.028972 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 12 21:57:39.028979 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 12 21:57:39.028987 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 12 21:57:39.028997 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 12 21:57:39.029004 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 12 21:57:39.029012 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 12 21:57:39.029019 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 21:57:39.029026 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 21:57:39.029034 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 12 21:57:39.029041 kernel: NUMA: Initialized distance table, cnt=1
Feb 12 21:57:39.029049 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 12 21:57:39.029058 kernel: Zone ranges:
Feb 12 21:57:39.029066 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 21:57:39.029074 kernel:   DMA32    [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 12 21:57:39.029081 kernel:   Normal   empty
Feb 12 21:57:39.029089 kernel: Movable zone start for each node
Feb 12 21:57:39.029096 kernel: Early memory node ranges
Feb 12 21:57:39.029104 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 21:57:39.029111 kernel:   node   0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 12 21:57:39.029119 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 12 21:57:39.029129 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 21:57:39.029237 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 21:57:39.029246 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 12 21:57:39.029254 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 12 21:57:39.029261 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 21:57:39.029269 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 12 21:57:39.029276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 21:57:39.029284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 21:57:39.029291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 21:57:39.029302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 21:57:39.029311 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 21:57:39.029355 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 21:57:39.029363 kernel: TSC deadline timer available
Feb 12 21:57:39.029370 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 21:57:39.029378 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 12 21:57:39.029385 kernel: Booting paravirtualized kernel on KVM
Feb 12 21:57:39.029393 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 21:57:39.029401 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 21:57:39.029411 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 21:57:39.029419 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 21:57:39.029427 kernel: pcpu-alloc: [0] 0 1 
Feb 12 21:57:39.029434 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Feb 12 21:57:39.029455 kernel: kvm-guest: PV spinlocks enabled
Feb 12 21:57:39.029463 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 21:57:39.029471 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 506242
Feb 12 21:57:39.029478 kernel: Policy zone: DMA32
Feb 12 21:57:39.029487 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 21:57:39.029498 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 21:57:39.029505 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 21:57:39.029513 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 21:57:39.029520 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 21:57:39.029528 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121024K reserved, 0K cma-reserved)
Feb 12 21:57:39.029536 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 21:57:39.029544 kernel: Kernel/User page tables isolation: enabled
Feb 12 21:57:39.029551 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 21:57:39.029561 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 21:57:39.029568 kernel: rcu: Hierarchical RCU implementation.
Feb 12 21:57:39.029576 kernel: rcu:         RCU event tracing is enabled.
Feb 12 21:57:39.029584 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 21:57:39.029592 kernel:         Rude variant of Tasks RCU enabled.
Feb 12 21:57:39.029599 kernel:         Tracing variant of Tasks RCU enabled.
Feb 12 21:57:39.029607 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 21:57:39.029615 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 21:57:39.029623 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 21:57:39.029643 kernel: random: crng init done
Feb 12 21:57:39.029654 kernel: Console: colour VGA+ 80x25
Feb 12 21:57:39.029662 kernel: printk: console [ttyS0] enabled
Feb 12 21:57:39.029670 kernel: ACPI: Core revision 20210730
Feb 12 21:57:39.029677 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 12 21:57:39.029685 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 21:57:39.029693 kernel: x2apic enabled
Feb 12 21:57:39.029700 kernel: Switched APIC routing to physical x2apic.
Feb 12 21:57:39.029708 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093255d7c, max_idle_ns: 440795319144 ns
Feb 12 21:57:39.029718 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499992)
Feb 12 21:57:39.029726 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 12 21:57:39.029733 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 12 21:57:39.029741 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 21:57:39.029756 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 21:57:39.029766 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 21:57:39.029774 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 21:57:39.029782 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 12 21:57:39.029790 kernel: RETBleed: Vulnerable
Feb 12 21:57:39.029798 kernel: Speculative Store Bypass: Vulnerable
Feb 12 21:57:39.029806 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 21:57:39.029813 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 21:57:39.029821 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 12 21:57:39.029829 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 21:57:39.029839 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 21:57:39.029848 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 21:57:39.029855 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 12 21:57:39.029863 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 12 21:57:39.029871 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 12 21:57:39.029882 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 12 21:57:39.029890 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 12 21:57:39.029897 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 12 21:57:39.029905 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 12 21:57:39.029913 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Feb 12 21:57:39.029921 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Feb 12 21:57:39.029929 kernel: x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
Feb 12 21:57:39.029937 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
Feb 12 21:57:39.029944 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 12 21:57:39.029952 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
Feb 12 21:57:39.029960 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 12 21:57:39.029968 kernel: Freeing SMP alternatives memory: 32K
Feb 12 21:57:39.029978 kernel: pid_max: default: 32768 minimum: 301
Feb 12 21:57:39.029986 kernel: LSM: Security Framework initializing
Feb 12 21:57:39.029993 kernel: SELinux:  Initializing.
Feb 12 21:57:39.030001 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 21:57:39.030009 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 21:57:39.030017 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 12 21:57:39.030025 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 12 21:57:39.030033 kernel: signal: max sigframe size: 3632
Feb 12 21:57:39.030041 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 21:57:39.030050 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 21:57:39.030060 kernel: smp: Bringing up secondary CPUs ...
Feb 12 21:57:39.030068 kernel: x86: Booting SMP configuration:
Feb 12 21:57:39.030076 kernel: .... node  #0, CPUs:      #1
Feb 12 21:57:39.030084 kernel: kvm-clock: cpu 1, msr 50faa041, secondary cpu clock
Feb 12 21:57:39.030092 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Feb 12 21:57:39.030100 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 12 21:57:39.030109 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 12 21:57:39.030117 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 21:57:39.030125 kernel: smpboot: Max logical packages: 1
Feb 12 21:57:39.030135 kernel: smpboot: Total of 2 processors activated (9999.96 BogoMIPS)
Feb 12 21:57:39.030143 kernel: devtmpfs: initialized
Feb 12 21:57:39.030151 kernel: x86/mm: Memory block size: 128MB
Feb 12 21:57:39.030159 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 21:57:39.030167 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 21:57:39.030176 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 21:57:39.030184 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 21:57:39.030192 kernel: audit: initializing netlink subsys (disabled)
Feb 12 21:57:39.030200 kernel: audit: type=2000 audit(1707775057.881:1): state=initialized audit_enabled=0 res=1
Feb 12 21:57:39.030210 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 21:57:39.030218 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 21:57:39.030226 kernel: cpuidle: using governor menu
Feb 12 21:57:39.030234 kernel: ACPI: bus type PCI registered
Feb 12 21:57:39.030242 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 21:57:39.030250 kernel: dca service started, version 1.12.1
Feb 12 21:57:39.030258 kernel: PCI: Using configuration type 1 for base access
Feb 12 21:57:39.030266 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 21:57:39.030274 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 21:57:39.030284 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 21:57:39.030292 kernel: ACPI: Added _OSI(Module Device)
Feb 12 21:57:39.030300 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 21:57:39.030308 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 21:57:39.030316 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 21:57:39.030323 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 21:57:39.030331 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 21:57:39.030339 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 21:57:39.030347 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 12 21:57:39.030358 kernel: ACPI: Interpreter enabled
Feb 12 21:57:39.030366 kernel: ACPI: PM: (supports S0 S5)
Feb 12 21:57:39.030374 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 21:57:39.030382 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 21:57:39.030390 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 12 21:57:39.030397 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 21:57:39.030562 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 21:57:39.030649 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 12 21:57:39.030662 kernel: acpiphp: Slot [3] registered
Feb 12 21:57:39.030670 kernel: acpiphp: Slot [4] registered
Feb 12 21:57:39.030678 kernel: acpiphp: Slot [5] registered
Feb 12 21:57:39.030686 kernel: acpiphp: Slot [6] registered
Feb 12 21:57:39.030694 kernel: acpiphp: Slot [7] registered
Feb 12 21:57:39.030702 kernel: acpiphp: Slot [8] registered
Feb 12 21:57:39.030710 kernel: acpiphp: Slot [9] registered
Feb 12 21:57:39.030718 kernel: acpiphp: Slot [10] registered
Feb 12 21:57:39.030726 kernel: acpiphp: Slot [11] registered
Feb 12 21:57:39.030736 kernel: acpiphp: Slot [12] registered
Feb 12 21:57:39.030744 kernel: acpiphp: Slot [13] registered
Feb 12 21:57:39.030752 kernel: acpiphp: Slot [14] registered
Feb 12 21:57:39.030760 kernel: acpiphp: Slot [15] registered
Feb 12 21:57:39.030768 kernel: acpiphp: Slot [16] registered
Feb 12 21:57:39.030776 kernel: acpiphp: Slot [17] registered
Feb 12 21:57:39.030784 kernel: acpiphp: Slot [18] registered
Feb 12 21:57:39.030792 kernel: acpiphp: Slot [19] registered
Feb 12 21:57:39.030800 kernel: acpiphp: Slot [20] registered
Feb 12 21:57:39.030810 kernel: acpiphp: Slot [21] registered
Feb 12 21:57:39.030818 kernel: acpiphp: Slot [22] registered
Feb 12 21:57:39.030826 kernel: acpiphp: Slot [23] registered
Feb 12 21:57:39.030834 kernel: acpiphp: Slot [24] registered
Feb 12 21:57:39.030842 kernel: acpiphp: Slot [25] registered
Feb 12 21:57:39.030849 kernel: acpiphp: Slot [26] registered
Feb 12 21:57:39.030857 kernel: acpiphp: Slot [27] registered
Feb 12 21:57:39.030865 kernel: acpiphp: Slot [28] registered
Feb 12 21:57:39.030873 kernel: acpiphp: Slot [29] registered
Feb 12 21:57:39.030881 kernel: acpiphp: Slot [30] registered
Feb 12 21:57:39.030982 kernel: acpiphp: Slot [31] registered
Feb 12 21:57:39.030991 kernel: PCI host bridge to bus 0000:00
Feb 12 21:57:39.031087 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 12 21:57:39.031164 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 12 21:57:39.031387 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 21:57:39.031488 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 12 21:57:39.031564 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 21:57:39.031663 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 21:57:39.031754 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 21:57:39.031842 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 12 21:57:39.031925 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 12 21:57:39.032006 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Feb 12 21:57:39.032231 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 12 21:57:39.032316 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 12 21:57:39.032400 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 12 21:57:39.032498 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 12 21:57:39.032579 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 12 21:57:39.032660 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 12 21:57:39.032748 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 12 21:57:39.032886 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 12 21:57:39.033063 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 12 21:57:39.033154 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 21:57:39.033243 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 12 21:57:39.033325 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 12 21:57:39.033412 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 12 21:57:39.033513 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 12 21:57:39.033524 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 21:57:39.033536 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 21:57:39.033544 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 21:57:39.033552 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 21:57:39.033561 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 21:57:39.033569 kernel: iommu: Default domain type: Translated 
Feb 12 21:57:39.033577 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb 12 21:57:39.033669 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 12 21:57:39.033750 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 21:57:39.033831 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 12 21:57:39.033844 kernel: vgaarb: loaded
Feb 12 21:57:39.033852 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 21:57:39.033861 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 12 21:57:39.033869 kernel: PTP clock support registered
Feb 12 21:57:39.033877 kernel: PCI: Using ACPI for IRQ routing
Feb 12 21:57:39.033885 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 21:57:39.033894 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 21:57:39.033902 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 12 21:57:39.033912 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 12 21:57:39.033920 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 12 21:57:39.033928 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 21:57:39.033936 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 21:57:39.033944 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 21:57:39.033953 kernel: pnp: PnP ACPI init
Feb 12 21:57:39.033961 kernel: pnp: PnP ACPI: found 5 devices
Feb 12 21:57:39.033969 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 21:57:39.033977 kernel: NET: Registered PF_INET protocol family
Feb 12 21:57:39.033988 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 21:57:39.033996 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 12 21:57:39.034004 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 21:57:39.034012 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 21:57:39.034021 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 12 21:57:39.034029 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 12 21:57:39.034037 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 21:57:39.034046 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 21:57:39.034054 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 21:57:39.034064 kernel: NET: Registered PF_XDP protocol family
Feb 12 21:57:39.034142 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 12 21:57:39.034218 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 12 21:57:39.034291 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 21:57:39.034364 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 12 21:57:39.034527 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 21:57:39.034621 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 21:57:39.034636 kernel: PCI: CLS 0 bytes, default 64
Feb 12 21:57:39.034644 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 21:57:39.034653 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093255d7c, max_idle_ns: 440795319144 ns
Feb 12 21:57:39.034661 kernel: clocksource: Switched to clocksource tsc
Feb 12 21:57:39.034670 kernel: Initialise system trusted keyrings
Feb 12 21:57:39.034678 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 12 21:57:39.034686 kernel: Key type asymmetric registered
Feb 12 21:57:39.034694 kernel: Asymmetric key parser 'x509' registered
Feb 12 21:57:39.034702 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 21:57:39.034712 kernel: io scheduler mq-deadline registered
Feb 12 21:57:39.034720 kernel: io scheduler kyber registered
Feb 12 21:57:39.034728 kernel: io scheduler bfq registered
Feb 12 21:57:39.034737 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 21:57:39.034745 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 21:57:39.034754 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 21:57:39.034762 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 21:57:39.034770 kernel: i8042: Warning: Keylock active
Feb 12 21:57:39.034778 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 21:57:39.034789 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 21:57:39.035053 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 12 21:57:39.035142 kernel: rtc_cmos 00:00: registered as rtc0
Feb 12 21:57:39.035218 kernel: rtc_cmos 00:00: setting system clock to 2024-02-12T21:57:38 UTC (1707775058)
Feb 12 21:57:39.035293 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 12 21:57:39.035303 kernel: intel_pstate: CPU model not supported
Feb 12 21:57:39.035311 kernel: NET: Registered PF_INET6 protocol family
Feb 12 21:57:39.035319 kernel: Segment Routing with IPv6
Feb 12 21:57:39.035331 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 21:57:39.035339 kernel: NET: Registered PF_PACKET protocol family
Feb 12 21:57:39.035347 kernel: Key type dns_resolver registered
Feb 12 21:57:39.035355 kernel: IPI shorthand broadcast: enabled
Feb 12 21:57:39.035364 kernel: sched_clock: Marking stable (514742195, 304311212)->(990441104, -171387697)
Feb 12 21:57:39.035372 kernel: registered taskstats version 1
Feb 12 21:57:39.035380 kernel: Loading compiled-in X.509 certificates
Feb 12 21:57:39.035388 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 21:57:39.035396 kernel: Key type .fscrypt registered
Feb 12 21:57:39.035407 kernel: Key type fscrypt-provisioning registered
Feb 12 21:57:39.035415 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 21:57:39.035424 kernel: ima: Allocated hash algorithm: sha1
Feb 12 21:57:39.035432 kernel: ima: No architecture policies found
Feb 12 21:57:39.035440 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 21:57:39.035464 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 21:57:39.035473 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 21:57:39.035481 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 21:57:39.035490 kernel: Run /init as init process
Feb 12 21:57:39.035500 kernel:   with arguments:
Feb 12 21:57:39.035508 kernel:     /init
Feb 12 21:57:39.035516 kernel:   with environment:
Feb 12 21:57:39.035524 kernel:     HOME=/
Feb 12 21:57:39.035535 kernel:     TERM=linux
Feb 12 21:57:39.035689 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 21:57:39.035701 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 21:57:39.035713 systemd[1]: Detected virtualization amazon.
Feb 12 21:57:39.035725 systemd[1]: Detected architecture x86-64.
Feb 12 21:57:39.035733 systemd[1]: Running in initrd.
Feb 12 21:57:39.035742 systemd[1]: No hostname configured, using default hostname.
Feb 12 21:57:39.035824 systemd[1]: Hostname set to <localhost>.
Feb 12 21:57:39.035848 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 21:57:39.035859 systemd[1]: Queued start job for default target initrd.target.
Feb 12 21:57:39.035868 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 21:57:39.035877 systemd[1]: Reached target cryptsetup.target.
Feb 12 21:57:39.035887 systemd[1]: Reached target paths.target.
Feb 12 21:57:39.036022 systemd[1]: Reached target slices.target.
Feb 12 21:57:39.036030 systemd[1]: Reached target swap.target.
Feb 12 21:57:39.036039 systemd[1]: Reached target timers.target.
Feb 12 21:57:39.036049 systemd[1]: Listening on iscsid.socket.
Feb 12 21:57:39.036061 systemd[1]: Listening on iscsiuio.socket.
Feb 12 21:57:39.036070 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 21:57:39.036079 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 21:57:39.036088 systemd[1]: Listening on systemd-journald.socket.
Feb 12 21:57:39.036097 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 21:57:39.036106 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 21:57:39.036115 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 21:57:39.036124 systemd[1]: Reached target sockets.target.
Feb 12 21:57:39.036133 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 21:57:39.036145 systemd[1]: Finished network-cleanup.service.
Feb 12 21:57:39.036156 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 21:57:39.036165 systemd[1]: Starting systemd-journald.service...
Feb 12 21:57:39.036174 systemd[1]: Starting systemd-modules-load.service...
Feb 12 21:57:39.036183 systemd[1]: Starting systemd-resolved.service...
Feb 12 21:57:39.036192 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 21:57:39.036201 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 21:57:39.036210 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 21:57:39.036220 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 21:57:39.036231 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 21:57:39.036240 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 21:57:39.036253 systemd-journald[184]: Journal started
Feb 12 21:57:39.036308 systemd-journald[184]: Runtime Journal (/run/log/journal/ec29ff7380e80b8297dc46cce644b432) is 4.8M, max 38.7M, 33.9M free.
Feb 12 21:57:39.033401 systemd-resolved[186]: Positive Trust Anchors:
Feb 12 21:57:39.184047 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 21:57:39.184080 kernel: Bridge firewalling registered
Feb 12 21:57:39.184100 kernel: SCSI subsystem initialized
Feb 12 21:57:39.184116 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 21:57:39.184182 kernel: device-mapper: uevent: version 1.0.3
Feb 12 21:57:39.184204 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 21:57:39.033415 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 21:57:39.187879 systemd[1]: Started systemd-journald.service.
Feb 12 21:57:39.187906 kernel: audit: type=1130 audit(1707775059.183:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.033461 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 21:57:39.040142 systemd-resolved[186]: Defaulting to hostname 'linux'.
Feb 12 21:57:39.212913 kernel: audit: type=1130 audit(1707775059.200:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.042476 systemd-modules-load[185]: Inserted module 'overlay'
Feb 12 21:57:39.221106 kernel: audit: type=1130 audit(1707775059.212:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.076596 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 12 21:57:39.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.123517 systemd-modules-load[185]: Inserted module 'dm_multipath'
Feb 12 21:57:39.234393 kernel: audit: type=1130 audit(1707775059.221:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.234422 kernel: audit: type=1130 audit(1707775059.228:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.200967 systemd[1]: Started systemd-resolved.service.
Feb 12 21:57:39.213601 systemd[1]: Finished systemd-modules-load.service.
Feb 12 21:57:39.222002 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 21:57:39.234602 systemd[1]: Reached target nss-lookup.target.
Feb 12 21:57:39.236876 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 21:57:39.239469 systemd[1]: Starting systemd-sysctl.service...
Feb 12 21:57:39.256950 systemd[1]: Finished systemd-sysctl.service.
Feb 12 21:57:39.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.263460 kernel: audit: type=1130 audit(1707775059.256:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.265235 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 21:57:39.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.269263 systemd[1]: Starting dracut-cmdline.service...
Feb 12 21:57:39.285458 kernel: audit: type=1130 audit(1707775059.267:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.301532 dracut-cmdline[207]: dracut-dracut-053
Feb 12 21:57:39.304271 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 21:57:39.377473 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 21:57:39.392481 kernel: iscsi: registered transport (tcp)
Feb 12 21:57:39.417807 kernel: iscsi: registered transport (qla4xxx)
Feb 12 21:57:39.417929 kernel: QLogic iSCSI HBA Driver
Feb 12 21:57:39.456594 systemd[1]: Finished dracut-cmdline.service.
Feb 12 21:57:39.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.466480 kernel: audit: type=1130 audit(1707775059.458:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:39.466561 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 21:57:39.550515 kernel: raid6: avx512x4 gen() 15234 MB/s
Feb 12 21:57:39.568493 kernel: raid6: avx512x4 xor()  5370 MB/s
Feb 12 21:57:39.586492 kernel: raid6: avx512x2 gen() 13866 MB/s
Feb 12 21:57:39.603496 kernel: raid6: avx512x2 xor() 17569 MB/s
Feb 12 21:57:39.620499 kernel: raid6: avx512x1 gen() 13270 MB/s
Feb 12 21:57:39.638493 kernel: raid6: avx512x1 xor() 18314 MB/s
Feb 12 21:57:39.656524 kernel: raid6: avx2x4   gen()  9684 MB/s
Feb 12 21:57:39.674481 kernel: raid6: avx2x4   xor()  5322 MB/s
Feb 12 21:57:39.691502 kernel: raid6: avx2x2   gen() 15272 MB/s
Feb 12 21:57:39.709487 kernel: raid6: avx2x2   xor() 12234 MB/s
Feb 12 21:57:39.730499 kernel: raid6: avx2x1   gen() 10040 MB/s
Feb 12 21:57:39.748494 kernel: raid6: avx2x1   xor() 13351 MB/s
Feb 12 21:57:39.773428 kernel: raid6: sse2x4   gen()  5047 MB/s
Feb 12 21:57:39.796639 kernel: raid6: sse2x4   xor()  2816 MB/s
Feb 12 21:57:39.814492 kernel: raid6: sse2x2   gen()  4983 MB/s
Feb 12 21:57:39.832481 kernel: raid6: sse2x2   xor()  4661 MB/s
Feb 12 21:57:39.850491 kernel: raid6: sse2x1   gen()  8074 MB/s
Feb 12 21:57:39.869061 kernel: raid6: sse2x1   xor()  3647 MB/s
Feb 12 21:57:39.869227 kernel: raid6: using algorithm avx2x2 gen() 15272 MB/s
Feb 12 21:57:39.869262 kernel: raid6: .... xor() 12234 MB/s, rmw enabled
Feb 12 21:57:39.872091 kernel: raid6: using avx512x2 recovery algorithm
Feb 12 21:57:39.917618 kernel: xor: automatically using best checksumming function   avx       
Feb 12 21:57:40.046476 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 12 21:57:40.055768 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 21:57:40.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:40.058462 systemd[1]: Starting systemd-udevd.service...
Feb 12 21:57:40.057000 audit: BPF prog-id=7 op=LOAD
Feb 12 21:57:40.057000 audit: BPF prog-id=8 op=LOAD
Feb 12 21:57:40.063931 kernel: audit: type=1130 audit(1707775060.056:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:40.078347 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Feb 12 21:57:40.084087 systemd[1]: Started systemd-udevd.service.
Feb 12 21:57:40.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:40.086160 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 21:57:40.112237 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation
Feb 12 21:57:40.149250 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 21:57:40.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:40.151935 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 21:57:40.212409 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 21:57:40.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:40.292464 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 21:57:40.322259 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 12 21:57:40.322382 kernel: AES CTR mode by8 optimization enabled
Feb 12 21:57:40.322403 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 12 21:57:40.322628 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 12 21:57:40.339879 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 12 21:57:40.340141 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 12 21:57:40.340162 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 12 21:57:40.341465 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 12 21:57:40.343458 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:49:f4:de:8c:6b
Feb 12 21:57:40.345490 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 21:57:40.345525 kernel: GPT:9289727 != 16777215
Feb 12 21:57:40.345550 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 21:57:40.345567 kernel: GPT:9289727 != 16777215
Feb 12 21:57:40.345583 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 21:57:40.345599 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:57:40.350207 (udev-worker)[442]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:57:40.506670 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (443)
Feb 12 21:57:40.480990 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 21:57:40.564350 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 21:57:40.573863 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 21:57:40.583131 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 21:57:40.583263 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 21:57:40.588352 systemd[1]: Starting disk-uuid.service...
Feb 12 21:57:40.597401 disk-uuid[594]: Primary Header is updated.
Feb 12 21:57:40.597401 disk-uuid[594]: Secondary Entries is updated.
Feb 12 21:57:40.597401 disk-uuid[594]: Secondary Header is updated.
Feb 12 21:57:40.606465 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:57:40.615469 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:57:40.621469 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:57:41.619216 disk-uuid[595]: The operation has completed successfully.
Feb 12 21:57:41.620799 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:57:41.815747 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 21:57:41.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:41.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:41.815861 systemd[1]: Finished disk-uuid.service.
Feb 12 21:57:41.829685 systemd[1]: Starting verity-setup.service...
Feb 12 21:57:41.862475 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 12 21:57:41.998535 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 21:57:42.000583 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 21:57:42.003254 systemd[1]: Finished verity-setup.service.
Feb 12 21:57:42.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:42.164480 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 21:57:42.165879 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 21:57:42.167983 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 21:57:42.170204 systemd[1]: Starting ignition-setup.service...
Feb 12 21:57:42.172244 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 21:57:42.196151 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 21:57:42.196207 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 21:57:42.196225 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 21:57:42.207593 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 21:57:42.223599 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 21:57:42.257947 systemd[1]: Finished ignition-setup.service.
Feb 12 21:57:42.261242 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 21:57:42.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:42.269803 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 21:57:42.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:42.272000 audit: BPF prog-id=9 op=LOAD
Feb 12 21:57:42.274003 systemd[1]: Starting systemd-networkd.service...
Feb 12 21:57:42.306794 systemd-networkd[1107]: lo: Link UP
Feb 12 21:57:42.306806 systemd-networkd[1107]: lo: Gained carrier
Feb 12 21:57:42.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:42.308953 systemd-networkd[1107]: Enumeration completed
Feb 12 21:57:42.309070 systemd[1]: Started systemd-networkd.service.
Feb 12 21:57:42.310093 systemd-networkd[1107]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 21:57:42.310323 systemd[1]: Reached target network.target.
Feb 12 21:57:42.314487 systemd[1]: Starting iscsiuio.service...
Feb 12 21:57:42.321929 systemd-networkd[1107]: eth0: Link UP
Feb 12 21:57:42.321934 systemd-networkd[1107]: eth0: Gained carrier
Feb 12 21:57:42.325702 systemd[1]: Started iscsiuio.service.
Feb 12 21:57:42.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:42.329975 systemd[1]: Starting iscsid.service...
Feb 12 21:57:42.336275 iscsid[1112]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 21:57:42.336275 iscsid[1112]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 12 21:57:42.336275 iscsid[1112]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 21:57:42.336275 iscsid[1112]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 21:57:42.336275 iscsid[1112]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 21:57:42.352233 iscsid[1112]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 21:57:42.347773 systemd-networkd[1107]: eth0: DHCPv4 address 172.31.21.40/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 12 21:57:42.349095 systemd[1]: Started iscsid.service.
Feb 12 21:57:42.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:42.358970 systemd[1]: Starting dracut-initqueue.service...
Feb 12 21:57:42.379092 systemd[1]: Finished dracut-initqueue.service.
Feb 12 21:57:42.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:42.380955 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 21:57:42.384197 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 21:57:42.384256 systemd[1]: Reached target remote-fs.target.
Feb 12 21:57:42.389346 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 21:57:42.408938 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 21:57:42.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.008737 ignition[1103]: Ignition 2.14.0
Feb 12 21:57:43.008753 ignition[1103]: Stage: fetch-offline
Feb 12 21:57:43.008895 ignition[1103]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:57:43.008937 ignition[1103]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:57:43.027720 ignition[1103]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:57:43.028361 ignition[1103]: Ignition finished successfully
Feb 12 21:57:43.031232 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 21:57:43.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.033799 systemd[1]: Starting ignition-fetch.service...
Feb 12 21:57:43.043696 ignition[1131]: Ignition 2.14.0
Feb 12 21:57:43.043709 ignition[1131]: Stage: fetch
Feb 12 21:57:43.044236 ignition[1131]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:57:43.044403 ignition[1131]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:57:43.058602 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:57:43.060091 ignition[1131]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:57:43.080178 ignition[1131]: INFO     : PUT result: OK
Feb 12 21:57:43.085115 ignition[1131]: DEBUG    : parsed url from cmdline: ""
Feb 12 21:57:43.085115 ignition[1131]: INFO     : no config URL provided
Feb 12 21:57:43.085115 ignition[1131]: INFO     : reading system config file "/usr/lib/ignition/user.ign"
Feb 12 21:57:43.091560 ignition[1131]: INFO     : no config at "/usr/lib/ignition/user.ign"
Feb 12 21:57:43.091560 ignition[1131]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:57:43.091560 ignition[1131]: INFO     : PUT result: OK
Feb 12 21:57:43.095858 ignition[1131]: INFO     : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 12 21:57:43.097398 ignition[1131]: INFO     : GET result: OK
Feb 12 21:57:43.098413 ignition[1131]: DEBUG    : parsing config with SHA512: e56a00bb6a20c4a2b6f3823a748b13847d968594d2d92393135bc05a444daf2198a149cb9cbc2c3f5afbaecdf83f8e49b077acadd96920a53d37c75c1f846a2a
Feb 12 21:57:43.150828 unknown[1131]: fetched base config from "system"
Feb 12 21:57:43.150842 unknown[1131]: fetched base config from "system"
Feb 12 21:57:43.150852 unknown[1131]: fetched user config from "aws"
Feb 12 21:57:43.155161 ignition[1131]: fetch: fetch complete
Feb 12 21:57:43.155173 ignition[1131]: fetch: fetch passed
Feb 12 21:57:43.155248 ignition[1131]: Ignition finished successfully
Feb 12 21:57:43.159502 systemd[1]: Finished ignition-fetch.service.
Feb 12 21:57:43.163474 kernel: kauditd_printk_skb: 17 callbacks suppressed
Feb 12 21:57:43.163569 kernel: audit: type=1130 audit(1707775063.160:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.162125 systemd[1]: Starting ignition-kargs.service...
Feb 12 21:57:43.188817 ignition[1137]: Ignition 2.14.0
Feb 12 21:57:43.188830 ignition[1137]: Stage: kargs
Feb 12 21:57:43.189022 ignition[1137]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:57:43.189053 ignition[1137]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:57:43.198779 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:57:43.200481 ignition[1137]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:57:43.203242 ignition[1137]: INFO     : PUT result: OK
Feb 12 21:57:43.206223 ignition[1137]: kargs: kargs passed
Feb 12 21:57:43.206357 ignition[1137]: Ignition finished successfully
Feb 12 21:57:43.208707 systemd[1]: Finished ignition-kargs.service.
Feb 12 21:57:43.218911 kernel: audit: type=1130 audit(1707775063.208:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.209834 systemd[1]: Starting ignition-disks.service...
Feb 12 21:57:43.221360 ignition[1143]: Ignition 2.14.0
Feb 12 21:57:43.221371 ignition[1143]: Stage: disks
Feb 12 21:57:43.221532 ignition[1143]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:57:43.221552 ignition[1143]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:57:43.234218 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:57:43.236258 ignition[1143]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:57:43.238590 ignition[1143]: INFO     : PUT result: OK
Feb 12 21:57:43.242399 ignition[1143]: disks: disks passed
Feb 12 21:57:43.242483 ignition[1143]: Ignition finished successfully
Feb 12 21:57:43.245038 systemd[1]: Finished ignition-disks.service.
Feb 12 21:57:43.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.246312 systemd[1]: Reached target initrd-root-device.target.
Feb 12 21:57:43.254473 kernel: audit: type=1130 audit(1707775063.245:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.252509 systemd[1]: Reached target local-fs-pre.target.
Feb 12 21:57:43.254383 systemd[1]: Reached target local-fs.target.
Feb 12 21:57:43.255360 systemd[1]: Reached target sysinit.target.
Feb 12 21:57:43.257968 systemd[1]: Reached target basic.target.
Feb 12 21:57:43.260771 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 21:57:43.304824 systemd-fsck[1151]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 12 21:57:43.309255 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 21:57:43.318513 kernel: audit: type=1130 audit(1707775063.310:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.312053 systemd[1]: Mounting sysroot.mount...
Feb 12 21:57:43.332053 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 21:57:43.332293 systemd[1]: Mounted sysroot.mount.
Feb 12 21:57:43.335003 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 21:57:43.352718 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 21:57:43.354594 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 21:57:43.354667 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 21:57:43.354705 systemd[1]: Reached target ignition-diskful.target.
Feb 12 21:57:43.358570 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 21:57:43.378313 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 21:57:43.384114 systemd[1]: Starting initrd-setup-root.service...
Feb 12 21:57:43.398467 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1168)
Feb 12 21:57:43.402924 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 21:57:43.403145 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 21:57:43.403178 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 21:57:43.412857 initrd-setup-root[1173]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 21:57:43.416845 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 21:57:43.420378 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 21:57:43.445583 initrd-setup-root[1199]: cut: /sysroot/etc/group: No such file or directory
Feb 12 21:57:43.451100 initrd-setup-root[1207]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 21:57:43.456432 initrd-setup-root[1215]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 21:57:43.639440 systemd[1]: Finished initrd-setup-root.service.
Feb 12 21:57:43.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.641809 systemd[1]: Starting ignition-mount.service...
Feb 12 21:57:43.648971 kernel: audit: type=1130 audit(1707775063.640:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.649973 systemd[1]: Starting sysroot-boot.service...
Feb 12 21:57:43.656133 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 12 21:57:43.656252 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 12 21:57:43.691782 ignition[1234]: INFO     : Ignition 2.14.0
Feb 12 21:57:43.693216 ignition[1234]: INFO     : Stage: mount
Feb 12 21:57:43.693216 ignition[1234]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:57:43.693216 ignition[1234]: DEBUG    : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:57:43.706959 systemd[1]: Finished sysroot-boot.service.
Feb 12 21:57:43.713544 kernel: audit: type=1130 audit(1707775063.708:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.713612 ignition[1234]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:57:43.713612 ignition[1234]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:57:43.713612 ignition[1234]: INFO     : PUT result: OK
Feb 12 21:57:43.717988 ignition[1234]: INFO     : mount: mount passed
Feb 12 21:57:43.717988 ignition[1234]: INFO     : Ignition finished successfully
Feb 12 21:57:43.720791 systemd[1]: Finished ignition-mount.service.
Feb 12 21:57:43.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.725514 systemd[1]: Starting ignition-files.service...
Feb 12 21:57:43.730689 kernel: audit: type=1130 audit(1707775063.720:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:57:43.734855 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 21:57:43.748469 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1243)
Feb 12 21:57:43.748528 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 21:57:43.751226 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 21:57:43.751256 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 21:57:43.757470 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 21:57:43.760544 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 21:57:43.782646 ignition[1262]: INFO     : Ignition 2.14.0
Feb 12 21:57:43.782646 ignition[1262]: INFO     : Stage: files
Feb 12 21:57:43.785867 ignition[1262]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:57:43.785867 ignition[1262]: DEBUG    : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:57:43.801560 ignition[1262]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:57:43.803342 ignition[1262]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:57:43.806029 ignition[1262]: INFO     : PUT result: OK
Feb 12 21:57:43.810648 ignition[1262]: DEBUG    : files: compiled without relabeling support, skipping
Feb 12 21:57:43.815625 ignition[1262]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 12 21:57:43.815625 ignition[1262]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 21:57:43.836061 ignition[1262]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 21:57:43.837986 ignition[1262]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 12 21:57:43.840052 unknown[1262]: wrote ssh authorized keys file for user: core
Feb 12 21:57:43.841533 ignition[1262]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 21:57:43.844200 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 21:57:43.846984 ignition[1262]: INFO     : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 12 21:57:44.165618 systemd-networkd[1107]: eth0: Gained IPv6LL
Feb 12 21:57:44.315468 ignition[1262]: INFO     : GET result: OK
Feb 12 21:57:44.648178 ignition[1262]: DEBUG    : file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 12 21:57:44.651512 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
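The "file matches expected sum of" lines record Ignition checking each fetched file against the SHA-512 digest given in the config before writing it under /sysroot. A minimal sketch of that verification step with Python's hashlib (the function name is invented for the example):

import hashlib

def verify_sha512(data: bytes, expected_hex: str) -> None:
    # Hash the fetched bytes and compare against the sum from the config;
    # on a mismatch the file must not be written.
    actual = hashlib.sha512(data).hexdigest()
    if actual != expected_hex:
        raise ValueError(f"checksum mismatch: got {actual}, want {expected_hex}")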
Feb 12 21:57:44.651512 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 21:57:44.651512 ignition[1262]: INFO     : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 12 21:58:00.066371 ignition[1262]: INFO     : GET result: OK
Feb 12 21:58:00.204075 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 21:58:00.209086 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 21:58:00.209086 ignition[1262]: INFO     : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 12 21:58:00.480667 ignition[1262]: INFO     : GET result: OK
Feb 12 21:58:00.635935 ignition[1262]: DEBUG    : file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 12 21:58:00.639108 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 21:58:00.639108 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/bin/kubectl"
Feb 12 21:58:00.643393 ignition[1262]: INFO     : GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1
Feb 12 21:58:00.770669 ignition[1262]: INFO     : GET result: OK
Feb 12 21:58:01.232173 ignition[1262]: DEBUG    : file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83
Feb 12 21:58:01.235347 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 21:58:01.235347 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/etc/docker/daemon.json"
Feb 12 21:58:01.235347 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 21:58:01.235347 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 12 21:58:01.235347 ignition[1262]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:58:01.297591 ignition[1262]: INFO     : op(1): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3321866644"
Feb 12 21:58:01.309673 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1264)
Feb 12 21:58:01.310673 ignition[1262]: CRITICAL : op(1): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem3321866644": device or resource busy
Feb 12 21:58:01.310673 ignition[1262]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3321866644", trying btrfs: device or resource busy
Feb 12 21:58:01.310673 ignition[1262]: INFO     : op(2): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3321866644"
Feb 12 21:58:01.310673 ignition[1262]: INFO     : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3321866644"
Feb 12 21:58:01.346149 ignition[1262]: INFO     : op(3): [started]  unmounting "/mnt/oem3321866644"
Feb 12 21:58:01.346149 ignition[1262]: INFO     : op(3): [finished] unmounting "/mnt/oem3321866644"
Feb 12 21:58:01.346149 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
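Ops (1) and (2) above show the mount fallback at work: the OEM device is first tried as ext4, and when that fails ("device or resource busy") the same device is retried as btrfs, used, and then unmounted. A rough sketch of that try-in-order pattern, assuming root privileges and the standard mount(8) CLI (illustrative, not Ignition's code):

import subprocess
import tempfile

def mount_with_fallback(device: str, fstypes=("ext4", "btrfs")) -> str:
    # Try each filesystem type in order, as in ops (1)/(2) above, and return
    # the mountpoint of the first type that succeeds.
    mountpoint = tempfile.mkdtemp(prefix="oem")
    last_err = None
    for fstype in fstypes:
        try:
            subprocess.run(
                ["mount", "-t", fstype, device, mountpoint],
                check=True, capture_output=True,
            )
            return mountpoint
        except subprocess.CalledProcessError as err:
            last_err = err  # e.g. "device or resource busy" for the wrong type
    raise RuntimeError(f"could not mount {device}") from last_err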
Feb 12 21:58:01.346149 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/opt/bin/kubeadm"
Feb 12 21:58:01.346149 ignition[1262]: INFO     : GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Feb 12 21:58:01.343040 systemd[1]: mnt-oem3321866644.mount: Deactivated successfully.
Feb 12 21:58:01.405505 ignition[1262]: INFO     : GET result: OK
Feb 12 21:58:02.034900 ignition[1262]: DEBUG    : file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Feb 12 21:58:02.038254 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 21:58:02.038254 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/bin/kubelet"
Feb 12 21:58:02.038254 ignition[1262]: INFO     : GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Feb 12 21:58:02.105575 ignition[1262]: INFO     : GET result: OK
Feb 12 21:58:03.029299 ignition[1262]: DEBUG    : file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Feb 12 21:58:03.032341 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 21:58:03.032341 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 21:58:03.038784 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 21:58:03.038784 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 21:58:03.038784 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 21:58:03.038784 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 21:58:03.038784 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 21:58:03.038784 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 21:58:03.038784 ignition[1262]: INFO     : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 21:58:03.488501 ignition[1262]: INFO     : GET result: OK
Feb 12 21:58:03.645811 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 21:58:03.648607 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [started]  writing file "/sysroot/home/core/install.sh"
Feb 12 21:58:03.648607 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 21:58:03.648607 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(10): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 12 21:58:03.648607 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 21:58:03.648607 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(11): [started]  writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 21:58:03.648607 ignition[1262]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:58:03.675177 ignition[1262]: INFO     : op(4): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3253064204"
Feb 12 21:58:03.676970 ignition[1262]: CRITICAL : op(4): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem3253064204": device or resource busy
Feb 12 21:58:03.676970 ignition[1262]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3253064204", trying btrfs: device or resource busy
Feb 12 21:58:03.676970 ignition[1262]: INFO     : op(5): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3253064204"
Feb 12 21:58:03.683109 ignition[1262]: INFO     : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3253064204"
Feb 12 21:58:03.683109 ignition[1262]: INFO     : op(6): [started]  unmounting "/mnt/oem3253064204"
Feb 12 21:58:03.686292 ignition[1262]: INFO     : op(6): [finished] unmounting "/mnt/oem3253064204"
Feb 12 21:58:03.686292 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 21:58:03.686292 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(12): [started]  writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 12 21:58:03.686292 ignition[1262]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:58:03.699426 systemd[1]: mnt-oem3253064204.mount: Deactivated successfully.
Feb 12 21:58:03.710753 ignition[1262]: INFO     : op(7): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem2227691990"
Feb 12 21:58:03.713166 ignition[1262]: CRITICAL : op(7): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem2227691990": device or resource busy
Feb 12 21:58:03.713166 ignition[1262]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2227691990", trying btrfs: device or resource busy
Feb 12 21:58:03.713166 ignition[1262]: INFO     : op(8): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem2227691990"
Feb 12 21:58:03.719818 ignition[1262]: INFO     : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2227691990"
Feb 12 21:58:03.719818 ignition[1262]: INFO     : op(9): [started]  unmounting "/mnt/oem2227691990"
Feb 12 21:58:03.719818 ignition[1262]: INFO     : op(9): [finished] unmounting "/mnt/oem2227691990"
Feb 12 21:58:03.719818 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 12 21:58:03.719818 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(13): [started]  writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 12 21:58:03.719818 ignition[1262]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:58:03.755005 ignition[1262]: INFO     : op(a): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem2863006699"
Feb 12 21:58:03.757542 ignition[1262]: CRITICAL : op(a): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem2863006699": device or resource busy
Feb 12 21:58:03.760531 ignition[1262]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2863006699", trying btrfs: device or resource busy
Feb 12 21:58:03.760531 ignition[1262]: INFO     : op(b): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem2863006699"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2863006699"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : op(c): [started]  unmounting "/mnt/oem2863006699"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : op(c): [finished] unmounting "/mnt/oem2863006699"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(14): [started]  processing unit "coreos-metadata-sshkeys@.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(15): [started]  processing unit "amazon-ssm-agent.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(15): op(16): [started]  writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(15): op(16): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(15): [finished] processing unit "amazon-ssm-agent.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(17): [started]  processing unit "nvidia.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(17): [finished] processing unit "nvidia.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(18): [started]  processing unit "prepare-cni-plugins.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(18): op(19): [started]  writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(18): op(19): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(18): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(1a): [started]  processing unit "prepare-critools.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(1a): op(1b): [started]  writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(1a): op(1b): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 21:58:03.765359 ignition[1262]: INFO     : files: op(1a): [finished] processing unit "prepare-critools.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(1c): [started]  processing unit "prepare-helm.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(1c): op(1d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(1c): op(1d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(1c): [finished] processing unit "prepare-helm.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(1e): [started]  setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(1e): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(1f): [started]  setting preset to enabled for "amazon-ssm-agent.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(1f): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(20): [started]  setting preset to enabled for "nvidia.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(20): [finished] setting preset to enabled for "nvidia.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(21): [started]  setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(21): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(22): [started]  setting preset to enabled for "prepare-critools.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(22): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(23): [started]  setting preset to enabled for "prepare-helm.service"
Feb 12 21:58:03.814313 ignition[1262]: INFO     : files: op(23): [finished] setting preset to enabled for "prepare-helm.service"
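The "setting preset to enabled" entries correspond to systemd preset entries, one "enable <unit>" line per unit in systemd.preset(5) syntax, written under the target sysroot so the units come up enabled on first boot. A minimal sketch of writing such a file; the 20-ignition.preset filename is an assumption for illustration:

from pathlib import Path

def write_presets(sysroot: str, units: list[str]) -> None:
    # One "enable <unit>" line per unit, in systemd.preset(5) syntax.
    preset = Path(sysroot) / "etc/systemd/system-preset/20-ignition.preset"
    preset.parent.mkdir(parents=True, exist_ok=True)
    preset.write_text("".join(f"enable {unit}\n" for unit in units))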
Feb 12 21:58:03.867695 kernel: audit: type=1130 audit(1707775083.843:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:03.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:03.802154 systemd[1]: mnt-oem2863006699.mount: Deactivated successfully.
Feb 12 21:58:03.870852 ignition[1262]: INFO     : files: createResultFile: createFiles: op(24): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 12 21:58:03.870852 ignition[1262]: INFO     : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 21:58:03.870852 ignition[1262]: INFO     : files: files passed
Feb 12 21:58:03.870852 ignition[1262]: INFO     : Ignition finished successfully
Feb 12 21:58:03.833774 systemd[1]: Finished ignition-files.service.
Feb 12 21:58:03.857153 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 21:58:03.868778 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 21:58:03.870107 systemd[1]: Starting ignition-quench.service...
Feb 12 21:58:03.888111 initrd-setup-root-after-ignition[1286]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 21:58:03.890347 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 21:58:03.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:03.890685 systemd[1]: Finished ignition-quench.service.
Feb 12 21:58:03.904729 kernel: audit: type=1130 audit(1707775083.895:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:03.904766 kernel: audit: type=1131 audit(1707775083.895:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:03.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:03.895817 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 21:58:03.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:03.908804 systemd[1]: Reached target ignition-complete.target.
Feb 12 21:58:03.914930 kernel: audit: type=1130 audit(1707775083.906:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:03.917738 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 21:58:03.941675 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 21:58:03.941916 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 21:58:03.967699 kernel: audit: type=1130 audit(1707775083.944:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:03.967732 kernel: audit: type=1131 audit(1707775083.966:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:03.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:03.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:03.967222 systemd[1]: Reached target initrd-fs.target.
Feb 12 21:58:03.972598 systemd[1]: Reached target initrd.target.
Feb 12 21:58:03.979298 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 21:58:03.987862 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 21:58:04.020257 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 21:58:04.027563 kernel: audit: type=1130 audit(1707775084.019:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.029610 systemd[1]: Starting initrd-cleanup.service...
Feb 12 21:58:04.041908 systemd[1]: Stopped target nss-lookup.target.
Feb 12 21:58:04.042301 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 21:58:04.047591 systemd[1]: Stopped target timers.target.
Feb 12 21:58:04.050323 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 21:58:04.051617 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 21:58:04.065108 kernel: audit: type=1131 audit(1707775084.052:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.063973 systemd[1]: Stopped target initrd.target.
Feb 12 21:58:04.067157 systemd[1]: Stopped target basic.target.
Feb 12 21:58:04.067432 systemd[1]: Stopped target ignition-complete.target.
Feb 12 21:58:04.071035 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 21:58:04.074011 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 21:58:04.076887 systemd[1]: Stopped target remote-fs.target.
Feb 12 21:58:04.079274 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 21:58:04.081719 systemd[1]: Stopped target sysinit.target.
Feb 12 21:58:04.084057 systemd[1]: Stopped target local-fs.target.
Feb 12 21:58:04.087537 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 21:58:04.089982 systemd[1]: Stopped target swap.target.
Feb 12 21:58:04.091891 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 21:58:04.095119 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 21:58:04.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.100968 systemd[1]: Stopped target cryptsetup.target.
Feb 12 21:58:04.110869 kernel: audit: type=1131 audit(1707775084.100:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.110977 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 21:58:04.112653 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 21:58:04.117015 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 21:58:04.127723 kernel: audit: type=1131 audit(1707775084.116:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.117328 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 21:58:04.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.127860 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 21:58:04.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.132543 systemd[1]: Stopped ignition-files.service.
Feb 12 21:58:04.138567 systemd[1]: Stopping ignition-mount.service...
Feb 12 21:58:04.143779 systemd[1]: Stopping sysroot-boot.service...
Feb 12 21:58:04.145411 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 21:58:04.148797 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 21:58:04.163073 ignition[1300]: INFO     : Ignition 2.14.0
Feb 12 21:58:04.163073 ignition[1300]: INFO     : Stage: umount
Feb 12 21:58:04.163073 ignition[1300]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:58:04.163073 ignition[1300]: DEBUG    : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:58:04.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.168284 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 21:58:04.179144 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 21:58:04.186049 ignition[1300]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:58:04.186049 ignition[1300]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:58:04.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.205924 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 21:58:04.207311 ignition[1300]: INFO     : PUT result: OK
Feb 12 21:58:04.208511 systemd[1]: Finished initrd-cleanup.service.
Feb 12 21:58:04.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.216489 ignition[1300]: INFO     : umount: umount passed
Feb 12 21:58:04.218181 ignition[1300]: INFO     : Ignition finished successfully
Feb 12 21:58:04.221982 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 21:58:04.222259 systemd[1]: Stopped ignition-mount.service.
Feb 12 21:58:04.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.226748 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 21:58:04.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.226823 systemd[1]: Stopped ignition-disks.service.
Feb 12 21:58:04.230611 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 21:58:04.230689 systemd[1]: Stopped ignition-kargs.service.
Feb 12 21:58:04.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.235629 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 21:58:04.235701 systemd[1]: Stopped ignition-fetch.service.
Feb 12 21:58:04.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.237995 systemd[1]: Stopped target network.target.
Feb 12 21:58:04.241312 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 21:58:04.243756 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 21:58:04.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.246752 systemd[1]: Stopped target paths.target.
Feb 12 21:58:04.249150 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 21:58:04.251594 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 21:58:04.257297 systemd[1]: Stopped target slices.target.
Feb 12 21:58:04.259139 systemd[1]: Stopped target sockets.target.
Feb 12 21:58:04.261111 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 21:58:04.261173 systemd[1]: Closed iscsid.socket.
Feb 12 21:58:04.264709 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 21:58:04.264877 systemd[1]: Closed iscsiuio.socket.
Feb 12 21:58:04.268924 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 21:58:04.269014 systemd[1]: Stopped ignition-setup.service.
Feb 12 21:58:04.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.272742 systemd[1]: Stopping systemd-networkd.service...
Feb 12 21:58:04.274844 systemd[1]: Stopping systemd-resolved.service...
Feb 12 21:58:04.276999 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 21:58:04.277612 systemd-networkd[1107]: eth0: DHCPv6 lease lost
Feb 12 21:58:04.285797 systemd[1]: Stopped sysroot-boot.service.
Feb 12 21:58:04.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.289426 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 21:58:04.291013 systemd[1]: Stopped systemd-networkd.service.
Feb 12 21:58:04.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.294437 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 21:58:04.296892 systemd[1]: Stopped systemd-resolved.service.
Feb 12 21:58:04.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.300000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 21:58:04.300927 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 21:58:04.300991 systemd[1]: Closed systemd-networkd.socket.
Feb 12 21:58:04.305000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 21:58:04.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.303392 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 21:58:04.303477 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 21:58:04.307165 systemd[1]: Stopping network-cleanup.service...
Feb 12 21:58:04.311145 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 21:58:04.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.311229 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 21:58:04.314879 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 21:58:04.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.315979 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 21:58:04.318581 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 21:58:04.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.318658 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 21:58:04.323015 systemd[1]: Stopping systemd-udevd.service...
Feb 12 21:58:04.327238 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 21:58:04.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.327503 systemd[1]: Stopped systemd-udevd.service.
Feb 12 21:58:04.331200 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 21:58:04.331289 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 21:58:04.335117 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 21:58:04.335185 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 21:58:04.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.346137 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 21:58:04.348740 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 21:58:04.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.353037 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 21:58:04.353185 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 21:58:04.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.357236 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 21:58:04.360079 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 21:58:04.363183 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 21:58:04.382126 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 12 21:58:04.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.382223 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 12 21:58:04.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.385932 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 21:58:04.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.385991 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 21:58:04.386113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 21:58:04.386145 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 21:58:04.397455 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 21:58:04.398769 systemd[1]: Stopped network-cleanup.service.
Feb 12 21:58:04.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.400923 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 21:58:04.402660 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 21:58:04.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.405160 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 21:58:04.408949 systemd[1]: Starting initrd-switch-root.service...
Feb 12 21:58:04.424602 systemd[1]: Switching root.
Feb 12 21:58:04.459800 iscsid[1112]: iscsid shutting down.
Feb 12 21:58:04.460835 systemd-journald[184]: Received SIGTERM from PID 1 (n/a).
Feb 12 21:58:04.460919 systemd-journald[184]: Journal stopped
Feb 12 21:58:08.592247 kernel: SELinux:  Class mctp_socket not defined in policy.
Feb 12 21:58:08.592332 kernel: SELinux:  Class anon_inode not defined in policy.
Feb 12 21:58:08.592359 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 21:58:08.592378 kernel: SELinux:  policy capability network_peer_controls=1
Feb 12 21:58:08.592407 kernel: SELinux:  policy capability open_perms=1
Feb 12 21:58:08.592427 kernel: SELinux:  policy capability extended_socket_class=1
Feb 12 21:58:08.592464 kernel: SELinux:  policy capability always_check_network=0
Feb 12 21:58:08.592485 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 12 21:58:08.592505 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 12 21:58:08.592526 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 12 21:58:08.592549 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 12 21:58:08.592570 systemd[1]: Successfully loaded SELinux policy in 56.815ms.
Feb 12 21:58:08.592597 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.299ms.
Feb 12 21:58:08.592619 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 21:58:08.592640 systemd[1]: Detected virtualization amazon.
Feb 12 21:58:08.592662 systemd[1]: Detected architecture x86-64.
Feb 12 21:58:08.592685 systemd[1]: Detected first boot.
Feb 12 21:58:08.592708 systemd[1]: Initializing machine ID from VM UUID.
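"Initializing machine ID from VM UUID" means systemd derives /etc/machine-id on first boot from the hypervisor-supplied UUID, which on x86 guests is exposed through DMI. A small sketch of reading that value (path assumed from the standard sysfs layout; systemd's own logic has additional fallbacks):

import uuid
from pathlib import Path

def vm_uuid() -> uuid.UUID:
    # The hypervisor-provided UUID as exposed by the DMI/SMBIOS tables.
    raw = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    return uuid.UUID(raw)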
Feb 12 21:58:08.592727 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 21:58:08.592748 systemd[1]: Populated /etc with preset unit settings.
Feb 12 21:58:08.592769 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:58:08.592791 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:58:08.592814 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:58:08.592835 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 21:58:08.592859 systemd[1]: Stopped iscsiuio.service.
Feb 12 21:58:08.592879 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 21:58:08.592899 systemd[1]: Stopped iscsid.service.
Feb 12 21:58:08.592919 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 12 21:58:08.592939 systemd[1]: Stopped initrd-switch-root.service.
Feb 12 21:58:08.592960 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 12 21:58:08.592981 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 21:58:08.593002 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 21:58:08.593022 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 12 21:58:08.593044 systemd[1]: Created slice system-getty.slice.
Feb 12 21:58:08.593066 systemd[1]: Created slice system-modprobe.slice.
Feb 12 21:58:08.593086 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 21:58:08.593109 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 21:58:08.593129 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 21:58:08.593150 systemd[1]: Created slice user.slice.
Feb 12 21:58:08.593171 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 21:58:08.593192 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 21:58:08.593216 systemd[1]: Set up automount boot.automount.
Feb 12 21:58:08.593237 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 21:58:08.593260 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 21:58:08.593281 systemd[1]: Stopped target initrd-fs.target.
Feb 12 21:58:08.593301 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 21:58:08.593322 systemd[1]: Reached target integritysetup.target.
Feb 12 21:58:08.593342 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 21:58:08.593363 systemd[1]: Reached target remote-fs.target.
Feb 12 21:58:08.593390 systemd[1]: Reached target slices.target.
Feb 12 21:58:08.593421 systemd[1]: Reached target swap.target.
Feb 12 21:58:08.605408 systemd[1]: Reached target torcx.target.
Feb 12 21:58:08.605628 systemd[1]: Reached target veritysetup.target.
Feb 12 21:58:08.605662 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 21:58:08.605689 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 21:58:08.605717 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 21:58:08.605739 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 21:58:08.605760 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 21:58:08.605781 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 21:58:08.605802 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 21:58:08.605823 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 21:58:08.605844 systemd[1]: Mounting media.mount...
Feb 12 21:58:08.605866 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 21:58:08.605888 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 21:58:08.605911 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 21:58:08.605934 systemd[1]: Mounting tmp.mount...
Feb 12 21:58:08.605956 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 21:58:08.605977 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 21:58:08.605998 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 21:58:08.606019 systemd[1]: Starting modprobe@configfs.service...
Feb 12 21:58:08.606041 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 21:58:08.606063 systemd[1]: Starting modprobe@drm.service...
Feb 12 21:58:08.606085 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 21:58:08.606109 systemd[1]: Starting modprobe@fuse.service...
Feb 12 21:58:08.606130 systemd[1]: Starting modprobe@loop.service...
Feb 12 21:58:08.606153 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 21:58:08.606175 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 21:58:08.606196 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 21:58:08.606283 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 21:58:08.606307 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 21:58:08.606329 systemd[1]: Stopped systemd-journald.service.
Feb 12 21:58:08.606354 systemd[1]: Starting systemd-journald.service...
Feb 12 21:58:08.606376 systemd[1]: Starting systemd-modules-load.service...
Feb 12 21:58:08.606396 systemd[1]: Starting systemd-network-generator.service...
Feb 12 21:58:08.606417 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 21:58:08.606438 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 21:58:08.606469 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 21:58:08.606487 systemd[1]: Stopped verity-setup.service.
Feb 12 21:58:08.606506 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 21:58:08.616514 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 21:58:08.616554 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 21:58:08.616582 systemd[1]: Mounted media.mount.
Feb 12 21:58:08.616602 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 21:58:08.616622 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 21:58:08.616642 systemd[1]: Mounted tmp.mount.
Feb 12 21:58:08.616662 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 21:58:08.616683 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 21:58:08.616702 systemd[1]: Finished modprobe@configfs.service.
Feb 12 21:58:08.616727 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 21:58:08.616746 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 21:58:08.616773 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 21:58:08.616790 systemd[1]: Finished modprobe@drm.service.
Feb 12 21:58:08.616932 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 21:58:08.616961 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 21:58:08.617100 systemd[1]: Finished systemd-modules-load.service.
Feb 12 21:58:08.617128 systemd[1]: Finished systemd-network-generator.service.
Feb 12 21:58:08.617148 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 21:58:08.617167 systemd[1]: Reached target network-pre.target.
Feb 12 21:58:08.617186 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 21:58:08.617205 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 21:58:08.617320 kernel: fuse: init (API version 7.34)
Feb 12 21:58:08.617341 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 21:58:08.617360 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 21:58:08.617383 kernel: loop: module loaded
Feb 12 21:58:08.617401 systemd[1]: Starting systemd-random-seed.service...
Feb 12 21:58:08.617420 systemd[1]: Starting systemd-sysctl.service...
Feb 12 21:58:08.617527 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 21:58:08.617547 systemd[1]: Finished modprobe@fuse.service.
Feb 12 21:58:08.617567 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 21:58:08.617591 systemd[1]: Finished modprobe@loop.service.
Feb 12 21:58:08.618038 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 21:58:08.618071 systemd[1]: Finished systemd-random-seed.service.
Feb 12 21:58:08.618831 systemd[1]: Reached target first-boot-complete.target.
Feb 12 21:58:08.618858 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 21:58:08.618907 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 21:58:08.619594 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 21:58:08.619647 systemd-journald[1413]: Journal started
Feb 12 21:58:08.619730 systemd-journald[1413]: Runtime Journal (/run/log/journal/ec29ff7380e80b8297dc46cce644b432) is 4.8M, max 38.7M, 33.9M free.
Feb 12 21:58:04.634000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 21:58:04.700000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 21:58:04.700000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 21:58:04.700000 audit: BPF prog-id=10 op=LOAD
Feb 12 21:58:04.700000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 21:58:04.700000 audit: BPF prog-id=11 op=LOAD
Feb 12 21:58:04.700000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 21:58:04.835000 audit[1333]: AVC avc:  denied  { associate } for  pid=1333 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 21:58:04.835000 audit[1333]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878dc a1=c00002ae40 a2=c000029b00 a3=32 items=0 ppid=1316 pid=1333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:58:04.835000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 21:58:04.837000 audit[1333]: AVC avc:  denied  { associate } for  pid=1333 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 21:58:04.837000 audit[1333]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879b5 a2=1ed a3=0 items=2 ppid=1316 pid=1333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:58:04.837000 audit: CWD cwd="/"
Feb 12 21:58:04.837000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:04.837000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:04.837000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 21:58:08.203000 audit: BPF prog-id=12 op=LOAD
Feb 12 21:58:08.204000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 21:58:08.204000 audit: BPF prog-id=13 op=LOAD
Feb 12 21:58:08.204000 audit: BPF prog-id=14 op=LOAD
Feb 12 21:58:08.204000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 21:58:08.204000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 21:58:08.205000 audit: BPF prog-id=15 op=LOAD
Feb 12 21:58:08.205000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 21:58:08.205000 audit: BPF prog-id=16 op=LOAD
Feb 12 21:58:08.205000 audit: BPF prog-id=17 op=LOAD
Feb 12 21:58:08.205000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 21:58:08.205000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 21:58:08.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.215000 audit: BPF prog-id=15 op=UNLOAD
Feb 12 21:58:08.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.425000 audit: BPF prog-id=18 op=LOAD
Feb 12 21:58:08.425000 audit: BPF prog-id=19 op=LOAD
Feb 12 21:58:08.425000 audit: BPF prog-id=20 op=LOAD
Feb 12 21:58:08.425000 audit: BPF prog-id=16 op=UNLOAD
Feb 12 21:58:08.425000 audit: BPF prog-id=17 op=UNLOAD
Feb 12 21:58:08.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.574000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 21:58:08.574000 audit[1413]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffe68ad140 a2=4000 a3=7fffe68ad1dc items=0 ppid=1 pid=1413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:58:08.574000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 21:58:08.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.831174 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:58:08.202584 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 21:58:08.623596 systemd[1]: Started systemd-journald.service.
Feb 12 21:58:04.831818 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 21:58:08.207165 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 12 21:58:08.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:04.831853 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 21:58:04.831897 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 21:58:04.831913 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 21:58:04.831957 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 21:58:04.831977 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 21:58:04.832327 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 21:58:04.832382 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 21:58:04.832401 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 21:58:04.833879 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 21:58:04.833932 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 21:58:04.833965 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 12 21:58:04.833989 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 12 21:58:04.834018 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 12 21:58:04.834041 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 12 21:58:08.626151 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 21:58:07.616613 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:07Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 21:58:07.616924 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:07Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 21:58:07.617077 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:07Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 21:58:07.617275 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:07Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 21:58:07.617324 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:07Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 12 21:58:07.617381 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-12T21:58:07Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 12 21:58:08.641941 systemd[1]: Finished systemd-sysctl.service.
Feb 12 21:58:08.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.656544 systemd-journald[1413]: Time spent on flushing to /var/log/journal/ec29ff7380e80b8297dc46cce644b432 is 117.962ms for 1225 entries.
Feb 12 21:58:08.656544 systemd-journald[1413]: System Journal (/var/log/journal/ec29ff7380e80b8297dc46cce644b432) is 8.0M, max 195.6M, 187.6M free.
Feb 12 21:58:08.783500 systemd-journald[1413]: Received client request to flush runtime journal.
Feb 12 21:58:08.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.720398 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 21:58:08.724145 systemd[1]: Starting systemd-sysusers.service...
Feb 12 21:58:08.768732 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 21:58:08.771544 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 21:58:08.776296 systemd[1]: Finished systemd-sysusers.service.
Feb 12 21:58:08.779327 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 21:58:08.786877 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 21:58:08.807218 udevadm[1448]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 12 21:58:08.851827 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 21:58:08.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:08.855248 kernel: kauditd_printk_skb: 101 callbacks suppressed
Feb 12 21:58:08.855369 kernel: audit: type=1130 audit(1707775088.853:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:09.546205 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 21:58:09.562532 kernel: audit: type=1130 audit(1707775089.549:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:09.562637 kernel: audit: type=1334 audit(1707775089.560:139): prog-id=21 op=LOAD
Feb 12 21:58:09.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:09.560000 audit: BPF prog-id=21 op=LOAD
Feb 12 21:58:09.562000 audit: BPF prog-id=22 op=LOAD
Feb 12 21:58:09.565604 kernel: audit: type=1334 audit(1707775089.562:140): prog-id=22 op=LOAD
Feb 12 21:58:09.565671 kernel: audit: type=1334 audit(1707775089.562:141): prog-id=7 op=UNLOAD
Feb 12 21:58:09.562000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 21:58:09.563954 systemd[1]: Starting systemd-udevd.service...
Feb 12 21:58:09.566231 kernel: audit: type=1334 audit(1707775089.562:142): prog-id=8 op=UNLOAD
Feb 12 21:58:09.562000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 21:58:09.590493 systemd-udevd[1452]: Using default interface naming scheme 'v252'.
Feb 12 21:58:09.621385 systemd[1]: Started systemd-udevd.service.
Feb 12 21:58:09.636107 kernel: audit: type=1130 audit(1707775089.622:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:09.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:09.627191 systemd[1]: Starting systemd-networkd.service...
Feb 12 21:58:09.641940 kernel: audit: type=1334 audit(1707775089.624:144): prog-id=23 op=LOAD
Feb 12 21:58:09.624000 audit: BPF prog-id=23 op=LOAD
Feb 12 21:58:09.658467 kernel: audit: type=1334 audit(1707775089.651:145): prog-id=24 op=LOAD
Feb 12 21:58:09.658552 kernel: audit: type=1334 audit(1707775089.651:146): prog-id=25 op=LOAD
Feb 12 21:58:09.651000 audit: BPF prog-id=24 op=LOAD
Feb 12 21:58:09.651000 audit: BPF prog-id=25 op=LOAD
Feb 12 21:58:09.651000 audit: BPF prog-id=26 op=LOAD
Feb 12 21:58:09.652930 systemd[1]: Starting systemd-userdbd.service...
Feb 12 21:58:09.713846 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 12 21:58:09.713934 (udev-worker)[1460]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:58:09.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:09.726464 systemd[1]: Started systemd-userdbd.service.
Feb 12 21:58:09.809474 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 12 21:58:09.816211 kernel: ACPI: button: Power Button [PWRF]
Feb 12 21:58:09.816311 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 12 21:58:09.816341 kernel: ACPI: button: Sleep Button [SLPF]
Feb 12 21:58:09.854972 systemd-networkd[1461]: lo: Link UP
Feb 12 21:58:09.855393 systemd-networkd[1461]: lo: Gained carrier
Feb 12 21:58:09.856295 systemd-networkd[1461]: Enumeration completed
Feb 12 21:58:09.858582 systemd[1]: Started systemd-networkd.service.
Feb 12 21:58:09.860133 systemd-networkd[1461]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 21:58:09.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:09.867333 systemd-networkd[1461]: eth0: Link UP
Feb 12 21:58:09.867589 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 21:58:09.863000 audit[1454]: AVC avc:  denied  { confidentiality } for  pid=1454 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 21:58:09.867952 systemd-networkd[1461]: eth0: Gained carrier
Feb 12 21:58:09.878668 systemd-networkd[1461]: eth0: DHCPv4 address 172.31.21.40/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 12 21:58:09.882749 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 21:58:09.863000 audit[1454]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560f66188520 a1=32194 a2=7f77c543ebc5 a3=5 items=108 ppid=1452 pid=1454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:58:09.863000 audit: CWD cwd="/"
Feb 12 21:58:09.863000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=1 name=(null) inode=13152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=2 name=(null) inode=13152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=3 name=(null) inode=13153 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=4 name=(null) inode=13152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=5 name=(null) inode=13154 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=6 name=(null) inode=13152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=7 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=8 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=9 name=(null) inode=13156 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=10 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=11 name=(null) inode=13157 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=12 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=13 name=(null) inode=13158 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=14 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=15 name=(null) inode=13159 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=16 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=17 name=(null) inode=13160 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=18 name=(null) inode=13152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=19 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=20 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=21 name=(null) inode=13162 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=22 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=23 name=(null) inode=13163 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=24 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=25 name=(null) inode=13164 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=26 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=27 name=(null) inode=13165 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=28 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=29 name=(null) inode=13166 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=30 name=(null) inode=13152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=31 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=32 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=33 name=(null) inode=13168 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=34 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=35 name=(null) inode=13169 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=36 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=37 name=(null) inode=13170 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=38 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=39 name=(null) inode=13171 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=40 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=41 name=(null) inode=13172 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=42 name=(null) inode=13152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=43 name=(null) inode=13173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=44 name=(null) inode=13173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=45 name=(null) inode=13174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=46 name=(null) inode=13173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=47 name=(null) inode=13175 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=48 name=(null) inode=13173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=49 name=(null) inode=13176 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=50 name=(null) inode=13173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=51 name=(null) inode=13177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=52 name=(null) inode=13173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=53 name=(null) inode=13178 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=55 name=(null) inode=13179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=56 name=(null) inode=13179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=57 name=(null) inode=13180 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=58 name=(null) inode=13179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=59 name=(null) inode=13181 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=60 name=(null) inode=13179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=61 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=62 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=63 name=(null) inode=13183 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=64 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=65 name=(null) inode=13184 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=66 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=67 name=(null) inode=13185 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=68 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=69 name=(null) inode=13186 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=70 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=71 name=(null) inode=13187 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=72 name=(null) inode=13179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=73 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=74 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=75 name=(null) inode=13189 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=76 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=77 name=(null) inode=13190 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=78 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=79 name=(null) inode=13191 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=80 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=81 name=(null) inode=13192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=82 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=83 name=(null) inode=13193 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=84 name=(null) inode=13179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=85 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=86 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=87 name=(null) inode=13195 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=88 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=89 name=(null) inode=13196 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=90 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=91 name=(null) inode=13197 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=92 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=93 name=(null) inode=13198 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=94 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=95 name=(null) inode=13199 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=96 name=(null) inode=13179 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=97 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=98 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=99 name=(null) inode=13201 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=100 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=101 name=(null) inode=13202 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=102 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=103 name=(null) inode=13203 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=104 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=105 name=(null) inode=13204 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=106 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PATH item=107 name=(null) inode=13205 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:58:09.863000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 12 21:58:09.935624 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 12 21:58:09.943469 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Feb 12 21:58:09.959467 kernel: mousedev: PS/2 mouse device common for all mice
Feb 12 21:58:09.965475 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1465)
Feb 12 21:58:10.103195 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 21:58:10.209996 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 21:58:10.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:10.212733 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 21:58:10.238655 lvm[1566]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 21:58:10.267897 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 21:58:10.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:10.269362 systemd[1]: Reached target cryptsetup.target.
Feb 12 21:58:10.271943 systemd[1]: Starting lvm2-activation.service...
Feb 12 21:58:10.277302 lvm[1567]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 21:58:10.309215 systemd[1]: Finished lvm2-activation.service.
Feb 12 21:58:10.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:10.311667 systemd[1]: Reached target local-fs-pre.target.
Feb 12 21:58:10.315630 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 21:58:10.315674 systemd[1]: Reached target local-fs.target.
Feb 12 21:58:10.317082 systemd[1]: Reached target machines.target.
Feb 12 21:58:10.320169 systemd[1]: Starting ldconfig.service...
Feb 12 21:58:10.321842 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 21:58:10.321920 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 21:58:10.324158 systemd[1]: Starting systemd-boot-update.service...
Feb 12 21:58:10.326601 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 21:58:10.329716 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 21:58:10.331194 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 21:58:10.331288 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 21:58:10.333898 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 21:58:10.350035 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1569 (bootctl)
Feb 12 21:58:10.352069 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 21:58:10.375332 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 21:58:10.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:10.383162 systemd-tmpfiles[1572]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 21:58:10.385309 systemd-tmpfiles[1572]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 21:58:10.392312 systemd-tmpfiles[1572]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 21:58:10.506637 systemd-fsck[1578]: fsck.fat 4.2 (2021-01-31)
Feb 12 21:58:10.506637 systemd-fsck[1578]: /dev/nvme0n1p1: 789 files, 115339/258078 clusters
Feb 12 21:58:10.511939 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 21:58:10.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:10.515106 systemd[1]: Mounting boot.mount...
Feb 12 21:58:10.537894 systemd[1]: Mounted boot.mount.
Feb 12 21:58:10.615667 systemd[1]: Finished systemd-boot-update.service.
Feb 12 21:58:10.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:10.722833 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 21:58:10.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:10.726293 systemd[1]: Starting audit-rules.service...
Feb 12 21:58:10.730401 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 21:58:10.738246 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 21:58:10.741000 audit: BPF prog-id=27 op=LOAD
Feb 12 21:58:10.745420 systemd[1]: Starting systemd-resolved.service...
Feb 12 21:58:10.749000 audit: BPF prog-id=28 op=LOAD
Feb 12 21:58:10.756617 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 21:58:10.762709 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 21:58:10.775220 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 21:58:10.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:10.778015 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 21:58:10.864851 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 21:58:10.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:10.871000 audit[1598]: SYSTEM_BOOT pid=1598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:10.877844 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 21:58:10.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:10.966360 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 21:58:10.971187 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 21:58:10.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:58:10.973000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 21:58:10.973000 audit[1613]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe893132d0 a2=420 a3=0 items=0 ppid=1592 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:58:10.973000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 21:58:10.974368 augenrules[1613]: No rules
Feb 12 21:58:10.975542 systemd[1]: Finished audit-rules.service.
Feb 12 21:58:11.010808 ldconfig[1568]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 21:58:11.016914 systemd-resolved[1596]: Positive Trust Anchors:
Feb 12 21:58:11.016938 systemd-resolved[1596]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 21:58:11.016979 systemd-resolved[1596]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 21:58:11.019968 systemd[1]: Finished ldconfig.service.
Feb 12 21:58:11.027590 systemd[1]: Starting systemd-update-done.service...
Feb 12 21:58:11.047542 systemd[1]: Started systemd-timesyncd.service.
Feb 12 21:58:11.049606 systemd[1]: Finished systemd-update-done.service.
Feb 12 21:58:11.051395 systemd[1]: Reached target time-set.target.
Feb 12 21:58:11.056537 systemd-resolved[1596]: Defaulting to hostname 'linux'.
Feb 12 21:58:11.059306 systemd[1]: Started systemd-resolved.service.
Feb 12 21:58:11.060739 systemd[1]: Reached target network.target.
Feb 12 21:58:11.062264 systemd[1]: Reached target nss-lookup.target.
Feb 12 21:58:11.063426 systemd[1]: Reached target sysinit.target.
Feb 12 21:58:11.064724 systemd[1]: Started motdgen.path.
Feb 12 21:58:11.065715 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 21:58:11.068229 systemd[1]: Started logrotate.timer.
Feb 12 21:58:11.069234 systemd[1]: Started mdadm.timer.
Feb 12 21:58:11.071295 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 12 21:58:11.072635 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 12 21:58:11.072667 systemd[1]: Reached target paths.target.
Feb 12 21:58:11.074148 systemd[1]: Reached target timers.target.
Feb 12 21:58:11.075691 systemd[1]: Listening on dbus.socket.
Feb 12 21:58:11.078010 systemd[1]: Starting docker.socket...
Feb 12 21:58:11.083457 systemd[1]: Listening on sshd.socket.
Feb 12 21:58:11.084698 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 21:58:11.085311 systemd[1]: Listening on docker.socket.
Feb 12 21:58:11.086694 systemd[1]: Reached target sockets.target.
Feb 12 21:58:11.088100 systemd[1]: Reached target basic.target.
Feb 12 21:58:11.089272 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 21:58:11.089301 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 21:58:11.090486 systemd[1]: Starting containerd.service...
Feb 12 21:58:11.093074 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 12 21:58:11.099270 systemd[1]: Starting dbus.service...
Feb 12 21:58:11.101361 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 21:58:11.103959 systemd[1]: Starting extend-filesystems.service...
Feb 12 21:58:11.105728 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 21:58:11.109819 systemd[1]: Starting motdgen.service...
Feb 12 21:58:11.112624 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 21:58:11.116127 systemd[1]: Starting prepare-critools.service...
Feb 12 21:58:11.119126 systemd[1]: Starting prepare-helm.service...
Feb 12 21:58:11.121995 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 21:58:11.125621 systemd[1]: Starting sshd-keygen.service...
Feb 12 21:58:11.134149 systemd[1]: Starting systemd-logind.service...
Feb 12 21:58:11.137591 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 21:58:11.137813 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 21:58:11.139270 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 12 21:58:11.140653 systemd[1]: Starting update-engine.service...
Feb 12 21:58:11.144582 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 21:58:11.181740 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 21:58:11.181992 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 21:58:11.183938 jq[1635]: true
Feb 12 21:58:11.184229 jq[1625]: false
Feb 12 21:58:11.231375 tar[1637]: ./
Feb 12 21:58:11.231375 tar[1637]: ./loopback
Feb 12 21:58:11.233698 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 21:58:11.233958 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 21:58:11.236755 systemd-timesyncd[1597]: Contacted time server 205.233.73.201:123 (0.flatcar.pool.ntp.org).
Feb 12 21:58:11.237307 systemd-timesyncd[1597]: Initial clock synchronization to Mon 2024-02-12 21:58:11.515671 UTC.
Feb 12 21:58:11.253394 tar[1639]: crictl
Feb 12 21:58:11.256646 tar[1638]: linux-amd64/helm
Feb 12 21:58:11.292643 jq[1642]: true
Feb 12 21:58:11.327258 dbus-daemon[1624]: [system] SELinux support is enabled
Feb 12 21:58:11.330810 systemd[1]: Started dbus.service.
Feb 12 21:58:11.335776 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 21:58:11.335994 systemd[1]: Finished motdgen.service.
Feb 12 21:58:11.337396 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 21:58:11.337435 systemd[1]: Reached target system-config.target.
Feb 12 21:58:11.339073 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 21:58:11.339105 systemd[1]: Reached target user-config.target.
Feb 12 21:58:11.351893 dbus-daemon[1624]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1461 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 12 21:58:11.357929 systemd[1]: Starting systemd-hostnamed.service...
Feb 12 21:58:11.364006 extend-filesystems[1626]: Found nvme0n1
Feb 12 21:58:11.385766 extend-filesystems[1626]: Found nvme0n1p1
Feb 12 21:58:11.385766 extend-filesystems[1626]: Found nvme0n1p2
Feb 12 21:58:11.385766 extend-filesystems[1626]: Found nvme0n1p3
Feb 12 21:58:11.385766 extend-filesystems[1626]: Found usr
Feb 12 21:58:11.385766 extend-filesystems[1626]: Found nvme0n1p4
Feb 12 21:58:11.385766 extend-filesystems[1626]: Found nvme0n1p6
Feb 12 21:58:11.385766 extend-filesystems[1626]: Found nvme0n1p7
Feb 12 21:58:11.385766 extend-filesystems[1626]: Found nvme0n1p9
Feb 12 21:58:11.385766 extend-filesystems[1626]: Checking size of /dev/nvme0n1p9
Feb 12 21:58:11.413896 extend-filesystems[1626]: Resized partition /dev/nvme0n1p9
Feb 12 21:58:11.421681 update_engine[1634]: I0212 21:58:11.420580  1634 main.cc:92] Flatcar Update Engine starting
Feb 12 21:58:11.432969 systemd[1]: Created slice system-sshd.slice.
Feb 12 21:58:11.434188 update_engine[1634]: I0212 21:58:11.434056  1634 update_check_scheduler.cc:74] Next update check in 11m59s
Feb 12 21:58:11.434909 systemd[1]: Started update-engine.service.
Feb 12 21:58:11.438232 extend-filesystems[1675]: resize2fs 1.46.5 (30-Dec-2021)
Feb 12 21:58:11.440320 systemd[1]: Started locksmithd.service.
Feb 12 21:58:11.455494 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 12 21:58:11.557575 systemd-networkd[1461]: eth0: Gained IPv6LL
Feb 12 21:58:11.561769 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 21:58:11.563219 systemd[1]: Reached target network-online.target.
Feb 12 21:58:11.566054 systemd[1]: Started amazon-ssm-agent.service.
Feb 12 21:58:11.569508 systemd[1]: Started nvidia.service.
Feb 12 21:58:11.601789 env[1643]: time="2024-02-12T21:58:11.601698168Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 21:58:11.604469 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 12 21:58:11.654662 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 21:58:11.654870 systemd[1]: Finished extend-filesystems.service.
Feb 12 21:58:11.665196 extend-filesystems[1675]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 12 21:58:11.665196 extend-filesystems[1675]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 12 21:58:11.665196 extend-filesystems[1675]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 12 21:58:11.688246 bash[1693]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 21:58:11.688361 extend-filesystems[1626]: Resized filesystem in /dev/nvme0n1p9
Feb 12 21:58:11.690098 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 21:58:11.741324 systemd-logind[1633]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 12 21:58:11.741359 systemd-logind[1633]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 12 21:58:11.741382 systemd-logind[1633]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 12 21:58:11.741983 systemd-logind[1633]: New seat seat0.
Feb 12 21:58:11.744635 systemd[1]: Started systemd-logind.service.
Feb 12 21:58:11.853038 tar[1637]: ./bandwidth
Feb 12 21:58:11.870737 amazon-ssm-agent[1699]: 2024/02/12 21:58:11 Failed to load instance info from vault. RegistrationKey does not exist.
Feb 12 21:58:11.877237 amazon-ssm-agent[1699]: Initializing new seelog logger
Feb 12 21:58:11.877461 amazon-ssm-agent[1699]: New Seelog Logger Creation Complete
Feb 12 21:58:11.877561 amazon-ssm-agent[1699]: 2024/02/12 21:58:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 12 21:58:11.877561 amazon-ssm-agent[1699]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 12 21:58:11.877967 amazon-ssm-agent[1699]: 2024/02/12 21:58:11 processing appconfig overrides
Feb 12 21:58:11.971320 dbus-daemon[1624]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 12 21:58:11.971510 systemd[1]: Started systemd-hostnamed.service.
Feb 12 21:58:11.972004 dbus-daemon[1624]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1668 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 12 21:58:11.977362 systemd[1]: Starting polkit.service...
Feb 12 21:58:11.983280 systemd[1]: nvidia.service: Deactivated successfully.
Feb 12 21:58:12.008196 polkitd[1749]: Started polkitd version 121
Feb 12 21:58:12.036630 polkitd[1749]: Loading rules from directory /etc/polkit-1/rules.d
Feb 12 21:58:12.036724 polkitd[1749]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 12 21:58:12.044721 polkitd[1749]: Finished loading, compiling and executing 2 rules
Feb 12 21:58:12.045335 dbus-daemon[1624]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 12 21:58:12.045567 systemd[1]: Started polkit.service.
Feb 12 21:58:12.046162 polkitd[1749]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 12 21:58:12.064422 systemd-hostnamed[1668]: Hostname set to <ip-172-31-21-40> (transient)
Feb 12 21:58:12.064542 systemd-resolved[1596]: System hostname changed to 'ip-172-31-21-40'.
Feb 12 21:58:12.068249 env[1643]: time="2024-02-12T21:58:12.068169413Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 21:58:12.068449 env[1643]: time="2024-02-12T21:58:12.068424407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 21:58:12.073557 env[1643]: time="2024-02-12T21:58:12.073504603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 21:58:12.073557 env[1643]: time="2024-02-12T21:58:12.073555700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 21:58:12.074163 env[1643]: time="2024-02-12T21:58:12.074125442Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 21:58:12.074354 env[1643]: time="2024-02-12T21:58:12.074163609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 21:58:12.074354 env[1643]: time="2024-02-12T21:58:12.074185712Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 21:58:12.074354 env[1643]: time="2024-02-12T21:58:12.074200100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 21:58:12.074497 env[1643]: time="2024-02-12T21:58:12.074390185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 21:58:12.075139 env[1643]: time="2024-02-12T21:58:12.075110151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 21:58:12.075735 env[1643]: time="2024-02-12T21:58:12.075701088Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 21:58:12.075805 env[1643]: time="2024-02-12T21:58:12.075735745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 21:58:12.076836 env[1643]: time="2024-02-12T21:58:12.075808192Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 21:58:12.076920 env[1643]: time="2024-02-12T21:58:12.076842394Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 21:58:12.105380 env[1643]: time="2024-02-12T21:58:12.105334791Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 21:58:12.105524 env[1643]: time="2024-02-12T21:58:12.105404154Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 21:58:12.105524 env[1643]: time="2024-02-12T21:58:12.105424203Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 21:58:12.105524 env[1643]: time="2024-02-12T21:58:12.105512039Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 21:58:12.105648 env[1643]: time="2024-02-12T21:58:12.105533693Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 21:58:12.105648 env[1643]: time="2024-02-12T21:58:12.105617288Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 21:58:12.105749 env[1643]: time="2024-02-12T21:58:12.105655759Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 21:58:12.105749 env[1643]: time="2024-02-12T21:58:12.105682298Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 21:58:12.105749 env[1643]: time="2024-02-12T21:58:12.105718850Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 21:58:12.105749 env[1643]: time="2024-02-12T21:58:12.105741788Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 21:58:12.105988 env[1643]: time="2024-02-12T21:58:12.105761783Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 21:58:12.105988 env[1643]: time="2024-02-12T21:58:12.105797530Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 21:58:12.106081 env[1643]: time="2024-02-12T21:58:12.106060577Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 21:58:12.106305 env[1643]: time="2024-02-12T21:58:12.106282990Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 21:58:12.108663 env[1643]: time="2024-02-12T21:58:12.108614690Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 21:58:12.108760 env[1643]: time="2024-02-12T21:58:12.108725483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.108760 env[1643]: time="2024-02-12T21:58:12.108751002Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 21:58:12.108850 env[1643]: time="2024-02-12T21:58:12.108831221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.108898 env[1643]: time="2024-02-12T21:58:12.108853853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.108898 env[1643]: time="2024-02-12T21:58:12.108874731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.108977 env[1643]: time="2024-02-12T21:58:12.108893604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.108977 env[1643]: time="2024-02-12T21:58:12.108915051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.108977 env[1643]: time="2024-02-12T21:58:12.108934610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.108977 env[1643]: time="2024-02-12T21:58:12.108953321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.109233 env[1643]: time="2024-02-12T21:58:12.108972826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.109233 env[1643]: time="2024-02-12T21:58:12.109008056Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 21:58:12.109327 env[1643]: time="2024-02-12T21:58:12.109257334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.109327 env[1643]: time="2024-02-12T21:58:12.109284620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.109327 env[1643]: time="2024-02-12T21:58:12.109305164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.109448 env[1643]: time="2024-02-12T21:58:12.109326160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 21:58:12.109448 env[1643]: time="2024-02-12T21:58:12.109350456Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 21:58:12.109448 env[1643]: time="2024-02-12T21:58:12.109371378Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 21:58:12.109448 env[1643]: time="2024-02-12T21:58:12.109407384Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 21:58:12.109630 env[1643]: time="2024-02-12T21:58:12.109454520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 21:58:12.109830 env[1643]: time="2024-02-12T21:58:12.109759125Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 21:58:12.112776 env[1643]: time="2024-02-12T21:58:12.109848327Z" level=info msg="Connect containerd service"
Feb 12 21:58:12.112776 env[1643]: time="2024-02-12T21:58:12.109900405Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 21:58:12.112892 env[1643]: time="2024-02-12T21:58:12.112831855Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 21:58:12.113144 env[1643]: time="2024-02-12T21:58:12.113106830Z" level=info msg="Start subscribing containerd event"
Feb 12 21:58:12.113206 env[1643]: time="2024-02-12T21:58:12.113172911Z" level=info msg="Start recovering state"
Feb 12 21:58:12.113265 env[1643]: time="2024-02-12T21:58:12.113249889Z" level=info msg="Start event monitor"
Feb 12 21:58:12.113307 env[1643]: time="2024-02-12T21:58:12.113283848Z" level=info msg="Start snapshots syncer"
Feb 12 21:58:12.114491 env[1643]: time="2024-02-12T21:58:12.114382141Z" level=info msg="Start cni network conf syncer for default"
Feb 12 21:58:12.114491 env[1643]: time="2024-02-12T21:58:12.114479123Z" level=info msg="Start streaming server"
Feb 12 21:58:12.115708 env[1643]: time="2024-02-12T21:58:12.115683856Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 21:58:12.115981 env[1643]: time="2024-02-12T21:58:12.115958442Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 21:58:12.116192 systemd[1]: Started containerd.service.
Feb 12 21:58:12.132838 env[1643]: time="2024-02-12T21:58:12.132531894Z" level=info msg="containerd successfully booted in 0.635241s"
Feb 12 21:58:12.221779 tar[1637]: ./ptp
Feb 12 21:58:12.463343 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO Create new startup processor
Feb 12 21:58:12.464535 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [LongRunningPluginsManager] registered plugins: {}
Feb 12 21:58:12.464643 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO Initializing bookkeeping folders
Feb 12 21:58:12.464643 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO removing the completed state files
Feb 12 21:58:12.464643 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO Initializing bookkeeping folders for long running plugins
Feb 12 21:58:12.464643 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Feb 12 21:58:12.464643 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO Initializing healthcheck folders for long running plugins
Feb 12 21:58:12.464643 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO Initializing locations for inventory plugin
Feb 12 21:58:12.464643 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO Initializing default location for custom inventory
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO Initializing default location for file inventory
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO Initializing default location for role inventory
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO Init the cloudwatchlogs publisher
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [instanceID=i-061cf22401794f08b] Successfully loaded platform independent plugin aws:softwareInventory
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [instanceID=i-061cf22401794f08b] Successfully loaded platform independent plugin aws:configureDocker
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [instanceID=i-061cf22401794f08b] Successfully loaded platform independent plugin aws:configurePackage
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [instanceID=i-061cf22401794f08b] Successfully loaded platform independent plugin aws:refreshAssociation
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [instanceID=i-061cf22401794f08b] Successfully loaded platform independent plugin aws:downloadContent
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [instanceID=i-061cf22401794f08b] Successfully loaded platform independent plugin aws:runDocument
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [instanceID=i-061cf22401794f08b] Successfully loaded platform independent plugin aws:runPowerShellScript
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [instanceID=i-061cf22401794f08b] Successfully loaded platform independent plugin aws:updateSsmAgent
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [instanceID=i-061cf22401794f08b] Successfully loaded platform independent plugin aws:runDockerAction
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [instanceID=i-061cf22401794f08b] Successfully loaded platform dependent plugin aws:runShellScript
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Feb 12 21:58:12.464922 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO OS: linux, Arch: amd64
Feb 12 21:58:12.466446 amazon-ssm-agent[1699]: datastore file /var/lib/amazon/ssm/i-061cf22401794f08b/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Feb 12 21:58:12.466767 coreos-metadata[1623]: Feb 12 21:58:12.465 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 12 21:58:12.473640 coreos-metadata[1623]: Feb 12 21:58:12.473 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Feb 12 21:58:12.474305 coreos-metadata[1623]: Feb 12 21:58:12.474 INFO Fetch successful
Feb 12 21:58:12.474392 coreos-metadata[1623]: Feb 12 21:58:12.474 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 12 21:58:12.475139 coreos-metadata[1623]: Feb 12 21:58:12.475 INFO Fetch successful
Feb 12 21:58:12.477739 unknown[1623]: wrote ssh authorized keys file for user: core
Feb 12 21:58:12.522248 update-ssh-keys[1818]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 21:58:12.522945 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb 12 21:58:12.537240 tar[1637]: ./vlan
Feb 12 21:58:12.570067 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessagingDeliveryService] Starting document processing engine...
Feb 12 21:58:12.664863 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Feb 12 21:58:12.679352 tar[1637]: ./host-device
Feb 12 21:58:12.759327 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Feb 12 21:58:12.812070 tar[1637]: ./tuning
Feb 12 21:58:12.854023 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessageGatewayService] Starting session document processing engine...
Feb 12 21:58:12.925471 tar[1637]: ./vrf
Feb 12 21:58:12.948639 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessageGatewayService] [EngineProcessor] Starting
Feb 12 21:58:13.020448 tar[1637]: ./sbr
Feb 12 21:58:13.043582 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Feb 12 21:58:13.116881 tar[1637]: ./tap
Feb 12 21:58:13.138643 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-061cf22401794f08b, requestId: 735fd813-0cd8-48d2-b6bf-b8eb5c7fb2d6
Feb 12 21:58:13.222873 tar[1637]: ./dhcp
Feb 12 21:58:13.233938 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [OfflineService] Starting document processing engine...
Feb 12 21:58:13.329492 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [OfflineService] [EngineProcessor] Starting
Feb 12 21:58:13.425122 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [OfflineService] [EngineProcessor] Initial processing
Feb 12 21:58:13.446384 tar[1638]: linux-amd64/LICENSE
Feb 12 21:58:13.446805 tar[1638]: linux-amd64/README.md
Feb 12 21:58:13.458533 systemd[1]: Finished prepare-helm.service.
Feb 12 21:58:13.502200 tar[1637]: ./static
Feb 12 21:58:13.503389 systemd[1]: Finished prepare-critools.service.
Feb 12 21:58:13.521133 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [OfflineService] Starting message polling
Feb 12 21:58:13.544464 tar[1637]: ./firewall
Feb 12 21:58:13.606447 tar[1637]: ./macvlan
Feb 12 21:58:13.617223 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [OfflineService] Starting send replies to MDS
Feb 12 21:58:13.661445 tar[1637]: ./dummy
Feb 12 21:58:13.713632 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [LongRunningPluginsManager] starting long running plugin manager
Feb 12 21:58:13.719927 tar[1637]: ./bridge
Feb 12 21:58:13.797729 tar[1637]: ./ipvlan
Feb 12 21:58:13.809993 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Feb 12 21:58:13.866132 tar[1637]: ./portmap
Feb 12 21:58:13.906644 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [HealthCheck] HealthCheck reporting agent health.
Feb 12 21:58:13.930550 tar[1637]: ./host-local
Feb 12 21:58:14.001350 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 21:58:14.003506 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessagingDeliveryService] Starting message polling
Feb 12 21:58:14.010520 sshd_keygen[1662]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 21:58:14.054418 systemd[1]: Finished sshd-keygen.service.
Feb 12 21:58:14.059722 systemd[1]: Starting issuegen.service...
Feb 12 21:58:14.063188 systemd[1]: Started sshd@0-172.31.21.40:22-139.178.89.65:32860.service.
Feb 12 21:58:14.074898 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 21:58:14.075120 systemd[1]: Finished issuegen.service.
Feb 12 21:58:14.078620 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 21:58:14.093225 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 21:58:14.096862 systemd[1]: Started getty@tty1.service.
Feb 12 21:58:14.100685 systemd[1]: Started serial-getty@ttyS0.service.
Feb 12 21:58:14.102223 systemd[1]: Reached target getty.target.
Feb 12 21:58:14.103597 systemd[1]: Reached target multi-user.target.
Feb 12 21:58:14.106793 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 21:58:14.111636 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessagingDeliveryService] Starting send replies to MDS
Feb 12 21:58:14.119939 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 21:58:14.120130 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 21:58:14.121715 systemd[1]: Startup finished in 769ms (kernel) + 25.799s (initrd) + 9.553s (userspace) = 36.122s.
Feb 12 21:58:14.142753 locksmithd[1679]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 21:58:14.206480 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [instanceID=i-061cf22401794f08b] Starting association polling
Feb 12 21:58:14.269983 sshd[1834]: Accepted publickey for core from 139.178.89.65 port 32860 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:58:14.274462 sshd[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:58:14.295120 systemd[1]: Created slice user-500.slice.
Feb 12 21:58:14.300132 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 21:58:14.310547 systemd-logind[1633]: New session 1 of user core.
Feb 12 21:58:14.319621 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Feb 12 21:58:14.330980 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 21:58:14.337755 systemd[1]: Starting user@500.service...
Feb 12 21:58:14.350108 (systemd)[1845]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:58:14.417707 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessagingDeliveryService] [Association] Launching response handler
Feb 12 21:58:14.488749 systemd[1845]: Queued start job for default target default.target.
Feb 12 21:58:14.489853 systemd[1845]: Reached target paths.target.
Feb 12 21:58:14.489998 systemd[1845]: Reached target sockets.target.
Feb 12 21:58:14.490059 systemd[1845]: Reached target timers.target.
Feb 12 21:58:14.490076 systemd[1845]: Reached target basic.target.
Feb 12 21:58:14.490210 systemd[1]: Started user@500.service.
Feb 12 21:58:14.492081 systemd[1]: Started session-1.scope.
Feb 12 21:58:14.493332 systemd[1845]: Reached target default.target.
Feb 12 21:58:14.493754 systemd[1845]: Startup finished in 127ms.
Feb 12 21:58:14.515518 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Feb 12 21:58:14.613611 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Feb 12 21:58:14.647211 systemd[1]: Started sshd@1-172.31.21.40:22-139.178.89.65:32870.service.
Feb 12 21:58:14.712004 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Feb 12 21:58:14.810804 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessageGatewayService] listening reply.
Feb 12 21:58:14.821158 sshd[1854]: Accepted publickey for core from 139.178.89.65 port 32870 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:58:14.826132 sshd[1854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:58:14.838582 systemd-logind[1633]: New session 2 of user core.
Feb 12 21:58:14.840169 systemd[1]: Started session-2.scope.
Feb 12 21:58:14.909894 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Feb 12 21:58:15.003538 sshd[1854]: pam_unix(sshd:session): session closed for user core
Feb 12 21:58:15.008856 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [StartupProcessor] Executing startup processor tasks
Feb 12 21:58:15.008950 systemd[1]: sshd@1-172.31.21.40:22-139.178.89.65:32870.service: Deactivated successfully.
Feb 12 21:58:15.010410 systemd[1]: session-2.scope: Deactivated successfully.
Feb 12 21:58:15.011556 systemd-logind[1633]: Session 2 logged out. Waiting for processes to exit.
Feb 12 21:58:15.013151 systemd-logind[1633]: Removed session 2.
Feb 12 21:58:15.032942 systemd[1]: Started sshd@2-172.31.21.40:22-139.178.89.65:32878.service.
Feb 12 21:58:15.108127 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Feb 12 21:58:15.207383 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Feb 12 21:58:15.235391 sshd[1861]: Accepted publickey for core from 139.178.89.65 port 32878 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:58:15.237153 sshd[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:58:15.242951 systemd[1]: Started session-3.scope.
Feb 12 21:58:15.243737 systemd-logind[1633]: New session 3 of user core.
Feb 12 21:58:15.307653 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2
Feb 12 21:58:15.368413 sshd[1861]: pam_unix(sshd:session): session closed for user core
Feb 12 21:58:15.372171 systemd[1]: sshd@2-172.31.21.40:22-139.178.89.65:32878.service: Deactivated successfully.
Feb 12 21:58:15.373133 systemd[1]: session-3.scope: Deactivated successfully.
Feb 12 21:58:15.374065 systemd-logind[1633]: Session 3 logged out. Waiting for processes to exit.
Feb 12 21:58:15.375581 systemd-logind[1633]: Removed session 3.
Feb 12 21:58:15.395556 systemd[1]: Started sshd@3-172.31.21.40:22-139.178.89.65:32880.service.
Feb 12 21:58:15.406979 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-061cf22401794f08b?role=subscribe&stream=input
Feb 12 21:58:15.508742 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-061cf22401794f08b?role=subscribe&stream=input
Feb 12 21:58:15.568935 sshd[1867]: Accepted publickey for core from 139.178.89.65 port 32880 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:58:15.570859 sshd[1867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:58:15.577665 systemd-logind[1633]: New session 4 of user core.
Feb 12 21:58:15.578361 systemd[1]: Started session-4.scope.
Feb 12 21:58:15.608575 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessageGatewayService] Starting receiving message from control channel
Feb 12 21:58:15.709001 sshd[1867]: pam_unix(sshd:session): session closed for user core
Feb 12 21:58:15.710109 amazon-ssm-agent[1699]: 2024-02-12 21:58:12 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Feb 12 21:58:15.712345 systemd[1]: sshd@3-172.31.21.40:22-139.178.89.65:32880.service: Deactivated successfully.
Feb 12 21:58:15.713434 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 21:58:15.714241 systemd-logind[1633]: Session 4 logged out. Waiting for processes to exit.
Feb 12 21:58:15.715187 systemd-logind[1633]: Removed session 4.
Feb 12 21:58:15.734563 systemd[1]: Started sshd@4-172.31.21.40:22-139.178.89.65:32882.service.
Feb 12 21:58:15.809041 amazon-ssm-agent[1699]: 2024-02-12 21:58:15 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Feb 12 21:58:15.904571 sshd[1873]: Accepted publickey for core from 139.178.89.65 port 32882 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:58:15.906690 sshd[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:58:15.913622 systemd-logind[1633]: New session 5 of user core.
Feb 12 21:58:15.914912 systemd[1]: Started session-5.scope.
Feb 12 21:58:16.044660 sudo[1876]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 21:58:16.045088 sudo[1876]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 21:58:16.683351 systemd[1]: Starting docker.service...
Feb 12 21:58:16.737652 env[1891]: time="2024-02-12T21:58:16.737589513Z" level=info msg="Starting up"
Feb 12 21:58:16.740328 env[1891]: time="2024-02-12T21:58:16.740215242Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 21:58:16.740328 env[1891]: time="2024-02-12T21:58:16.740323459Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 21:58:16.740513 env[1891]: time="2024-02-12T21:58:16.740350473Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb 12 21:58:16.740513 env[1891]: time="2024-02-12T21:58:16.740364765Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 21:58:16.746999 env[1891]: time="2024-02-12T21:58:16.746971680Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 21:58:16.747574 env[1891]: time="2024-02-12T21:58:16.747502157Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 21:58:16.747574 env[1891]: time="2024-02-12T21:58:16.747536003Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb 12 21:58:16.747574 env[1891]: time="2024-02-12T21:58:16.747550002Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 21:58:16.981123 env[1891]: time="2024-02-12T21:58:16.980999151Z" level=info msg="Loading containers: start."
Feb 12 21:58:17.160479 kernel: Initializing XFRM netlink socket
Feb 12 21:58:17.202183 env[1891]: time="2024-02-12T21:58:17.202140783Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 12 21:58:17.203502 (udev-worker)[1904]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:58:17.352697 systemd-networkd[1461]: docker0: Link UP
Feb 12 21:58:17.366005 env[1891]: time="2024-02-12T21:58:17.365963390Z" level=info msg="Loading containers: done."
Feb 12 21:58:17.380516 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck181327441-merged.mount: Deactivated successfully.
Feb 12 21:58:17.412370 env[1891]: time="2024-02-12T21:58:17.412318045Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 12 21:58:17.412627 env[1891]: time="2024-02-12T21:58:17.412584686Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 12 21:58:17.412808 env[1891]: time="2024-02-12T21:58:17.412783332Z" level=info msg="Daemon has completed initialization"
Feb 12 21:58:17.431678 systemd[1]: Started docker.service.
Feb 12 21:58:17.442470 env[1891]: time="2024-02-12T21:58:17.442389229Z" level=info msg="API listen on /run/docker.sock"
Feb 12 21:58:17.465490 systemd[1]: Reloading.
Feb 12 21:58:17.567749 /usr/lib/systemd/system-generators/torcx-generator[2031]: time="2024-02-12T21:58:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:58:17.569115 /usr/lib/systemd/system-generators/torcx-generator[2031]: time="2024-02-12T21:58:17Z" level=info msg="torcx already run"
Feb 12 21:58:17.671790 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:58:17.671814 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:58:17.694352 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:58:17.814073 systemd[1]: Started kubelet.service.
Feb 12 21:58:17.918006 kubelet[2082]: E0212 21:58:17.917881    2082 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 12 21:58:17.921220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 21:58:17.921537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 21:58:18.504779 env[1643]: time="2024-02-12T21:58:18.504720046Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\""
Feb 12 21:58:19.112596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738284483.mount: Deactivated successfully.
Feb 12 21:58:21.703787 env[1643]: time="2024-02-12T21:58:21.703687586Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:21.706964 env[1643]: time="2024-02-12T21:58:21.706921538Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:21.709946 env[1643]: time="2024-02-12T21:58:21.709902085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:21.712440 env[1643]: time="2024-02-12T21:58:21.712396378Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:21.713807 env[1643]: time="2024-02-12T21:58:21.713734183Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\""
Feb 12 21:58:21.733456 env[1643]: time="2024-02-12T21:58:21.733400013Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\""
Feb 12 21:58:24.403498 env[1643]: time="2024-02-12T21:58:24.403426381Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:24.406378 env[1643]: time="2024-02-12T21:58:24.406327333Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:24.408963 env[1643]: time="2024-02-12T21:58:24.408917474Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:24.411687 env[1643]: time="2024-02-12T21:58:24.411641655Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:24.412670 env[1643]: time="2024-02-12T21:58:24.412626935Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\""
Feb 12 21:58:24.429745 env[1643]: time="2024-02-12T21:58:24.429136307Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\""
Feb 12 21:58:25.955572 env[1643]: time="2024-02-12T21:58:25.955517107Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:25.958536 env[1643]: time="2024-02-12T21:58:25.958402128Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:25.961196 env[1643]: time="2024-02-12T21:58:25.961154284Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:25.963955 env[1643]: time="2024-02-12T21:58:25.963896361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:25.964695 env[1643]: time="2024-02-12T21:58:25.964660492Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\""
Feb 12 21:58:25.977049 env[1643]: time="2024-02-12T21:58:25.977011814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\""
Feb 12 21:58:27.243179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2328284708.mount: Deactivated successfully.
Feb 12 21:58:27.985606 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 12 21:58:27.985979 systemd[1]: Stopped kubelet.service.
Feb 12 21:58:27.991557 systemd[1]: Started kubelet.service.
Feb 12 21:58:28.058184 env[1643]: time="2024-02-12T21:58:28.058120503Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:28.061974 env[1643]: time="2024-02-12T21:58:28.061923794Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:28.066161 env[1643]: time="2024-02-12T21:58:28.066112687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:28.070146 env[1643]: time="2024-02-12T21:58:28.070096833Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:28.071377 env[1643]: time="2024-02-12T21:58:28.071317394Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\""
Feb 12 21:58:28.090097 env[1643]: time="2024-02-12T21:58:28.090055316Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 12 21:58:28.098873 kubelet[2115]: E0212 21:58:28.098541    2115 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 12 21:58:28.103130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 21:58:28.103282 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 21:58:28.629722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109347979.mount: Deactivated successfully.
Feb 12 21:58:28.642011 env[1643]: time="2024-02-12T21:58:28.641953589Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:28.645017 env[1643]: time="2024-02-12T21:58:28.644928899Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:28.649049 env[1643]: time="2024-02-12T21:58:28.649002554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:28.652046 env[1643]: time="2024-02-12T21:58:28.652004198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:28.652882 env[1643]: time="2024-02-12T21:58:28.652839815Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
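(Each PullImage above resolves a tag to a content digest and records ImageCreate/ImageUpdate events for the tag, the image config blob, and the digest reference. A sketch of the same pull through the public containerd Go client, assuming the default socket path and the k8s.io namespace the CRI plugin uses:)

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// The CRI plugin keeps Kubernetes images in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Matches the "returns image reference" line: the tag plus the
    	// digest it resolved to.
    	fmt.Println(img.Name(), "->", img.Target().Digest)
    }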
Feb 12 21:58:28.671993 env[1643]: time="2024-02-12T21:58:28.671878399Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\""
Feb 12 21:58:29.653079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297253202.mount: Deactivated successfully.
Feb 12 21:58:34.764530 env[1643]: time="2024-02-12T21:58:34.764413918Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:34.767681 env[1643]: time="2024-02-12T21:58:34.767633630Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:34.770196 env[1643]: time="2024-02-12T21:58:34.770150335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:34.772937 env[1643]: time="2024-02-12T21:58:34.772894022Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:34.773545 env[1643]: time="2024-02-12T21:58:34.773504506Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\""
Feb 12 21:58:34.789538 env[1643]: time="2024-02-12T21:58:34.789468030Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb 12 21:58:35.340230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330907645.mount: Deactivated successfully.
Feb 12 21:58:36.273326 env[1643]: time="2024-02-12T21:58:36.273267153Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:36.276582 env[1643]: time="2024-02-12T21:58:36.276536996Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:36.280555 env[1643]: time="2024-02-12T21:58:36.280516132Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:36.283301 env[1643]: time="2024-02-12T21:58:36.283262522Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:36.283928 env[1643]: time="2024-02-12T21:58:36.283896508Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Feb 12 21:58:38.235208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 12 21:58:38.235580 systemd[1]: Stopped kubelet.service.
Feb 12 21:58:38.242686 systemd[1]: Started kubelet.service.
Feb 12 21:58:38.384097 kubelet[2198]: E0212 21:58:38.384050    2198 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 12 21:58:38.387140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 21:58:38.387568 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 21:58:39.881011 systemd[1]: Stopped kubelet.service.
Feb 12 21:58:39.905604 systemd[1]: Reloading.
Feb 12 21:58:40.069210 /usr/lib/systemd/system-generators/torcx-generator[2227]: time="2024-02-12T21:58:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:58:40.069247 /usr/lib/systemd/system-generators/torcx-generator[2227]: time="2024-02-12T21:58:40Z" level=info msg="torcx already run"
Feb 12 21:58:40.205652 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:58:40.205675 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:58:40.244726 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:58:40.379489 systemd[1]: Started kubelet.service.
Feb 12 21:58:40.452699 kubelet[2280]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:58:40.453060 kubelet[2280]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 21:58:40.453106 kubelet[2280]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:58:40.453221 kubelet[2280]: I0212 21:58:40.453197    2280 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 21:58:40.647425 kubelet[2280]: I0212 21:58:40.647376    2280 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 12 21:58:40.647425 kubelet[2280]: I0212 21:58:40.647413    2280 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 21:58:40.648065 kubelet[2280]: I0212 21:58:40.648030    2280 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 12 21:58:40.656479 kubelet[2280]: E0212 21:58:40.656436    2280 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:40.657107 kubelet[2280]: I0212 21:58:40.657088    2280 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 21:58:40.657923 kubelet[2280]: I0212 21:58:40.657902    2280 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 12 21:58:40.658191 kubelet[2280]: I0212 21:58:40.658174    2280 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 21:58:40.658288 kubelet[2280]: I0212 21:58:40.658257    2280 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 21:58:40.658398 kubelet[2280]: I0212 21:58:40.658295    2280 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 21:58:40.658398 kubelet[2280]: I0212 21:58:40.658312    2280 container_manager_linux.go:302] "Creating device plugin manager"
Feb 12 21:58:40.658519 kubelet[2280]: I0212 21:58:40.658431    2280 state_mem.go:36] "Initialized new in-memory state store"
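(The nodeConfig dump above carries the default hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%. A small sketch of how such a mixed quantity/percentage threshold evaluates; the types are illustrative, not kubelet's eviction manager:)

    package main

    import "fmt"

    // threshold is either an absolute quantity in bytes or a fraction of
    // capacity, matching the Quantity/Percentage pairs in the nodeConfig
    // dump above.
    type threshold struct {
    	signal   string
    	quantity int64   // bytes; 0 means "use fraction"
    	fraction float64 // of capacity
    }

    func (t threshold) crossed(available, capacity int64) bool {
    	limit := t.quantity
    	if limit == 0 {
    		limit = int64(t.fraction * float64(capacity))
    	}
    	return available < limit
    }

    func main() {
    	memory := threshold{signal: "memory.available", quantity: 100 << 20}
    	nodefs := threshold{signal: "nodefs.available", fraction: 0.10}
    	fmt.Println(memory.crossed(64<<20, 2<<30)) // true: 64Mi < 100Mi
    	fmt.Println(nodefs.crossed(5<<30, 40<<30)) // false: 12.5% free > 10%
    }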
Feb 12 21:58:40.661544 kubelet[2280]: I0212 21:58:40.661521    2280 kubelet.go:405] "Attempting to sync node with API server"
Feb 12 21:58:40.661544 kubelet[2280]: I0212 21:58:40.661545    2280 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 21:58:40.661699 kubelet[2280]: I0212 21:58:40.661589    2280 kubelet.go:309] "Adding apiserver pod source"
Feb 12 21:58:40.661699 kubelet[2280]: I0212 21:58:40.661607    2280 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
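("Adding static pod path" means the kubelet watches /etc/kubernetes/manifests and runs whatever pod manifests appear there, independently of the still-unreachable API server; that is how the control-plane pods below start before the apiserver exists. A sketch of the directory scan, standard library only — the real kubelet additionally watches for changes and parses each manifest:)

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const dir = "/etc/kubernetes/manifests"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, e := range entries {
    		// kubeadm drops kube-apiserver.yaml, kube-controller-manager.yaml
    		// and kube-scheduler.yaml here on control-plane nodes.
    		if strings.HasSuffix(e.Name(), ".yaml") {
    			fmt.Println("static pod manifest:", filepath.Join(dir, e.Name()))
    		}
    	}
    }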
Feb 12 21:58:40.666399 kubelet[2280]: W0212 21:58:40.666351    2280 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.21.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:40.666518 kubelet[2280]: E0212 21:58:40.666417    2280 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:40.666580 kubelet[2280]: W0212 21:58:40.666501    2280 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.21.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-40&limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:40.666580 kubelet[2280]: E0212 21:58:40.666541    2280 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-40&limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:40.666666 kubelet[2280]: I0212 21:58:40.666624    2280 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 21:58:40.666937 kubelet[2280]: W0212 21:58:40.666912    2280 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
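(The probe warning above is benign: the kubelet checks its --volume-plugin-dir on every start and recreates it when missing. A one-line equivalent of the check-and-recreate; the path is taken from the log line, the permissions are an assumption:)

    package main

    import (
    	"log"
    	"os"
    )

    func main() {
    	const dir = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
    	if _, err := os.Stat(dir); os.IsNotExist(err) {
    		// "does not exist. Recreating."
    		if err := os.MkdirAll(dir, 0o755); err != nil {
    			log.Fatal(err)
    		}
    	}
    }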
Feb 12 21:58:40.667382 kubelet[2280]: I0212 21:58:40.667365    2280 server.go:1168] "Started kubelet"
Feb 12 21:58:40.667681 kubelet[2280]: I0212 21:58:40.667666    2280 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 21:58:40.668384 kubelet[2280]: E0212 21:58:40.667992    2280 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-40.17b33c6c8fa07ae0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-40", UID:"ip-172-31-21-40", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-40"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 58, 40, 667343584, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 58, 40, 667343584, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.21.40:6443/api/v1/namespaces/default/events": dial tcp 172.31.21.40:6443: connect: connection refused'(may retry after sleeping)
Feb 12 21:58:40.668669 kubelet[2280]: I0212 21:58:40.668656    2280 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 21:58:40.669622 kubelet[2280]: I0212 21:58:40.669606    2280 server.go:461] "Adding debug handlers to kubelet server"
Feb 12 21:58:40.671140 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 21:58:40.671281 kubelet[2280]: I0212 21:58:40.671259    2280 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 21:58:40.676787 kubelet[2280]: E0212 21:58:40.676762    2280 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 21:58:40.676976 kubelet[2280]: E0212 21:58:40.676964    2280 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 21:58:40.677565 kubelet[2280]: E0212 21:58:40.677126    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:40.677718 kubelet[2280]: I0212 21:58:40.677704    2280 volume_manager.go:284] "Starting Kubelet Volume Manager"
Feb 12 21:58:40.677919 kubelet[2280]: I0212 21:58:40.677906    2280 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Feb 12 21:58:40.678485 kubelet[2280]: W0212 21:58:40.678420    2280 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.21.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:40.678617 kubelet[2280]: E0212 21:58:40.678593    2280 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:40.679146 kubelet[2280]: E0212 21:58:40.679132    2280 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-40?timeout=10s\": dial tcp 172.31.21.40:6443: connect: connection refused" interval="200ms"
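(Note the lease controller's retry interval: 200ms here, then 400ms at 21:58:40.880, 800ms at 21:58:41.282, and 1.6s at 21:58:42.083 — a doubling backoff while port 6443 still refuses connections. A standalone sketch of that pattern; the endpoint comes from the log, the cap value is an assumption:)

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	interval := 200 * time.Millisecond
    	const maxInterval = 7 * time.Second // illustrative cap, not kubelet's
    	for {
    		conn, err := net.DialTimeout("tcp", "172.31.21.40:6443", time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("apiserver reachable")
    			return
    		}
    		fmt.Printf("dial failed (%v); retrying in %s\n", err, interval)
    		time.Sleep(interval)
    		// Double the interval each failure, as in the log above.
    		if interval *= 2; interval > maxInterval {
    			interval = maxInterval
    		}
    	}
    }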
Feb 12 21:58:40.705800 kubelet[2280]: I0212 21:58:40.705767    2280 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 21:58:40.711003 kubelet[2280]: I0212 21:58:40.710976    2280 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 21:58:40.711003 kubelet[2280]: I0212 21:58:40.711004    2280 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 12 21:58:40.711190 kubelet[2280]: I0212 21:58:40.711024    2280 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 12 21:58:40.711190 kubelet[2280]: E0212 21:58:40.711089    2280 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 21:58:40.714742 kubelet[2280]: W0212 21:58:40.714478    2280 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.21.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:40.714742 kubelet[2280]: E0212 21:58:40.714539    2280 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:40.722639 kubelet[2280]: I0212 21:58:40.722604    2280 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 21:58:40.722639 kubelet[2280]: I0212 21:58:40.722625    2280 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 21:58:40.722639 kubelet[2280]: I0212 21:58:40.722643    2280 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:58:40.725272 kubelet[2280]: I0212 21:58:40.725245    2280 policy_none.go:49] "None policy: Start"
Feb 12 21:58:40.725944 kubelet[2280]: I0212 21:58:40.725915    2280 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 21:58:40.725944 kubelet[2280]: I0212 21:58:40.725941    2280 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 21:58:40.732482 systemd[1]: Created slice kubepods.slice.
Feb 12 21:58:40.736977 systemd[1]: Created slice kubepods-burstable.slice.
Feb 12 21:58:40.740236 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 12 21:58:40.751941 kubelet[2280]: I0212 21:58:40.751408    2280 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 21:58:40.751941 kubelet[2280]: I0212 21:58:40.751691    2280 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 21:58:40.753749 kubelet[2280]: E0212 21:58:40.753729    2280 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-40\" not found"
Feb 12 21:58:40.780080 kubelet[2280]: I0212 21:58:40.780053    2280 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-40"
Feb 12 21:58:40.781132 kubelet[2280]: E0212 21:58:40.781109    2280 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.40:6443/api/v1/nodes\": dial tcp 172.31.21.40:6443: connect: connection refused" node="ip-172-31-21-40"
Feb 12 21:58:40.811273 kubelet[2280]: I0212 21:58:40.811214    2280 topology_manager.go:212] "Topology Admit Handler"
Feb 12 21:58:40.812759 kubelet[2280]: I0212 21:58:40.812734    2280 topology_manager.go:212] "Topology Admit Handler"
Feb 12 21:58:40.814299 kubelet[2280]: I0212 21:58:40.814280    2280 topology_manager.go:212] "Topology Admit Handler"
Feb 12 21:58:40.821601 systemd[1]: Created slice kubepods-burstable-pod1543691372f318e5a8aed563bc8a302a.slice.
Feb 12 21:58:40.823575 kubelet[2280]: W0212 21:58:40.823546    2280 helpers.go:242] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1543691372f318e5a8aed563bc8a302a.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1543691372f318e5a8aed563bc8a302a.slice/cpuset.cpus.effective: no such device
Feb 12 21:58:40.837639 systemd[1]: Created slice kubepods-burstable-podf78ed2ffa13c495d032edc6ca913c319.slice.
Feb 12 21:58:40.844197 systemd[1]: Created slice kubepods-burstable-pod4c246ca3f3f337111d0911412f829f06.slice.
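(With CgroupDriver:systemd and QOS cgroups enabled, each pod lands in a slice named kubepods-<qos>-pod<UID>.slice under kubepods.slice; the three slices above correspond to the three Burstable control-plane pods admitted by the topology manager. A sketch of the naming convention only — UIDs containing dashes are escaped further by the real kubelet, but static-pod UIDs like those above are plain hex:)

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName derives the systemd slice name visible in the log for
    // a pod of a given QOS class and UID.
    func podSliceName(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", strings.ToLower(qos), uid)
    }

    func main() {
    	// Prints kubepods-burstable-pod1543691372f318e5a8aed563bc8a302a.slice
    	fmt.Println(podSliceName("Burstable", "1543691372f318e5a8aed563bc8a302a"))
    }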
Feb 12 21:58:40.878857 kubelet[2280]: I0212 21:58:40.878818    2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1543691372f318e5a8aed563bc8a302a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-40\" (UID: \"1543691372f318e5a8aed563bc8a302a\") " pod="kube-system/kube-controller-manager-ip-172-31-21-40"
Feb 12 21:58:40.879100 kubelet[2280]: I0212 21:58:40.879073    2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1543691372f318e5a8aed563bc8a302a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-40\" (UID: \"1543691372f318e5a8aed563bc8a302a\") " pod="kube-system/kube-controller-manager-ip-172-31-21-40"
Feb 12 21:58:40.879186 kubelet[2280]: I0212 21:58:40.879128    2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1543691372f318e5a8aed563bc8a302a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-40\" (UID: \"1543691372f318e5a8aed563bc8a302a\") " pod="kube-system/kube-controller-manager-ip-172-31-21-40"
Feb 12 21:58:40.879186 kubelet[2280]: I0212 21:58:40.879159    2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c246ca3f3f337111d0911412f829f06-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-40\" (UID: \"4c246ca3f3f337111d0911412f829f06\") " pod="kube-system/kube-apiserver-ip-172-31-21-40"
Feb 12 21:58:40.879186 kubelet[2280]: I0212 21:58:40.879187    2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1543691372f318e5a8aed563bc8a302a-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-40\" (UID: \"1543691372f318e5a8aed563bc8a302a\") " pod="kube-system/kube-controller-manager-ip-172-31-21-40"
Feb 12 21:58:40.879390 kubelet[2280]: I0212 21:58:40.879221    2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1543691372f318e5a8aed563bc8a302a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-40\" (UID: \"1543691372f318e5a8aed563bc8a302a\") " pod="kube-system/kube-controller-manager-ip-172-31-21-40"
Feb 12 21:58:40.879390 kubelet[2280]: I0212 21:58:40.879257    2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f78ed2ffa13c495d032edc6ca913c319-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-40\" (UID: \"f78ed2ffa13c495d032edc6ca913c319\") " pod="kube-system/kube-scheduler-ip-172-31-21-40"
Feb 12 21:58:40.879390 kubelet[2280]: I0212 21:58:40.879290    2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c246ca3f3f337111d0911412f829f06-ca-certs\") pod \"kube-apiserver-ip-172-31-21-40\" (UID: \"4c246ca3f3f337111d0911412f829f06\") " pod="kube-system/kube-apiserver-ip-172-31-21-40"
Feb 12 21:58:40.879390 kubelet[2280]: I0212 21:58:40.879380    2280 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c246ca3f3f337111d0911412f829f06-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-40\" (UID: \"4c246ca3f3f337111d0911412f829f06\") " pod="kube-system/kube-apiserver-ip-172-31-21-40"
Feb 12 21:58:40.880550 kubelet[2280]: E0212 21:58:40.880516    2280 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-40?timeout=10s\": dial tcp 172.31.21.40:6443: connect: connection refused" interval="400ms"
Feb 12 21:58:40.983638 kubelet[2280]: I0212 21:58:40.983544    2280 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-40"
Feb 12 21:58:40.987177 kubelet[2280]: E0212 21:58:40.985976    2280 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.40:6443/api/v1/nodes\": dial tcp 172.31.21.40:6443: connect: connection refused" node="ip-172-31-21-40"
Feb 12 21:58:41.133628 env[1643]: time="2024-02-12T21:58:41.133579148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-40,Uid:1543691372f318e5a8aed563bc8a302a,Namespace:kube-system,Attempt:0,}"
Feb 12 21:58:41.148432 env[1643]: time="2024-02-12T21:58:41.148383486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-40,Uid:4c246ca3f3f337111d0911412f829f06,Namespace:kube-system,Attempt:0,}"
Feb 12 21:58:41.148891 env[1643]: time="2024-02-12T21:58:41.148382464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-40,Uid:f78ed2ffa13c495d032edc6ca913c319,Namespace:kube-system,Attempt:0,}"
Feb 12 21:58:41.282078 kubelet[2280]: E0212 21:58:41.282027    2280 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-40?timeout=10s\": dial tcp 172.31.21.40:6443: connect: connection refused" interval="800ms"
Feb 12 21:58:41.390492 kubelet[2280]: I0212 21:58:41.390421    2280 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-40"
Feb 12 21:58:41.390839 kubelet[2280]: E0212 21:58:41.390822    2280 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.40:6443/api/v1/nodes\": dial tcp 172.31.21.40:6443: connect: connection refused" node="ip-172-31-21-40"
Feb 12 21:58:41.568337 kubelet[2280]: W0212 21:58:41.568237    2280 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.21.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:41.568337 kubelet[2280]: E0212 21:58:41.568282    2280 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:41.591238 kubelet[2280]: W0212 21:58:41.591177    2280 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.21.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-40&limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:41.591238 kubelet[2280]: E0212 21:58:41.591241    2280 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-40&limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:41.644053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452806263.mount: Deactivated successfully.
Feb 12 21:58:41.655786 env[1643]: time="2024-02-12T21:58:41.655735257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:41.657646 env[1643]: time="2024-02-12T21:58:41.657611451Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:41.663277 env[1643]: time="2024-02-12T21:58:41.663230130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:41.664905 env[1643]: time="2024-02-12T21:58:41.664862980Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:41.672796 env[1643]: time="2024-02-12T21:58:41.672748091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:41.674262 env[1643]: time="2024-02-12T21:58:41.674193298Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:41.675882 env[1643]: time="2024-02-12T21:58:41.675847246Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:41.679395 env[1643]: time="2024-02-12T21:58:41.679355405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:41.680967 env[1643]: time="2024-02-12T21:58:41.680929639Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:41.682930 env[1643]: time="2024-02-12T21:58:41.682898573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:41.700565 env[1643]: time="2024-02-12T21:58:41.700521632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:41.703560 kubelet[2280]: W0212 21:58:41.703508    2280 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.21.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:41.704094 kubelet[2280]: E0212 21:58:41.703575    2280 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:41.715498 env[1643]: time="2024-02-12T21:58:41.715427983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:58:41.754270 env[1643]: time="2024-02-12T21:58:41.748876098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:58:41.754270 env[1643]: time="2024-02-12T21:58:41.748933412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:58:41.754270 env[1643]: time="2024-02-12T21:58:41.748951152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:58:41.754270 env[1643]: time="2024-02-12T21:58:41.749148662Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc411f7b8bf64a121cc61b1666856af8afb84c9d8fbdd927197159e4e3023ece pid=2323 runtime=io.containerd.runc.v2
Feb 12 21:58:41.754594 env[1643]: time="2024-02-12T21:58:41.749888959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:58:41.754594 env[1643]: time="2024-02-12T21:58:41.749926621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:58:41.754594 env[1643]: time="2024-02-12T21:58:41.749941951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:58:41.755212 env[1643]: time="2024-02-12T21:58:41.755154313Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b559954e1937a01f6d89d2e0040bfb8c0b8a886ecbafdeb622daff9661111b3 pid=2325 runtime=io.containerd.runc.v2
Feb 12 21:58:41.777063 env[1643]: time="2024-02-12T21:58:41.776876111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:58:41.777245 env[1643]: time="2024-02-12T21:58:41.777097856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:58:41.777245 env[1643]: time="2024-02-12T21:58:41.777131802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:58:41.777529 env[1643]: time="2024-02-12T21:58:41.777344370Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7619b0bbf5d8b81bb78359e1f393dac63d7d5e7328f7e750b69558bc2f72ce8b pid=2361 runtime=io.containerd.runc.v2
Feb 12 21:58:41.794664 systemd[1]: Started cri-containerd-dc411f7b8bf64a121cc61b1666856af8afb84c9d8fbdd927197159e4e3023ece.scope.
Feb 12 21:58:41.818803 systemd[1]: Started cri-containerd-8b559954e1937a01f6d89d2e0040bfb8c0b8a886ecbafdeb622daff9661111b3.scope.
Feb 12 21:58:41.864735 systemd[1]: Started cri-containerd-7619b0bbf5d8b81bb78359e1f393dac63d7d5e7328f7e750b69558bc2f72ce8b.scope.
Feb 12 21:58:41.914321 kubelet[2280]: W0212 21:58:41.914214    2280 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.21.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:41.915157 kubelet[2280]: E0212 21:58:41.915063    2280 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:41.943714 env[1643]: time="2024-02-12T21:58:41.943668122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-40,Uid:4c246ca3f3f337111d0911412f829f06,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc411f7b8bf64a121cc61b1666856af8afb84c9d8fbdd927197159e4e3023ece\""
Feb 12 21:58:41.948491 env[1643]: time="2024-02-12T21:58:41.948430588Z" level=info msg="CreateContainer within sandbox \"dc411f7b8bf64a121cc61b1666856af8afb84c9d8fbdd927197159e4e3023ece\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 12 21:58:41.954108 env[1643]: time="2024-02-12T21:58:41.954065448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-40,Uid:1543691372f318e5a8aed563bc8a302a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b559954e1937a01f6d89d2e0040bfb8c0b8a886ecbafdeb622daff9661111b3\""
Feb 12 21:58:41.959210 env[1643]: time="2024-02-12T21:58:41.958987787Z" level=info msg="CreateContainer within sandbox \"8b559954e1937a01f6d89d2e0040bfb8c0b8a886ecbafdeb622daff9661111b3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 12 21:58:41.992968 env[1643]: time="2024-02-12T21:58:41.992913672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-40,Uid:f78ed2ffa13c495d032edc6ca913c319,Namespace:kube-system,Attempt:0,} returns sandbox id \"7619b0bbf5d8b81bb78359e1f393dac63d7d5e7328f7e750b69558bc2f72ce8b\""
Feb 12 21:58:42.001108 env[1643]: time="2024-02-12T21:58:42.001073155Z" level=info msg="CreateContainer within sandbox \"7619b0bbf5d8b81bb78359e1f393dac63d7d5e7328f7e750b69558bc2f72ce8b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 12 21:58:42.045619 env[1643]: time="2024-02-12T21:58:42.045500995Z" level=info msg="CreateContainer within sandbox \"8b559954e1937a01f6d89d2e0040bfb8c0b8a886ecbafdeb622daff9661111b3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b4abe9f19327d745f008305e9bf8d5e770483f5377f737037dc2baec0fb0e1bd\""
Feb 12 21:58:42.047927 env[1643]: time="2024-02-12T21:58:42.047841643Z" level=info msg="StartContainer for \"b4abe9f19327d745f008305e9bf8d5e770483f5377f737037dc2baec0fb0e1bd\""
Feb 12 21:58:42.054467 env[1643]: time="2024-02-12T21:58:42.054345019Z" level=info msg="CreateContainer within sandbox \"dc411f7b8bf64a121cc61b1666856af8afb84c9d8fbdd927197159e4e3023ece\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"60d3ca9ac3a5ea1696b51fe20d0c3cc7c598c3d368fb0177656cfe17f29ca0fe\""
Feb 12 21:58:42.055646 env[1643]: time="2024-02-12T21:58:42.055608567Z" level=info msg="StartContainer for \"60d3ca9ac3a5ea1696b51fe20d0c3cc7c598c3d368fb0177656cfe17f29ca0fe\""
Feb 12 21:58:42.057573 env[1643]: time="2024-02-12T21:58:42.057533765Z" level=info msg="CreateContainer within sandbox \"7619b0bbf5d8b81bb78359e1f393dac63d7d5e7328f7e750b69558bc2f72ce8b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed118ab8e2741405a7ced194621c1db50ed03500a157f262de1a08b8afb6411c\""
Feb 12 21:58:42.058686 env[1643]: time="2024-02-12T21:58:42.058652872Z" level=info msg="StartContainer for \"ed118ab8e2741405a7ced194621c1db50ed03500a157f262de1a08b8afb6411c\""
Feb 12 21:58:42.084316 kubelet[2280]: E0212 21:58:42.083595    2280 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-40?timeout=10s\": dial tcp 172.31.21.40:6443: connect: connection refused" interval="1.6s"
Feb 12 21:58:42.090207 systemd[1]: Started cri-containerd-b4abe9f19327d745f008305e9bf8d5e770483f5377f737037dc2baec0fb0e1bd.scope.
Feb 12 21:58:42.098339 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 12 21:58:42.114923 systemd[1]: Started cri-containerd-60d3ca9ac3a5ea1696b51fe20d0c3cc7c598c3d368fb0177656cfe17f29ca0fe.scope.
Feb 12 21:58:42.139501 systemd[1]: Started cri-containerd-ed118ab8e2741405a7ced194621c1db50ed03500a157f262de1a08b8afb6411c.scope.
Feb 12 21:58:42.194578 kubelet[2280]: I0212 21:58:42.193785    2280 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-40"
Feb 12 21:58:42.194578 kubelet[2280]: E0212 21:58:42.194549    2280 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.40:6443/api/v1/nodes\": dial tcp 172.31.21.40:6443: connect: connection refused" node="ip-172-31-21-40"
Feb 12 21:58:42.262212 env[1643]: time="2024-02-12T21:58:42.261687204Z" level=info msg="StartContainer for \"b4abe9f19327d745f008305e9bf8d5e770483f5377f737037dc2baec0fb0e1bd\" returns successfully"
Feb 12 21:58:42.276151 env[1643]: time="2024-02-12T21:58:42.276097210Z" level=info msg="StartContainer for \"60d3ca9ac3a5ea1696b51fe20d0c3cc7c598c3d368fb0177656cfe17f29ca0fe\" returns successfully"
Feb 12 21:58:42.303843 env[1643]: time="2024-02-12T21:58:42.303781709Z" level=info msg="StartContainer for \"ed118ab8e2741405a7ced194621c1db50ed03500a157f262de1a08b8afb6411c\" returns successfully"
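(The sequence above is the CRI call chain for each static pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer launches it; each cri-containerd-*.scope started by systemd is the shim for one of these. A compressed sketch of the same three calls over the CRI gRPC API, assuming containerd's socket and the generated client in k8s.io/cri-api; the empty configs are placeholders a real runtime would reject:)

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtime.NewRuntimeServiceClient(conn)
    	ctx := context.Background()

    	// 1. RunPodSandbox -> sandbox id (the long hex ids in the log).
    	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{
    		Config: &runtime.PodSandboxConfig{ /* pod metadata elided */ },
    	})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 2. CreateContainer within that sandbox -> container id.
    	c, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config:       &runtime.ContainerConfig{ /* image, command elided */ },
    	})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 3. StartContainer -> "StartContainer ... returns successfully".
    	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{
    		ContainerId: c.ContainerId,
    	}); err != nil {
    		log.Fatal(err)
    	}
    }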
Feb 12 21:58:42.801457 kubelet[2280]: E0212 21:58:42.801413    2280 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:43.309120 kubelet[2280]: W0212 21:58:43.309073    2280 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.21.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-40&limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:43.309417 kubelet[2280]: E0212 21:58:43.309406    2280 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-40&limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:43.323576 kubelet[2280]: W0212 21:58:43.323535    2280 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.21.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:43.323784 kubelet[2280]: E0212 21:58:43.323772    2280 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.40:6443: connect: connection refused
Feb 12 21:58:43.797209 kubelet[2280]: I0212 21:58:43.797180    2280 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-40"
Feb 12 21:58:45.218478 amazon-ssm-agent[1699]: 2024-02-12 21:58:45 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Feb 12 21:58:46.128154 kubelet[2280]: E0212 21:58:46.128116    2280 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-40\" not found" node="ip-172-31-21-40"
Feb 12 21:58:46.217636 kubelet[2280]: I0212 21:58:46.217579    2280 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-21-40"
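(Registration follows the same retry pattern: each "Attempting to register node" above is a POST to /api/v1/nodes, refused until the kube-apiserver container started at 21:58:42 begins serving, after which the node object is accepted here at 21:58:46. With client-go the call being retried looks roughly like this; credentials and node status are elided, and this is illustrative rather than kubelet_node_status.go itself:)

    package main

    import (
    	"context"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{Host: "https://172.31.21.40:6443" /* credentials elided */}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "ip-172-31-21-40"}}
    	// Fails with "connect: connection refused" until the apiserver
    	// is listening on 6443; the kubelet simply retries.
    	if _, err := cs.CoreV1().Nodes().Create(context.Background(), node, metav1.CreateOptions{}); err != nil {
    		log.Fatal(err)
    	}
    }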
Feb 12 21:58:46.242314 kubelet[2280]: E0212 21:58:46.241864    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:46.269509 kubelet[2280]: E0212 21:58:46.269338    2280 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-40.17b33c6c8fa07ae0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-40", UID:"ip-172-31-21-40", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-40"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 58, 40, 667343584, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 58, 40, 667343584, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 21:58:46.342676 kubelet[2280]: E0212 21:58:46.342499    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:46.443299 kubelet[2280]: E0212 21:58:46.443184    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:46.543764 kubelet[2280]: E0212 21:58:46.543726    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:46.644318 kubelet[2280]: E0212 21:58:46.644273    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:46.744695 kubelet[2280]: E0212 21:58:46.744656    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:46.845518 kubelet[2280]: E0212 21:58:46.845471    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:46.946193 kubelet[2280]: E0212 21:58:46.946150    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:47.047230 kubelet[2280]: E0212 21:58:47.047117    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:47.148269 kubelet[2280]: E0212 21:58:47.148232    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:47.249148 kubelet[2280]: E0212 21:58:47.249110    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:47.349860 kubelet[2280]: E0212 21:58:47.349727    2280 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-21-40\" not found"
Feb 12 21:58:47.670221 kubelet[2280]: I0212 21:58:47.670108    2280 apiserver.go:52] "Watching apiserver"
Feb 12 21:58:47.678993 kubelet[2280]: I0212 21:58:47.678950    2280 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 12 21:58:47.726306 kubelet[2280]: I0212 21:58:47.726263    2280 reconciler.go:41] "Reconciler: start to sync state"
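("Reconciler: start to sync state" is the volume manager comparing the desired state, filled in by the populator logged just above, against the volumes actually mounted, and acting on the difference. The shape of that loop in toy form; the types are illustrative, and the real reconciler also handles attach/detach and device paths:)

    package main

    import "fmt"

    // A toy desired/actual reconcile pass: mount what is desired but not
    // mounted, unmount what is mounted but no longer desired. The real
    // volume manager runs this continuously on a short period.
    func reconcile(desired, actual map[string]bool) {
    	for vol := range desired {
    		if !actual[vol] {
    			fmt.Println("mounting", vol)
    			actual[vol] = true
    		}
    	}
    	for vol := range actual {
    		if !desired[vol] {
    			fmt.Println("unmounting", vol)
    			delete(actual, vol)
    		}
    	}
    }

    func main() {
    	desired := map[string]bool{"kubeconfig": true, "k8s-certs": true}
    	actual := map[string]bool{"stale-volume": true}
    	reconcile(desired, actual)
    }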
Feb 12 21:58:49.071596 systemd[1]: Reloading.
Feb 12 21:58:49.177569 /usr/lib/systemd/system-generators/torcx-generator[2573]: time="2024-02-12T21:58:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:58:49.191566 /usr/lib/systemd/system-generators/torcx-generator[2573]: time="2024-02-12T21:58:49Z" level=info msg="torcx already run"
Feb 12 21:58:49.347298 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:58:49.347367 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:58:49.371948 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:58:49.502079 systemd[1]: Stopping kubelet.service...
Feb 12 21:58:49.502372 kubelet[2280]: I0212 21:58:49.502344    2280 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 21:58:49.521254 systemd[1]: kubelet.service: Deactivated successfully.
Feb 12 21:58:49.521542 systemd[1]: Stopped kubelet.service.
Feb 12 21:58:49.523742 systemd[1]: Started kubelet.service.
Feb 12 21:58:49.642734 kubelet[2625]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:58:49.642734 kubelet[2625]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 21:58:49.642734 kubelet[2625]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:58:49.642734 kubelet[2625]: I0212 21:58:49.642145    2625 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 21:58:49.651945 kubelet[2625]: I0212 21:58:49.651713    2625 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 12 21:58:49.651945 kubelet[2625]: I0212 21:58:49.651742    2625 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 21:58:49.652267 kubelet[2625]: I0212 21:58:49.652126    2625 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 12 21:58:49.655116 kubelet[2625]: I0212 21:58:49.654715    2625 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 12 21:58:49.656613 kubelet[2625]: I0212 21:58:49.656586    2625 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 21:58:49.658200 sudo[2637]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 12 21:58:49.658546 sudo[2637]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 12 21:58:49.663078 kubelet[2625]: I0212 21:58:49.663057    2625 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 12 21:58:49.664649 kubelet[2625]: I0212 21:58:49.663783    2625 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 21:58:49.664649 kubelet[2625]: I0212 21:58:49.663940    2625 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 21:58:49.664649 kubelet[2625]: I0212 21:58:49.664057    2625 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 21:58:49.664649 kubelet[2625]: I0212 21:58:49.664074    2625 container_manager_linux.go:302] "Creating device plugin manager"
Feb 12 21:58:49.664649 kubelet[2625]: I0212 21:58:49.664123    2625 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:58:49.672309 kubelet[2625]: I0212 21:58:49.672286    2625 kubelet.go:405] "Attempting to sync node with API server"
Feb 12 21:58:49.672492 kubelet[2625]: I0212 21:58:49.672482    2625 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 21:58:49.672588 kubelet[2625]: I0212 21:58:49.672580    2625 kubelet.go:309] "Adding apiserver pod source"
Feb 12 21:58:49.672658 kubelet[2625]: I0212 21:58:49.672650    2625 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 21:58:49.675728 kubelet[2625]: I0212 21:58:49.675707    2625 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 21:58:49.676891 kubelet[2625]: I0212 21:58:49.676871    2625 server.go:1168] "Started kubelet"
Feb 12 21:58:49.724969 kubelet[2625]: I0212 21:58:49.724943    2625 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 21:58:49.726305 kubelet[2625]: I0212 21:58:49.726285    2625 server.go:461] "Adding debug handlers to kubelet server"
Feb 12 21:58:49.740195 kubelet[2625]: E0212 21:58:49.736876    2625 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 21:58:49.740403 kubelet[2625]: E0212 21:58:49.740388    2625 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 21:58:49.740515 kubelet[2625]: I0212 21:58:49.725179    2625 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 21:58:49.740718 kubelet[2625]: I0212 21:58:49.740699    2625 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 21:58:49.761766 kubelet[2625]: I0212 21:58:49.761736    2625 volume_manager.go:284] "Starting Kubelet Volume Manager"
Feb 12 21:58:49.766714 kubelet[2625]: I0212 21:58:49.761963    2625 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Feb 12 21:58:49.819843 kubelet[2625]: I0212 21:58:49.819820    2625 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 21:58:49.821198 kubelet[2625]: I0212 21:58:49.821181    2625 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 21:58:49.821326 kubelet[2625]: I0212 21:58:49.821317    2625 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 12 21:58:49.821417 kubelet[2625]: I0212 21:58:49.821407    2625 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 12 21:58:49.821554 kubelet[2625]: E0212 21:58:49.821545    2625 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 21:58:49.882552 kubelet[2625]: I0212 21:58:49.882522    2625 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-40"
Feb 12 21:58:49.910150 kubelet[2625]: I0212 21:58:49.910049    2625 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-21-40"
Feb 12 21:58:49.910150 kubelet[2625]: I0212 21:58:49.910150    2625 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-21-40"
Feb 12 21:58:49.922918 kubelet[2625]: E0212 21:58:49.922889    2625 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 12 21:58:49.958372 kubelet[2625]: I0212 21:58:49.958167    2625 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 21:58:49.958372 kubelet[2625]: I0212 21:58:49.958376    2625 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 21:58:49.958602 kubelet[2625]: I0212 21:58:49.958400    2625 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:58:49.958602 kubelet[2625]: I0212 21:58:49.958594    2625 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 12 21:58:49.958690 kubelet[2625]: I0212 21:58:49.958611    2625 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 12 21:58:49.958690 kubelet[2625]: I0212 21:58:49.958620    2625 policy_none.go:49] "None policy: Start"
Feb 12 21:58:49.960562 kubelet[2625]: I0212 21:58:49.960540    2625 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 21:58:49.960660 kubelet[2625]: I0212 21:58:49.960573    2625 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 21:58:49.960772 kubelet[2625]: I0212 21:58:49.960759    2625 state_mem.go:75] "Updated machine memory state"
Feb 12 21:58:49.969313 kubelet[2625]: I0212 21:58:49.969285    2625 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 21:58:49.972140 kubelet[2625]: I0212 21:58:49.972114    2625 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 21:58:50.124045 kubelet[2625]: I0212 21:58:50.124008    2625 topology_manager.go:212] "Topology Admit Handler"
Feb 12 21:58:50.124203 kubelet[2625]: I0212 21:58:50.124118    2625 topology_manager.go:212] "Topology Admit Handler"
Feb 12 21:58:50.124203 kubelet[2625]: I0212 21:58:50.124158    2625 topology_manager.go:212] "Topology Admit Handler"
Feb 12 21:58:50.169256 kubelet[2625]: E0212 21:58:50.169144    2625 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-21-40\" already exists" pod="kube-system/kube-scheduler-ip-172-31-21-40"
Feb 12 21:58:50.173824 kubelet[2625]: I0212 21:58:50.173784    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c246ca3f3f337111d0911412f829f06-ca-certs\") pod \"kube-apiserver-ip-172-31-21-40\" (UID: \"4c246ca3f3f337111d0911412f829f06\") " pod="kube-system/kube-apiserver-ip-172-31-21-40"
Feb 12 21:58:50.174061 kubelet[2625]: I0212 21:58:50.174049    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1543691372f318e5a8aed563bc8a302a-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-40\" (UID: \"1543691372f318e5a8aed563bc8a302a\") " pod="kube-system/kube-controller-manager-ip-172-31-21-40"
Feb 12 21:58:50.174286 kubelet[2625]: I0212 21:58:50.174275    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1543691372f318e5a8aed563bc8a302a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-40\" (UID: \"1543691372f318e5a8aed563bc8a302a\") " pod="kube-system/kube-controller-manager-ip-172-31-21-40"
Feb 12 21:58:50.174405 kubelet[2625]: I0212 21:58:50.174397    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1543691372f318e5a8aed563bc8a302a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-40\" (UID: \"1543691372f318e5a8aed563bc8a302a\") " pod="kube-system/kube-controller-manager-ip-172-31-21-40"
Feb 12 21:58:50.174655 kubelet[2625]: I0212 21:58:50.174634    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1543691372f318e5a8aed563bc8a302a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-40\" (UID: \"1543691372f318e5a8aed563bc8a302a\") " pod="kube-system/kube-controller-manager-ip-172-31-21-40"
Feb 12 21:58:50.174773 kubelet[2625]: I0212 21:58:50.174765    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f78ed2ffa13c495d032edc6ca913c319-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-40\" (UID: \"f78ed2ffa13c495d032edc6ca913c319\") " pod="kube-system/kube-scheduler-ip-172-31-21-40"
Feb 12 21:58:50.174893 kubelet[2625]: I0212 21:58:50.174884    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1543691372f318e5a8aed563bc8a302a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-40\" (UID: \"1543691372f318e5a8aed563bc8a302a\") " pod="kube-system/kube-controller-manager-ip-172-31-21-40"
Feb 12 21:58:50.175003 kubelet[2625]: I0212 21:58:50.174996    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c246ca3f3f337111d0911412f829f06-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-40\" (UID: \"4c246ca3f3f337111d0911412f829f06\") " pod="kube-system/kube-apiserver-ip-172-31-21-40"
Feb 12 21:58:50.175114 kubelet[2625]: I0212 21:58:50.175104    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c246ca3f3f337111d0911412f829f06-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-40\" (UID: \"4c246ca3f3f337111d0911412f829f06\") " pod="kube-system/kube-apiserver-ip-172-31-21-40"
Feb 12 21:58:50.637719 sudo[2637]: pam_unix(sudo:session): session closed for user root
Feb 12 21:58:50.679866 kubelet[2625]: I0212 21:58:50.679827    2625 apiserver.go:52] "Watching apiserver"
Feb 12 21:58:50.767952 kubelet[2625]: I0212 21:58:50.767913    2625 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 12 21:58:50.779424 kubelet[2625]: I0212 21:58:50.779383    2625 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 21:58:51.063868 kubelet[2625]: I0212 21:58:51.063732    2625 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-40" podStartSLOduration=1.062909091 podCreationTimestamp="2024-02-12 21:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:58:51.011993508 +0000 UTC m=+1.478685708" watchObservedRunningTime="2024-02-12 21:58:51.062909091 +0000 UTC m=+1.529601282"
Feb 12 21:58:51.126258 kubelet[2625]: I0212 21:58:51.126204    2625 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-40" podStartSLOduration=1.126121519 podCreationTimestamp="2024-02-12 21:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:58:51.064571579 +0000 UTC m=+1.531263777" watchObservedRunningTime="2024-02-12 21:58:51.126121519 +0000 UTC m=+1.592813712"
Feb 12 21:58:52.506991 kubelet[2625]: I0212 21:58:52.506952    2625 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-40" podStartSLOduration=4.506866039 podCreationTimestamp="2024-02-12 21:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:58:51.127175046 +0000 UTC m=+1.593867240" watchObservedRunningTime="2024-02-12 21:58:52.506866039 +0000 UTC m=+2.973558237"
Feb 12 21:58:52.782496 sudo[1876]: pam_unix(sudo:session): session closed for user root
Feb 12 21:58:52.806093 sshd[1873]: pam_unix(sshd:session): session closed for user core
Feb 12 21:58:52.809668 systemd[1]: sshd@4-172.31.21.40:22-139.178.89.65:32882.service: Deactivated successfully.
Feb 12 21:58:52.811180 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 21:58:52.811546 systemd[1]: session-5.scope: Consumed 4.978s CPU time.
Feb 12 21:58:52.812093 systemd-logind[1633]: Session 5 logged out. Waiting for processes to exit.
Feb 12 21:58:52.813902 systemd-logind[1633]: Removed session 5.
Feb 12 21:58:57.004135 update_engine[1634]: I0212 21:58:57.004081  1634 update_attempter.cc:509] Updating boot flags...
Feb 12 21:59:03.095207 kubelet[2625]: I0212 21:59:03.094777    2625 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 12 21:59:03.101135 env[1643]: time="2024-02-12T21:59:03.101083334Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 21:59:03.104541 kubelet[2625]: I0212 21:59:03.103423    2625 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 12 21:59:03.469384 kubelet[2625]: I0212 21:59:03.469248    2625 topology_manager.go:212] "Topology Admit Handler"
Feb 12 21:59:03.475462 kubelet[2625]: I0212 21:59:03.475422    2625 topology_manager.go:212] "Topology Admit Handler"
Feb 12 21:59:03.479584 systemd[1]: Created slice kubepods-burstable-pod74dc5bf2_079f_4981_bf0f_c4dab63734f1.slice.
Feb 12 21:59:03.491084 systemd[1]: Created slice kubepods-besteffort-podc70e664f_7df7_485d_8211_ac56a7b21605.slice.
Feb 12 21:59:03.492670 kubelet[2625]: I0212 21:59:03.492642    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-lib-modules\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.492841 kubelet[2625]: I0212 21:59:03.492691    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cilium-config-path\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.492841 kubelet[2625]: I0212 21:59:03.492736    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-xtables-lock\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.492841 kubelet[2625]: I0212 21:59:03.492763    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cilium-run\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.494205 kubelet[2625]: I0212 21:59:03.493958    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-hostproc\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.495584 kubelet[2625]: I0212 21:59:03.495558    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-etc-cni-netd\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.495715 kubelet[2625]: I0212 21:59:03.495605    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74dc5bf2-079f-4981-bf0f-c4dab63734f1-hubble-tls\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.495715 kubelet[2625]: I0212 21:59:03.495635    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgt5k\" (UniqueName: \"kubernetes.io/projected/74dc5bf2-079f-4981-bf0f-c4dab63734f1-kube-api-access-xgt5k\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.495715 kubelet[2625]: I0212 21:59:03.495665    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cilium-cgroup\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.495715 kubelet[2625]: I0212 21:59:03.495695    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-host-proc-sys-net\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.495966 kubelet[2625]: I0212 21:59:03.495725    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-bpf-maps\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.495966 kubelet[2625]: I0212 21:59:03.495756    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74dc5bf2-079f-4981-bf0f-c4dab63734f1-clustermesh-secrets\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.495966 kubelet[2625]: I0212 21:59:03.495849    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-host-proc-sys-kernel\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.495966 kubelet[2625]: I0212 21:59:03.495893    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c70e664f-7df7-485d-8211-ac56a7b21605-xtables-lock\") pod \"kube-proxy-7kk8n\" (UID: \"c70e664f-7df7-485d-8211-ac56a7b21605\") " pod="kube-system/kube-proxy-7kk8n"
Feb 12 21:59:03.495966 kubelet[2625]: I0212 21:59:03.495927    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c70e664f-7df7-485d-8211-ac56a7b21605-lib-modules\") pod \"kube-proxy-7kk8n\" (UID: \"c70e664f-7df7-485d-8211-ac56a7b21605\") " pod="kube-system/kube-proxy-7kk8n"
Feb 12 21:59:03.495966 kubelet[2625]: I0212 21:59:03.495959    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cni-path\") pod \"cilium-dfhx4\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") " pod="kube-system/cilium-dfhx4"
Feb 12 21:59:03.496244 kubelet[2625]: I0212 21:59:03.495991    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c70e664f-7df7-485d-8211-ac56a7b21605-kube-proxy\") pod \"kube-proxy-7kk8n\" (UID: \"c70e664f-7df7-485d-8211-ac56a7b21605\") " pod="kube-system/kube-proxy-7kk8n"
Feb 12 21:59:03.496244 kubelet[2625]: I0212 21:59:03.496023    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6mkl\" (UniqueName: \"kubernetes.io/projected/c70e664f-7df7-485d-8211-ac56a7b21605-kube-api-access-n6mkl\") pod \"kube-proxy-7kk8n\" (UID: \"c70e664f-7df7-485d-8211-ac56a7b21605\") " pod="kube-system/kube-proxy-7kk8n"
Feb 12 21:59:03.625856 kubelet[2625]: E0212 21:59:03.625807    2625 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 12 21:59:03.626171 kubelet[2625]: E0212 21:59:03.626154    2625 projected.go:198] Error preparing data for projected volume kube-api-access-n6mkl for pod kube-system/kube-proxy-7kk8n: configmap "kube-root-ca.crt" not found
Feb 12 21:59:03.626382 kubelet[2625]: E0212 21:59:03.626370    2625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c70e664f-7df7-485d-8211-ac56a7b21605-kube-api-access-n6mkl podName:c70e664f-7df7-485d-8211-ac56a7b21605 nodeName:}" failed. No retries permitted until 2024-02-12 21:59:04.126342273 +0000 UTC m=+14.593034460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n6mkl" (UniqueName: "kubernetes.io/projected/c70e664f-7df7-485d-8211-ac56a7b21605-kube-api-access-n6mkl") pod "kube-proxy-7kk8n" (UID: "c70e664f-7df7-485d-8211-ac56a7b21605") : configmap "kube-root-ca.crt" not found
Feb 12 21:59:03.633383 kubelet[2625]: E0212 21:59:03.633350    2625 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 12 21:59:03.633653 kubelet[2625]: E0212 21:59:03.633641    2625 projected.go:198] Error preparing data for projected volume kube-api-access-xgt5k for pod kube-system/cilium-dfhx4: configmap "kube-root-ca.crt" not found
Feb 12 21:59:03.633806 kubelet[2625]: E0212 21:59:03.633789    2625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74dc5bf2-079f-4981-bf0f-c4dab63734f1-kube-api-access-xgt5k podName:74dc5bf2-079f-4981-bf0f-c4dab63734f1 nodeName:}" failed. No retries permitted until 2024-02-12 21:59:04.133764315 +0000 UTC m=+14.600456493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xgt5k" (UniqueName: "kubernetes.io/projected/74dc5bf2-079f-4981-bf0f-c4dab63734f1-kube-api-access-xgt5k") pod "cilium-dfhx4" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1") : configmap "kube-root-ca.crt" not found
Feb 12 21:59:03.976555 kubelet[2625]: I0212 21:59:03.976461    2625 topology_manager.go:212] "Topology Admit Handler"
Feb 12 21:59:03.984333 systemd[1]: Created slice kubepods-besteffort-poddc17f506_5ffa_4abb_8f2f_4e393304d070.slice.
Feb 12 21:59:04.003455 kubelet[2625]: I0212 21:59:03.999088    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc17f506-5ffa-4abb-8f2f-4e393304d070-cilium-config-path\") pod \"cilium-operator-574c4bb98d-glpmd\" (UID: \"dc17f506-5ffa-4abb-8f2f-4e393304d070\") " pod="kube-system/cilium-operator-574c4bb98d-glpmd"
Feb 12 21:59:04.003657 kubelet[2625]: I0212 21:59:04.003606    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xc2s\" (UniqueName: \"kubernetes.io/projected/dc17f506-5ffa-4abb-8f2f-4e393304d070-kube-api-access-2xc2s\") pod \"cilium-operator-574c4bb98d-glpmd\" (UID: \"dc17f506-5ffa-4abb-8f2f-4e393304d070\") " pod="kube-system/cilium-operator-574c4bb98d-glpmd"
Feb 12 21:59:04.295274 env[1643]: time="2024-02-12T21:59:04.295227686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-glpmd,Uid:dc17f506-5ffa-4abb-8f2f-4e393304d070,Namespace:kube-system,Attempt:0,}"
Feb 12 21:59:04.329025 env[1643]: time="2024-02-12T21:59:04.328928362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:59:04.329478 env[1643]: time="2024-02-12T21:59:04.328980018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:59:04.329478 env[1643]: time="2024-02-12T21:59:04.328996956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:59:04.329918 env[1643]: time="2024-02-12T21:59:04.329850946Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15 pid=2888 runtime=io.containerd.runc.v2
Feb 12 21:59:04.353135 systemd[1]: Started cri-containerd-488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15.scope.
Feb 12 21:59:04.387560 env[1643]: time="2024-02-12T21:59:04.387513218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfhx4,Uid:74dc5bf2-079f-4981-bf0f-c4dab63734f1,Namespace:kube-system,Attempt:0,}"
Feb 12 21:59:04.402164 env[1643]: time="2024-02-12T21:59:04.402119913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7kk8n,Uid:c70e664f-7df7-485d-8211-ac56a7b21605,Namespace:kube-system,Attempt:0,}"
Feb 12 21:59:04.441945 env[1643]: time="2024-02-12T21:59:04.441314719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:59:04.441945 env[1643]: time="2024-02-12T21:59:04.441368139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:59:04.441945 env[1643]: time="2024-02-12T21:59:04.441399682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:59:04.453337 env[1643]: time="2024-02-12T21:59:04.445109094Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2 pid=2920 runtime=io.containerd.runc.v2
Feb 12 21:59:04.471488 env[1643]: time="2024-02-12T21:59:04.468630476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-glpmd,Uid:dc17f506-5ffa-4abb-8f2f-4e393304d070,Namespace:kube-system,Attempt:0,} returns sandbox id \"488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15\""
Feb 12 21:59:04.480208 env[1643]: time="2024-02-12T21:59:04.479407422Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 12 21:59:04.492241 env[1643]: time="2024-02-12T21:59:04.492107968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:59:04.492410 env[1643]: time="2024-02-12T21:59:04.492247394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:59:04.492410 env[1643]: time="2024-02-12T21:59:04.492281534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:59:04.492983 env[1643]: time="2024-02-12T21:59:04.492908262Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fc8a10751bb3ba17f96499659157737dde8296c9363d033d636aae3cf05e963 pid=2950 runtime=io.containerd.runc.v2
Feb 12 21:59:04.503063 systemd[1]: Started cri-containerd-52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2.scope.
Feb 12 21:59:04.560077 systemd[1]: Started cri-containerd-6fc8a10751bb3ba17f96499659157737dde8296c9363d033d636aae3cf05e963.scope.
Feb 12 21:59:04.595828 env[1643]: time="2024-02-12T21:59:04.595778390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfhx4,Uid:74dc5bf2-079f-4981-bf0f-c4dab63734f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\""
Feb 12 21:59:04.635114 env[1643]: time="2024-02-12T21:59:04.634970440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7kk8n,Uid:c70e664f-7df7-485d-8211-ac56a7b21605,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fc8a10751bb3ba17f96499659157737dde8296c9363d033d636aae3cf05e963\""
Feb 12 21:59:04.642009 env[1643]: time="2024-02-12T21:59:04.641955414Z" level=info msg="CreateContainer within sandbox \"6fc8a10751bb3ba17f96499659157737dde8296c9363d033d636aae3cf05e963\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 12 21:59:04.665577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983771417.mount: Deactivated successfully.
Feb 12 21:59:04.679011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664843136.mount: Deactivated successfully.
Feb 12 21:59:04.690723 env[1643]: time="2024-02-12T21:59:04.690681044Z" level=info msg="CreateContainer within sandbox \"6fc8a10751bb3ba17f96499659157737dde8296c9363d033d636aae3cf05e963\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8c63205b4bc223bd194b58913b91c0c240713eca396e682d282a67d98f0939cf\""
Feb 12 21:59:04.693459 env[1643]: time="2024-02-12T21:59:04.691752116Z" level=info msg="StartContainer for \"8c63205b4bc223bd194b58913b91c0c240713eca396e682d282a67d98f0939cf\""
Feb 12 21:59:04.745475 systemd[1]: Started cri-containerd-8c63205b4bc223bd194b58913b91c0c240713eca396e682d282a67d98f0939cf.scope.
Feb 12 21:59:04.815042 env[1643]: time="2024-02-12T21:59:04.814928010Z" level=info msg="StartContainer for \"8c63205b4bc223bd194b58913b91c0c240713eca396e682d282a67d98f0939cf\" returns successfully"
Feb 12 21:59:05.790159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327600402.mount: Deactivated successfully.
Feb 12 21:59:06.758070 env[1643]: time="2024-02-12T21:59:06.758018541Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:06.761421 env[1643]: time="2024-02-12T21:59:06.761375588Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:06.764065 env[1643]: time="2024-02-12T21:59:06.764021369Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:06.764874 env[1643]: time="2024-02-12T21:59:06.764833472Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 12 21:59:06.769781 env[1643]: time="2024-02-12T21:59:06.769738998Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 12 21:59:06.771479 env[1643]: time="2024-02-12T21:59:06.771426104Z" level=info msg="CreateContainer within sandbox \"488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 12 21:59:06.794666 env[1643]: time="2024-02-12T21:59:06.794573529Z" level=info msg="CreateContainer within sandbox \"488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\""
Feb 12 21:59:06.798356 env[1643]: time="2024-02-12T21:59:06.795871163Z" level=info msg="StartContainer for \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\""
Feb 12 21:59:06.833208 systemd[1]: Started cri-containerd-ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93.scope.
Feb 12 21:59:06.930033 env[1643]: time="2024-02-12T21:59:06.929885880Z" level=info msg="StartContainer for \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\" returns successfully"
Feb 12 21:59:07.786827 systemd[1]: run-containerd-runc-k8s.io-ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93-runc.ouLauY.mount: Deactivated successfully.
Feb 12 21:59:08.045648 kubelet[2625]: I0212 21:59:08.044112    2625 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7kk8n" podStartSLOduration=5.043890671 podCreationTimestamp="2024-02-12 21:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:59:04.943354304 +0000 UTC m=+15.410046503" watchObservedRunningTime="2024-02-12 21:59:08.043890671 +0000 UTC m=+18.510582865"
Feb 12 21:59:08.045648 kubelet[2625]: I0212 21:59:08.044675    2625 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-glpmd" podStartSLOduration=2.753379704 podCreationTimestamp="2024-02-12 21:59:03 +0000 UTC" firstStartedPulling="2024-02-12 21:59:04.476607029 +0000 UTC m=+14.943299219" lastFinishedPulling="2024-02-12 21:59:06.767863734 +0000 UTC m=+17.234555926" observedRunningTime="2024-02-12 21:59:08.044506977 +0000 UTC m=+18.511199175" watchObservedRunningTime="2024-02-12 21:59:08.044636411 +0000 UTC m=+18.511328607"
Feb 12 21:59:13.713071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437432799.mount: Deactivated successfully.
Feb 12 21:59:18.146364 env[1643]: time="2024-02-12T21:59:18.146309740Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:18.150668 env[1643]: time="2024-02-12T21:59:18.150589527Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:18.153564 env[1643]: time="2024-02-12T21:59:18.153517408Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:59:18.154428 env[1643]: time="2024-02-12T21:59:18.154383202Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 12 21:59:18.160241 env[1643]: time="2024-02-12T21:59:18.160197732Z" level=info msg="CreateContainer within sandbox \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 21:59:18.182031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount943115259.mount: Deactivated successfully.
Feb 12 21:59:18.193871 env[1643]: time="2024-02-12T21:59:18.193815256Z" level=info msg="CreateContainer within sandbox \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767\""
Feb 12 21:59:18.194922 env[1643]: time="2024-02-12T21:59:18.194878121Z" level=info msg="StartContainer for \"7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767\""
Feb 12 21:59:18.228834 systemd[1]: Started cri-containerd-7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767.scope.
Feb 12 21:59:18.308679 env[1643]: time="2024-02-12T21:59:18.308629030Z" level=info msg="StartContainer for \"7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767\" returns successfully"
Feb 12 21:59:18.328538 systemd[1]: cri-containerd-7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767.scope: Deactivated successfully.
Feb 12 21:59:18.521422 env[1643]: time="2024-02-12T21:59:18.521371656Z" level=info msg="shim disconnected" id=7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767
Feb 12 21:59:18.521422 env[1643]: time="2024-02-12T21:59:18.521417406Z" level=warning msg="cleaning up after shim disconnected" id=7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767 namespace=k8s.io
Feb 12 21:59:18.521422 env[1643]: time="2024-02-12T21:59:18.521430088Z" level=info msg="cleaning up dead shim"
Feb 12 21:59:18.535697 env[1643]: time="2024-02-12T21:59:18.535599247Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:59:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3258 runtime=io.containerd.runc.v2\n"
Feb 12 21:59:19.017575 env[1643]: time="2024-02-12T21:59:19.017519536Z" level=info msg="CreateContainer within sandbox \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 21:59:19.040946 env[1643]: time="2024-02-12T21:59:19.040893027Z" level=info msg="CreateContainer within sandbox \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b\""
Feb 12 21:59:19.046139 env[1643]: time="2024-02-12T21:59:19.042978746Z" level=info msg="StartContainer for \"85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b\""
Feb 12 21:59:19.077629 systemd[1]: Started cri-containerd-85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b.scope.
Feb 12 21:59:19.118879 env[1643]: time="2024-02-12T21:59:19.118842401Z" level=info msg="StartContainer for \"85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b\" returns successfully"
Feb 12 21:59:19.133064 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 21:59:19.133412 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 21:59:19.134624 systemd[1]: Stopping systemd-sysctl.service...
Feb 12 21:59:19.137360 systemd[1]: Starting systemd-sysctl.service...
Feb 12 21:59:19.140263 systemd[1]: cri-containerd-85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b.scope: Deactivated successfully.
Feb 12 21:59:19.173537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767-rootfs.mount: Deactivated successfully.
Feb 12 21:59:19.192818 env[1643]: time="2024-02-12T21:59:19.192758752Z" level=info msg="shim disconnected" id=85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b
Feb 12 21:59:19.192818 env[1643]: time="2024-02-12T21:59:19.192815262Z" level=warning msg="cleaning up after shim disconnected" id=85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b namespace=k8s.io
Feb 12 21:59:19.193689 env[1643]: time="2024-02-12T21:59:19.192827159Z" level=info msg="cleaning up dead shim"
Feb 12 21:59:19.201903 systemd[1]: Finished systemd-sysctl.service.
Feb 12 21:59:19.208755 env[1643]: time="2024-02-12T21:59:19.208707805Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:59:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3322 runtime=io.containerd.runc.v2\n"
Feb 12 21:59:20.036635 env[1643]: time="2024-02-12T21:59:20.030968555Z" level=info msg="CreateContainer within sandbox \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 21:59:20.097368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471743065.mount: Deactivated successfully.
Feb 12 21:59:20.116544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4287322668.mount: Deactivated successfully.
Feb 12 21:59:20.117562 env[1643]: time="2024-02-12T21:59:20.116891813Z" level=info msg="CreateContainer within sandbox \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08\""
Feb 12 21:59:20.122246 env[1643]: time="2024-02-12T21:59:20.120883475Z" level=info msg="StartContainer for \"db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08\""
Feb 12 21:59:20.154566 systemd[1]: Started cri-containerd-db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08.scope.
Feb 12 21:59:20.208737 env[1643]: time="2024-02-12T21:59:20.208677340Z" level=info msg="StartContainer for \"db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08\" returns successfully"
Feb 12 21:59:20.223015 systemd[1]: cri-containerd-db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08.scope: Deactivated successfully.
Feb 12 21:59:20.251888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08-rootfs.mount: Deactivated successfully.
Feb 12 21:59:20.263800 env[1643]: time="2024-02-12T21:59:20.263745675Z" level=info msg="shim disconnected" id=db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08
Feb 12 21:59:20.263800 env[1643]: time="2024-02-12T21:59:20.263799759Z" level=warning msg="cleaning up after shim disconnected" id=db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08 namespace=k8s.io
Feb 12 21:59:20.264600 env[1643]: time="2024-02-12T21:59:20.263811265Z" level=info msg="cleaning up dead shim"
Feb 12 21:59:20.277192 env[1643]: time="2024-02-12T21:59:20.277128652Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:59:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3380 runtime=io.containerd.runc.v2\n"
Feb 12 21:59:21.018055 env[1643]: time="2024-02-12T21:59:21.017823674Z" level=info msg="CreateContainer within sandbox \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 21:59:21.045465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089507456.mount: Deactivated successfully.
Feb 12 21:59:21.063970 env[1643]: time="2024-02-12T21:59:21.063891785Z" level=info msg="CreateContainer within sandbox \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a\""
Feb 12 21:59:21.065410 env[1643]: time="2024-02-12T21:59:21.065321534Z" level=info msg="StartContainer for \"a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a\""
Feb 12 21:59:21.094309 systemd[1]: Started cri-containerd-a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a.scope.
Feb 12 21:59:21.143583 env[1643]: time="2024-02-12T21:59:21.143528267Z" level=info msg="StartContainer for \"a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a\" returns successfully"
Feb 12 21:59:21.147843 systemd[1]: cri-containerd-a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a.scope: Deactivated successfully.
Feb 12 21:59:21.179346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount528441722.mount: Deactivated successfully.
Feb 12 21:59:21.184302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a-rootfs.mount: Deactivated successfully.
Feb 12 21:59:21.206237 env[1643]: time="2024-02-12T21:59:21.205902033Z" level=info msg="shim disconnected" id=a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a
Feb 12 21:59:21.207038 env[1643]: time="2024-02-12T21:59:21.206237857Z" level=warning msg="cleaning up after shim disconnected" id=a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a namespace=k8s.io
Feb 12 21:59:21.207038 env[1643]: time="2024-02-12T21:59:21.206255211Z" level=info msg="cleaning up dead shim"
Feb 12 21:59:21.229265 env[1643]: time="2024-02-12T21:59:21.229216449Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:59:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3437 runtime=io.containerd.runc.v2\n"
Feb 12 21:59:22.028241 env[1643]: time="2024-02-12T21:59:22.028102810Z" level=info msg="CreateContainer within sandbox \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 21:59:22.063534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135295461.mount: Deactivated successfully.
Feb 12 21:59:22.083745 env[1643]: time="2024-02-12T21:59:22.083581480Z" level=info msg="CreateContainer within sandbox \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\""
Feb 12 21:59:22.094330 env[1643]: time="2024-02-12T21:59:22.090777517Z" level=info msg="StartContainer for \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\""
Feb 12 21:59:22.133483 systemd[1]: Started cri-containerd-671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611.scope.
Feb 12 21:59:22.224830 env[1643]: time="2024-02-12T21:59:22.224775866Z" level=info msg="StartContainer for \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\" returns successfully"
Feb 12 21:59:22.267662 systemd[1]: run-containerd-runc-k8s.io-671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611-runc.PYFefZ.mount: Deactivated successfully.
Feb 12 21:59:22.501017 kubelet[2625]: I0212 21:59:22.499871    2625 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 21:59:22.529211 kubelet[2625]: I0212 21:59:22.528326    2625 topology_manager.go:212] "Topology Admit Handler"
Feb 12 21:59:22.534261 kubelet[2625]: I0212 21:59:22.534235    2625 topology_manager.go:212] "Topology Admit Handler"
Feb 12 21:59:22.535596 systemd[1]: Created slice kubepods-burstable-pod856beb00_7dad_4702_b441_154207e20993.slice.
Feb 12 21:59:22.543609 systemd[1]: Created slice kubepods-burstable-pod3bb2e76d_8816_40d1_b1ca_24ccc5cd4390.slice.
Feb 12 21:59:22.676562 kubelet[2625]: I0212 21:59:22.676511    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wstw\" (UniqueName: \"kubernetes.io/projected/3bb2e76d-8816-40d1-b1ca-24ccc5cd4390-kube-api-access-6wstw\") pod \"coredns-5d78c9869d-vjt4x\" (UID: \"3bb2e76d-8816-40d1-b1ca-24ccc5cd4390\") " pod="kube-system/coredns-5d78c9869d-vjt4x"
Feb 12 21:59:22.676748 kubelet[2625]: I0212 21:59:22.676583    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/856beb00-7dad-4702-b441-154207e20993-config-volume\") pod \"coredns-5d78c9869d-hxzmb\" (UID: \"856beb00-7dad-4702-b441-154207e20993\") " pod="kube-system/coredns-5d78c9869d-hxzmb"
Feb 12 21:59:22.676748 kubelet[2625]: I0212 21:59:22.676624    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bb2e76d-8816-40d1-b1ca-24ccc5cd4390-config-volume\") pod \"coredns-5d78c9869d-vjt4x\" (UID: \"3bb2e76d-8816-40d1-b1ca-24ccc5cd4390\") " pod="kube-system/coredns-5d78c9869d-vjt4x"
Feb 12 21:59:22.676748 kubelet[2625]: I0212 21:59:22.676657    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2z96\" (UniqueName: \"kubernetes.io/projected/856beb00-7dad-4702-b441-154207e20993-kube-api-access-g2z96\") pod \"coredns-5d78c9869d-hxzmb\" (UID: \"856beb00-7dad-4702-b441-154207e20993\") " pod="kube-system/coredns-5d78c9869d-hxzmb"
Feb 12 21:59:22.841940 env[1643]: time="2024-02-12T21:59:22.841427069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-hxzmb,Uid:856beb00-7dad-4702-b441-154207e20993,Namespace:kube-system,Attempt:0,}"
Feb 12 21:59:22.851802 env[1643]: time="2024-02-12T21:59:22.851754219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-vjt4x,Uid:3bb2e76d-8816-40d1-b1ca-24ccc5cd4390,Namespace:kube-system,Attempt:0,}"
Feb 12 21:59:23.059421 kubelet[2625]: I0212 21:59:23.059386    2625 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dfhx4" podStartSLOduration=6.501406797 podCreationTimestamp="2024-02-12 21:59:03 +0000 UTC" firstStartedPulling="2024-02-12 21:59:04.597779447 +0000 UTC m=+15.064471627" lastFinishedPulling="2024-02-12 21:59:18.154867913 +0000 UTC m=+28.621560101" observedRunningTime="2024-02-12 21:59:23.057997467 +0000 UTC m=+33.524689664" watchObservedRunningTime="2024-02-12 21:59:23.058495271 +0000 UTC m=+33.525187469"
Feb 12 21:59:24.734683 systemd-networkd[1461]: cilium_host: Link UP
Feb 12 21:59:24.735806 systemd-networkd[1461]: cilium_net: Link UP
Feb 12 21:59:24.735811 systemd-networkd[1461]: cilium_net: Gained carrier
Feb 12 21:59:24.737527 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 12 21:59:24.737616 (udev-worker)[3561]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:59:24.737680 systemd-networkd[1461]: cilium_host: Gained carrier
Feb 12 21:59:24.740487 (udev-worker)[3600]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:59:24.815677 systemd-networkd[1461]: cilium_host: Gained IPv6LL
Feb 12 21:59:24.933135 systemd-networkd[1461]: cilium_net: Gained IPv6LL
Feb 12 21:59:25.008099 (udev-worker)[3619]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:59:25.016797 systemd-networkd[1461]: cilium_vxlan: Link UP
Feb 12 21:59:25.016806 systemd-networkd[1461]: cilium_vxlan: Gained carrier
Feb 12 21:59:25.650516 kernel: NET: Registered PF_ALG protocol family
Feb 12 21:59:26.754100 systemd-networkd[1461]: lxc_health: Link UP
Feb 12 21:59:26.763786 systemd-networkd[1461]: lxc_health: Gained carrier
Feb 12 21:59:26.764517 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 21:59:27.014883 systemd-networkd[1461]: cilium_vxlan: Gained IPv6LL
Feb 12 21:59:27.423279 systemd-networkd[1461]: lxcd2137618bd5d: Link UP
Feb 12 21:59:27.437414 systemd-networkd[1461]: lxc207e9850a7b4: Link UP
Feb 12 21:59:27.437592 kernel: eth0: renamed from tmpc1443
Feb 12 21:59:27.445730 kernel: eth0: renamed from tmp60f23
Feb 12 21:59:27.459425 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd2137618bd5d: link becomes ready
Feb 12 21:59:27.460999 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc207e9850a7b4: link becomes ready
Feb 12 21:59:27.457344 systemd-networkd[1461]: lxcd2137618bd5d: Gained carrier
Feb 12 21:59:27.459496 systemd-networkd[1461]: lxc207e9850a7b4: Gained carrier
Feb 12 21:59:27.461750 (udev-worker)[3620]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:59:27.910095 systemd-networkd[1461]: lxc_health: Gained IPv6LL
Feb 12 21:59:28.623578 systemd-networkd[1461]: lxcd2137618bd5d: Gained IPv6LL
Feb 12 21:59:29.317727 systemd-networkd[1461]: lxc207e9850a7b4: Gained IPv6LL
Feb 12 21:59:31.814839 kubelet[2625]: I0212 21:59:31.814802    2625 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 12 21:59:33.828051 env[1643]: time="2024-02-12T21:59:33.827952396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:59:33.828988 env[1643]: time="2024-02-12T21:59:33.828938008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:59:33.829920 env[1643]: time="2024-02-12T21:59:33.829160311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:59:33.830259 env[1643]: time="2024-02-12T21:59:33.829620889Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/60f2322b5c50e589742b2dbc1f90c705b93aa3c7de4bf97561a26ff72c3f5b03 pid=3979 runtime=io.containerd.runc.v2
Feb 12 21:59:33.856280 env[1643]: time="2024-02-12T21:59:33.855965945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:59:33.856280 env[1643]: time="2024-02-12T21:59:33.856020564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:59:33.856280 env[1643]: time="2024-02-12T21:59:33.856037209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:59:33.857284 env[1643]: time="2024-02-12T21:59:33.856509754Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1443d3ab2bed6304f380edf315223a98c05dc0deabf59c415d86b9262947b60 pid=3985 runtime=io.containerd.runc.v2
Feb 12 21:59:33.904320 systemd[1]: Started cri-containerd-c1443d3ab2bed6304f380edf315223a98c05dc0deabf59c415d86b9262947b60.scope.
Feb 12 21:59:33.924246 systemd[1]: run-containerd-runc-k8s.io-c1443d3ab2bed6304f380edf315223a98c05dc0deabf59c415d86b9262947b60-runc.lU3au7.mount: Deactivated successfully.
Feb 12 21:59:33.929276 systemd[1]: Started cri-containerd-60f2322b5c50e589742b2dbc1f90c705b93aa3c7de4bf97561a26ff72c3f5b03.scope.
Feb 12 21:59:34.062771 env[1643]: time="2024-02-12T21:59:34.062721052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-hxzmb,Uid:856beb00-7dad-4702-b441-154207e20993,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1443d3ab2bed6304f380edf315223a98c05dc0deabf59c415d86b9262947b60\""
Feb 12 21:59:34.070516 env[1643]: time="2024-02-12T21:59:34.070468670Z" level=info msg="CreateContainer within sandbox \"c1443d3ab2bed6304f380edf315223a98c05dc0deabf59c415d86b9262947b60\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 21:59:34.078899 env[1643]: time="2024-02-12T21:59:34.078083021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-vjt4x,Uid:3bb2e76d-8816-40d1-b1ca-24ccc5cd4390,Namespace:kube-system,Attempt:0,} returns sandbox id \"60f2322b5c50e589742b2dbc1f90c705b93aa3c7de4bf97561a26ff72c3f5b03\""
Feb 12 21:59:34.085271 env[1643]: time="2024-02-12T21:59:34.085007588Z" level=info msg="CreateContainer within sandbox \"60f2322b5c50e589742b2dbc1f90c705b93aa3c7de4bf97561a26ff72c3f5b03\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 21:59:34.113242 env[1643]: time="2024-02-12T21:59:34.113181712Z" level=info msg="CreateContainer within sandbox \"60f2322b5c50e589742b2dbc1f90c705b93aa3c7de4bf97561a26ff72c3f5b03\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e9c4fe11eb57a79a14c994e18d0788ba0bf562970355680556f2a1f09097462\""
Feb 12 21:59:34.117617 env[1643]: time="2024-02-12T21:59:34.116390364Z" level=info msg="StartContainer for \"6e9c4fe11eb57a79a14c994e18d0788ba0bf562970355680556f2a1f09097462\""
Feb 12 21:59:34.123864 env[1643]: time="2024-02-12T21:59:34.123770758Z" level=info msg="CreateContainer within sandbox \"c1443d3ab2bed6304f380edf315223a98c05dc0deabf59c415d86b9262947b60\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fcf98c8c55b6afce51fbe01b70c73bd892b7ac5dafb4448ea489af829238615f\""
Feb 12 21:59:34.127172 env[1643]: time="2024-02-12T21:59:34.127127831Z" level=info msg="StartContainer for \"fcf98c8c55b6afce51fbe01b70c73bd892b7ac5dafb4448ea489af829238615f\""
Feb 12 21:59:34.156052 systemd[1]: Started cri-containerd-6e9c4fe11eb57a79a14c994e18d0788ba0bf562970355680556f2a1f09097462.scope.
Feb 12 21:59:34.198768 systemd[1]: Started cri-containerd-fcf98c8c55b6afce51fbe01b70c73bd892b7ac5dafb4448ea489af829238615f.scope.
Feb 12 21:59:34.235266 env[1643]: time="2024-02-12T21:59:34.235216024Z" level=info msg="StartContainer for \"6e9c4fe11eb57a79a14c994e18d0788ba0bf562970355680556f2a1f09097462\" returns successfully"
Feb 12 21:59:34.274197 env[1643]: time="2024-02-12T21:59:34.274138675Z" level=info msg="StartContainer for \"fcf98c8c55b6afce51fbe01b70c73bd892b7ac5dafb4448ea489af829238615f\" returns successfully"
Feb 12 21:59:35.127085 kubelet[2625]: I0212 21:59:35.127052    2625 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-vjt4x" podStartSLOduration=32.127006587 podCreationTimestamp="2024-02-12 21:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:59:35.126724429 +0000 UTC m=+45.593416626" watchObservedRunningTime="2024-02-12 21:59:35.127006587 +0000 UTC m=+45.593698785"
Feb 12 21:59:35.146402 kubelet[2625]: I0212 21:59:35.146369    2625 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-hxzmb" podStartSLOduration=32.146323905 podCreationTimestamp="2024-02-12 21:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:59:35.146090769 +0000 UTC m=+45.612782965" watchObservedRunningTime="2024-02-12 21:59:35.146323905 +0000 UTC m=+45.613016101"
Feb 12 21:59:38.824518 systemd[1]: Started sshd@5-172.31.21.40:22-139.178.89.65:47010.service.
Feb 12 21:59:39.036317 sshd[4133]: Accepted publickey for core from 139.178.89.65 port 47010 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:59:39.045724 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:59:39.069833 systemd[1]: Started session-6.scope.
Feb 12 21:59:39.071102 systemd-logind[1633]: New session 6 of user core.
Feb 12 21:59:39.349332 sshd[4133]: pam_unix(sshd:session): session closed for user core
Feb 12 21:59:39.355328 systemd[1]: sshd@5-172.31.21.40:22-139.178.89.65:47010.service: Deactivated successfully.
Feb 12 21:59:39.356621 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 21:59:39.356659 systemd-logind[1633]: Session 6 logged out. Waiting for processes to exit.
Feb 12 21:59:39.358795 systemd-logind[1633]: Removed session 6.
Feb 12 21:59:44.378733 systemd[1]: Started sshd@6-172.31.21.40:22-139.178.89.65:47026.service.
Feb 12 21:59:44.551109 sshd[4146]: Accepted publickey for core from 139.178.89.65 port 47026 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:59:44.552829 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:59:44.563004 systemd-logind[1633]: New session 7 of user core.
Feb 12 21:59:44.563998 systemd[1]: Started session-7.scope.
Feb 12 21:59:44.783338 sshd[4146]: pam_unix(sshd:session): session closed for user core
Feb 12 21:59:44.788269 systemd[1]: sshd@6-172.31.21.40:22-139.178.89.65:47026.service: Deactivated successfully.
Feb 12 21:59:44.789430 systemd[1]: session-7.scope: Deactivated successfully.
Feb 12 21:59:44.790435 systemd-logind[1633]: Session 7 logged out. Waiting for processes to exit.
Feb 12 21:59:44.791563 systemd-logind[1633]: Removed session 7.
Feb 12 21:59:49.809205 systemd[1]: Started sshd@7-172.31.21.40:22-139.178.89.65:39696.service.
Feb 12 21:59:49.975530 sshd[4159]: Accepted publickey for core from 139.178.89.65 port 39696 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:59:49.977969 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:59:49.984882 systemd[1]: Started session-8.scope.
Feb 12 21:59:49.985666 systemd-logind[1633]: New session 8 of user core.
Feb 12 21:59:50.229279 sshd[4159]: pam_unix(sshd:session): session closed for user core
Feb 12 21:59:50.233720 systemd[1]: sshd@7-172.31.21.40:22-139.178.89.65:39696.service: Deactivated successfully.
Feb 12 21:59:50.234642 systemd[1]: session-8.scope: Deactivated successfully.
Feb 12 21:59:50.235720 systemd-logind[1633]: Session 8 logged out. Waiting for processes to exit.
Feb 12 21:59:50.236730 systemd-logind[1633]: Removed session 8.
Feb 12 21:59:55.258507 systemd[1]: Started sshd@8-172.31.21.40:22-139.178.89.65:39710.service.
Feb 12 21:59:55.447960 sshd[4173]: Accepted publickey for core from 139.178.89.65 port 39710 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:59:55.448693 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:59:55.454623 systemd-logind[1633]: New session 9 of user core.
Feb 12 21:59:55.455811 systemd[1]: Started session-9.scope.
Feb 12 21:59:55.662291 sshd[4173]: pam_unix(sshd:session): session closed for user core
Feb 12 21:59:55.666374 systemd[1]: sshd@8-172.31.21.40:22-139.178.89.65:39710.service: Deactivated successfully.
Feb 12 21:59:55.667331 systemd[1]: session-9.scope: Deactivated successfully.
Feb 12 21:59:55.668187 systemd-logind[1633]: Session 9 logged out. Waiting for processes to exit.
Feb 12 21:59:55.669160 systemd-logind[1633]: Removed session 9.
Feb 12 22:00:00.692276 systemd[1]: Started sshd@9-172.31.21.40:22-139.178.89.65:46046.service.
Feb 12 22:00:00.886363 sshd[4186]: Accepted publickey for core from 139.178.89.65 port 46046 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:00.892155 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:00.906547 systemd-logind[1633]: New session 10 of user core.
Feb 12 22:00:00.909705 systemd[1]: Started session-10.scope.
Feb 12 22:00:01.168579 sshd[4186]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:01.181766 systemd[1]: sshd@9-172.31.21.40:22-139.178.89.65:46046.service: Deactivated successfully.
Feb 12 22:00:01.184632 systemd-logind[1633]: Session 10 logged out. Waiting for processes to exit.
Feb 12 22:00:01.184799 systemd[1]: session-10.scope: Deactivated successfully.
Feb 12 22:00:01.187613 systemd-logind[1633]: Removed session 10.
Feb 12 22:00:01.202053 systemd[1]: Started sshd@10-172.31.21.40:22-139.178.89.65:46056.service.
Feb 12 22:00:01.378296 sshd[4199]: Accepted publickey for core from 139.178.89.65 port 46056 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:01.380971 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:01.402272 systemd-logind[1633]: New session 11 of user core.
Feb 12 22:00:01.409342 systemd[1]: Started session-11.scope.
Feb 12 22:00:03.483923 sshd[4199]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:03.498377 systemd[1]: sshd@10-172.31.21.40:22-139.178.89.65:46056.service: Deactivated successfully.
Feb 12 22:00:03.501333 systemd-logind[1633]: Session 11 logged out. Waiting for processes to exit.
Feb 12 22:00:03.502550 systemd[1]: session-11.scope: Deactivated successfully.
Feb 12 22:00:03.513852 systemd-logind[1633]: Removed session 11.
Feb 12 22:00:03.515276 systemd[1]: Started sshd@11-172.31.21.40:22-139.178.89.65:46068.service.
Feb 12 22:00:03.709930 sshd[4209]: Accepted publickey for core from 139.178.89.65 port 46068 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:03.712382 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:03.734616 systemd-logind[1633]: New session 12 of user core.
Feb 12 22:00:03.735249 systemd[1]: Started session-12.scope.
Feb 12 22:00:04.193342 sshd[4209]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:04.207246 systemd[1]: sshd@11-172.31.21.40:22-139.178.89.65:46068.service: Deactivated successfully.
Feb 12 22:00:04.210268 systemd[1]: session-12.scope: Deactivated successfully.
Feb 12 22:00:04.214560 systemd-logind[1633]: Session 12 logged out. Waiting for processes to exit.
Feb 12 22:00:04.219018 systemd-logind[1633]: Removed session 12.
Feb 12 22:00:09.224275 systemd[1]: Started sshd@12-172.31.21.40:22-139.178.89.65:44958.service.
Feb 12 22:00:09.388970 sshd[4226]: Accepted publickey for core from 139.178.89.65 port 44958 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:09.390895 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:09.398883 systemd[1]: Started session-13.scope.
Feb 12 22:00:09.399628 systemd-logind[1633]: New session 13 of user core.
Feb 12 22:00:09.615059 sshd[4226]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:09.618946 systemd[1]: sshd@12-172.31.21.40:22-139.178.89.65:44958.service: Deactivated successfully.
Feb 12 22:00:09.620023 systemd[1]: session-13.scope: Deactivated successfully.
Feb 12 22:00:09.620973 systemd-logind[1633]: Session 13 logged out. Waiting for processes to exit.
Feb 12 22:00:09.622082 systemd-logind[1633]: Removed session 13.
Feb 12 22:00:14.642273 systemd[1]: Started sshd@13-172.31.21.40:22-139.178.89.65:44974.service.
Feb 12 22:00:14.807885 sshd[4238]: Accepted publickey for core from 139.178.89.65 port 44974 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:14.813791 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:14.835766 systemd-logind[1633]: New session 14 of user core.
Feb 12 22:00:14.836119 systemd[1]: Started session-14.scope.
Feb 12 22:00:15.097565 sshd[4238]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:15.100964 systemd[1]: sshd@13-172.31.21.40:22-139.178.89.65:44974.service: Deactivated successfully.
Feb 12 22:00:15.102057 systemd[1]: session-14.scope: Deactivated successfully.
Feb 12 22:00:15.102919 systemd-logind[1633]: Session 14 logged out. Waiting for processes to exit.
Feb 12 22:00:15.104107 systemd-logind[1633]: Removed session 14.
Feb 12 22:00:20.125411 systemd[1]: Started sshd@14-172.31.21.40:22-139.178.89.65:36744.service.
Feb 12 22:00:20.315761 sshd[4250]: Accepted publickey for core from 139.178.89.65 port 36744 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:20.317696 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:20.350644 systemd[1]: Started session-15.scope.
Feb 12 22:00:20.359325 systemd-logind[1633]: New session 15 of user core.
Feb 12 22:00:20.638409 sshd[4250]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:20.643935 systemd-logind[1633]: Session 15 logged out. Waiting for processes to exit.
Feb 12 22:00:20.644379 systemd[1]: sshd@14-172.31.21.40:22-139.178.89.65:36744.service: Deactivated successfully.
Feb 12 22:00:20.645428 systemd[1]: session-15.scope: Deactivated successfully.
Feb 12 22:00:20.646496 systemd-logind[1633]: Removed session 15.
Feb 12 22:00:25.674581 systemd[1]: Started sshd@15-172.31.21.40:22-139.178.89.65:36758.service.
Feb 12 22:00:25.840013 sshd[4262]: Accepted publickey for core from 139.178.89.65 port 36758 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:25.842277 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:25.849196 systemd-logind[1633]: New session 16 of user core.
Feb 12 22:00:25.849361 systemd[1]: Started session-16.scope.
Feb 12 22:00:26.083343 sshd[4262]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:26.088271 systemd[1]: sshd@15-172.31.21.40:22-139.178.89.65:36758.service: Deactivated successfully.
Feb 12 22:00:26.090358 systemd[1]: session-16.scope: Deactivated successfully.
Feb 12 22:00:26.091254 systemd-logind[1633]: Session 16 logged out. Waiting for processes to exit.
Feb 12 22:00:26.092989 systemd-logind[1633]: Removed session 16.
Feb 12 22:00:26.109625 systemd[1]: Started sshd@16-172.31.21.40:22-139.178.89.65:36762.service.
Feb 12 22:00:26.283888 sshd[4274]: Accepted publickey for core from 139.178.89.65 port 36762 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:26.288978 sshd[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:26.314387 systemd-logind[1633]: New session 17 of user core.
Feb 12 22:00:26.315046 systemd[1]: Started session-17.scope.
Feb 12 22:00:27.093265 sshd[4274]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:27.097462 systemd[1]: sshd@16-172.31.21.40:22-139.178.89.65:36762.service: Deactivated successfully.
Feb 12 22:00:27.100649 systemd[1]: session-17.scope: Deactivated successfully.
Feb 12 22:00:27.102210 systemd-logind[1633]: Session 17 logged out. Waiting for processes to exit.
Feb 12 22:00:27.106135 systemd-logind[1633]: Removed session 17.
Feb 12 22:00:27.125411 systemd[1]: Started sshd@17-172.31.21.40:22-139.178.89.65:36772.service.
Feb 12 22:00:27.336313 sshd[4283]: Accepted publickey for core from 139.178.89.65 port 36772 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:27.338191 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:27.345060 systemd[1]: Started session-18.scope.
Feb 12 22:00:27.346514 systemd-logind[1633]: New session 18 of user core.
Feb 12 22:00:28.736539 sshd[4283]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:28.742485 systemd-logind[1633]: Session 18 logged out. Waiting for processes to exit.
Feb 12 22:00:28.742814 systemd[1]: sshd@17-172.31.21.40:22-139.178.89.65:36772.service: Deactivated successfully.
Feb 12 22:00:28.744055 systemd[1]: session-18.scope: Deactivated successfully.
Feb 12 22:00:28.746760 systemd-logind[1633]: Removed session 18.
Feb 12 22:00:28.769879 systemd[1]: Started sshd@18-172.31.21.40:22-139.178.89.65:36668.service.
Feb 12 22:00:28.947050 sshd[4300]: Accepted publickey for core from 139.178.89.65 port 36668 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:28.948611 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:28.954532 systemd-logind[1633]: New session 19 of user core.
Feb 12 22:00:28.955008 systemd[1]: Started session-19.scope.
Feb 12 22:00:29.695469 sshd[4300]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:29.702966 systemd-logind[1633]: Session 19 logged out. Waiting for processes to exit.
Feb 12 22:00:29.703248 systemd[1]: sshd@18-172.31.21.40:22-139.178.89.65:36668.service: Deactivated successfully.
Feb 12 22:00:29.704467 systemd[1]: session-19.scope: Deactivated successfully.
Feb 12 22:00:29.705607 systemd-logind[1633]: Removed session 19.
Feb 12 22:00:29.721497 systemd[1]: Started sshd@19-172.31.21.40:22-139.178.89.65:36684.service.
Feb 12 22:00:29.889638 sshd[4311]: Accepted publickey for core from 139.178.89.65 port 36684 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:29.891655 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:29.897779 systemd-logind[1633]: New session 20 of user core.
Feb 12 22:00:29.897899 systemd[1]: Started session-20.scope.
Feb 12 22:00:30.147122 sshd[4311]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:30.154590 systemd[1]: sshd@19-172.31.21.40:22-139.178.89.65:36684.service: Deactivated successfully.
Feb 12 22:00:30.156800 systemd[1]: session-20.scope: Deactivated successfully.
Feb 12 22:00:30.158758 systemd-logind[1633]: Session 20 logged out. Waiting for processes to exit.
Feb 12 22:00:30.161378 systemd-logind[1633]: Removed session 20.
Feb 12 22:00:35.175975 systemd[1]: Started sshd@20-172.31.21.40:22-139.178.89.65:36698.service.
Feb 12 22:00:35.348919 sshd[4324]: Accepted publickey for core from 139.178.89.65 port 36698 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:35.350741 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:35.357929 systemd[1]: Started session-21.scope.
Feb 12 22:00:35.358545 systemd-logind[1633]: New session 21 of user core.
Feb 12 22:00:35.559828 sshd[4324]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:35.566702 systemd[1]: sshd@20-172.31.21.40:22-139.178.89.65:36698.service: Deactivated successfully.
Feb 12 22:00:35.570332 systemd[1]: session-21.scope: Deactivated successfully.
Feb 12 22:00:35.573409 systemd-logind[1633]: Session 21 logged out. Waiting for processes to exit.
Feb 12 22:00:35.575112 systemd-logind[1633]: Removed session 21.
Feb 12 22:00:40.591749 systemd[1]: Started sshd@21-172.31.21.40:22-139.178.89.65:45510.service.
Feb 12 22:00:40.757568 sshd[4342]: Accepted publickey for core from 139.178.89.65 port 45510 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:40.759709 sshd[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:40.765723 systemd[1]: Started session-22.scope.
Feb 12 22:00:40.766466 systemd-logind[1633]: New session 22 of user core.
Feb 12 22:00:40.982510 sshd[4342]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:40.987789 systemd[1]: sshd@21-172.31.21.40:22-139.178.89.65:45510.service: Deactivated successfully.
Feb 12 22:00:40.988987 systemd[1]: session-22.scope: Deactivated successfully.
Feb 12 22:00:40.989913 systemd-logind[1633]: Session 22 logged out. Waiting for processes to exit.
Feb 12 22:00:40.991103 systemd-logind[1633]: Removed session 22.
Feb 12 22:00:46.021376 systemd[1]: Started sshd@22-172.31.21.40:22-139.178.89.65:45514.service.
Feb 12 22:00:46.233089 sshd[4355]: Accepted publickey for core from 139.178.89.65 port 45514 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:46.246597 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:46.272226 systemd-logind[1633]: New session 23 of user core.
Feb 12 22:00:46.272995 systemd[1]: Started session-23.scope.
Feb 12 22:00:46.530001 sshd[4355]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:46.535816 systemd[1]: sshd@22-172.31.21.40:22-139.178.89.65:45514.service: Deactivated successfully.
Feb 12 22:00:46.537244 systemd[1]: session-23.scope: Deactivated successfully.
Feb 12 22:00:46.538032 systemd-logind[1633]: Session 23 logged out. Waiting for processes to exit.
Feb 12 22:00:46.539072 systemd-logind[1633]: Removed session 23.
Feb 12 22:00:51.560010 systemd[1]: Started sshd@23-172.31.21.40:22-139.178.89.65:35190.service.
Feb 12 22:00:51.745156 sshd[4369]: Accepted publickey for core from 139.178.89.65 port 35190 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:51.749238 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:51.756040 systemd[1]: Started session-24.scope.
Feb 12 22:00:51.756783 systemd-logind[1633]: New session 24 of user core.
Feb 12 22:00:52.028631 sshd[4369]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:52.034076 systemd[1]: sshd@23-172.31.21.40:22-139.178.89.65:35190.service: Deactivated successfully.
Feb 12 22:00:52.035366 systemd[1]: session-24.scope: Deactivated successfully.
Feb 12 22:00:52.036393 systemd-logind[1633]: Session 24 logged out. Waiting for processes to exit.
Feb 12 22:00:52.037704 systemd-logind[1633]: Removed session 24.
Feb 12 22:00:52.056638 systemd[1]: Started sshd@24-172.31.21.40:22-139.178.89.65:35202.service.
Feb 12 22:00:52.257534 sshd[4382]: Accepted publickey for core from 139.178.89.65 port 35202 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:52.259125 sshd[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:52.270552 systemd-logind[1633]: New session 25 of user core.
Feb 12 22:00:52.271976 systemd[1]: Started session-25.scope.
Feb 12 22:00:54.796582 env[1643]: time="2024-02-12T22:00:54.794027941Z" level=info msg="StopContainer for \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\" with timeout 30 (s)"
Feb 12 22:00:54.802539 env[1643]: time="2024-02-12T22:00:54.800971887Z" level=info msg="Stop container \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\" with signal terminated"
Feb 12 22:00:54.829816 env[1643]: time="2024-02-12T22:00:54.829749682Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 22:00:54.836737 systemd[1]: cri-containerd-ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93.scope: Deactivated successfully.
Feb 12 22:00:54.843650 env[1643]: time="2024-02-12T22:00:54.843604881Z" level=info msg="StopContainer for \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\" with timeout 1 (s)"
Feb 12 22:00:54.844246 env[1643]: time="2024-02-12T22:00:54.844215696Z" level=info msg="Stop container \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\" with signal terminated"
Feb 12 22:00:54.856905 systemd-networkd[1461]: lxc_health: Link DOWN
Feb 12 22:00:54.856916 systemd-networkd[1461]: lxc_health: Lost carrier
Feb 12 22:00:54.930354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93-rootfs.mount: Deactivated successfully.
Feb 12 22:00:54.994145 env[1643]: time="2024-02-12T22:00:54.994089428Z" level=info msg="shim disconnected" id=ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93
Feb 12 22:00:54.994145 env[1643]: time="2024-02-12T22:00:54.994142331Z" level=warning msg="cleaning up after shim disconnected" id=ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93 namespace=k8s.io
Feb 12 22:00:54.994145 env[1643]: time="2024-02-12T22:00:54.994155142Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:54.997888 systemd[1]: cri-containerd-671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611.scope: Deactivated successfully.
Feb 12 22:00:54.998215 systemd[1]: cri-containerd-671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611.scope: Consumed 8.850s CPU time.
Feb 12 22:00:55.016548 kubelet[2625]: E0212 22:00:55.016471    2625 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 22:00:55.018986 env[1643]: time="2024-02-12T22:00:55.018912746Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4436 runtime=io.containerd.runc.v2\n"
Feb 12 22:00:55.023312 env[1643]: time="2024-02-12T22:00:55.023262659Z" level=info msg="StopContainer for \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\" returns successfully"
Feb 12 22:00:55.025740 env[1643]: time="2024-02-12T22:00:55.025701791Z" level=info msg="StopPodSandbox for \"488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15\""
Feb 12 22:00:55.026009 env[1643]: time="2024-02-12T22:00:55.025982222Z" level=info msg="Container to stop \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:55.028886 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15-shm.mount: Deactivated successfully.
Feb 12 22:00:55.046480 systemd[1]: cri-containerd-488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15.scope: Deactivated successfully.
Feb 12 22:00:55.063846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611-rootfs.mount: Deactivated successfully.
Feb 12 22:00:55.078907 env[1643]: time="2024-02-12T22:00:55.078850811Z" level=info msg="shim disconnected" id=671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611
Feb 12 22:00:55.079324 env[1643]: time="2024-02-12T22:00:55.079290043Z" level=warning msg="cleaning up after shim disconnected" id=671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611 namespace=k8s.io
Feb 12 22:00:55.079748 env[1643]: time="2024-02-12T22:00:55.079724724Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:55.095809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15-rootfs.mount: Deactivated successfully.
Feb 12 22:00:55.102087 env[1643]: time="2024-02-12T22:00:55.102032757Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4481 runtime=io.containerd.runc.v2\n"
Feb 12 22:00:55.108536 env[1643]: time="2024-02-12T22:00:55.108480906Z" level=info msg="StopContainer for \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\" returns successfully"
Feb 12 22:00:55.109712 env[1643]: time="2024-02-12T22:00:55.109668282Z" level=info msg="shim disconnected" id=488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15
Feb 12 22:00:55.110106 env[1643]: time="2024-02-12T22:00:55.110080957Z" level=warning msg="cleaning up after shim disconnected" id=488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15 namespace=k8s.io
Feb 12 22:00:55.110624 env[1643]: time="2024-02-12T22:00:55.110211512Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:55.110923 env[1643]: time="2024-02-12T22:00:55.109745900Z" level=info msg="StopPodSandbox for \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\""
Feb 12 22:00:55.111102 env[1643]: time="2024-02-12T22:00:55.111074545Z" level=info msg="Container to stop \"db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:55.111784 env[1643]: time="2024-02-12T22:00:55.111191362Z" level=info msg="Container to stop \"7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:55.111784 env[1643]: time="2024-02-12T22:00:55.111214257Z" level=info msg="Container to stop \"85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:55.111784 env[1643]: time="2024-02-12T22:00:55.111232950Z" level=info msg="Container to stop \"a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:55.111784 env[1643]: time="2024-02-12T22:00:55.111252491Z" level=info msg="Container to stop \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:55.124058 systemd[1]: cri-containerd-52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2.scope: Deactivated successfully.
Feb 12 22:00:55.125555 env[1643]: time="2024-02-12T22:00:55.125509984Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4493 runtime=io.containerd.runc.v2\n"
Feb 12 22:00:55.126855 env[1643]: time="2024-02-12T22:00:55.126811655Z" level=info msg="TearDown network for sandbox \"488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15\" successfully"
Feb 12 22:00:55.126983 env[1643]: time="2024-02-12T22:00:55.126851279Z" level=info msg="StopPodSandbox for \"488f2c6d116bc57d21ed50c97519b32c94fba54d76e313d46753185bd8d54d15\" returns successfully"
Feb 12 22:00:55.178609 env[1643]: time="2024-02-12T22:00:55.178557849Z" level=info msg="shim disconnected" id=52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2
Feb 12 22:00:55.179612 env[1643]: time="2024-02-12T22:00:55.179565936Z" level=warning msg="cleaning up after shim disconnected" id=52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2 namespace=k8s.io
Feb 12 22:00:55.179612 env[1643]: time="2024-02-12T22:00:55.179601982Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:55.190219 env[1643]: time="2024-02-12T22:00:55.190166447Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4527 runtime=io.containerd.runc.v2\n"
Feb 12 22:00:55.191257 env[1643]: time="2024-02-12T22:00:55.191217752Z" level=info msg="TearDown network for sandbox \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\" successfully"
Feb 12 22:00:55.191369 env[1643]: time="2024-02-12T22:00:55.191253217Z" level=info msg="StopPodSandbox for \"52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2\" returns successfully"
Feb 12 22:00:55.237728 kubelet[2625]: I0212 22:00:55.237689    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xc2s\" (UniqueName: \"kubernetes.io/projected/dc17f506-5ffa-4abb-8f2f-4e393304d070-kube-api-access-2xc2s\") pod \"dc17f506-5ffa-4abb-8f2f-4e393304d070\" (UID: \"dc17f506-5ffa-4abb-8f2f-4e393304d070\") "
Feb 12 22:00:55.238096 kubelet[2625]: I0212 22:00:55.237750    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc17f506-5ffa-4abb-8f2f-4e393304d070-cilium-config-path\") pod \"dc17f506-5ffa-4abb-8f2f-4e393304d070\" (UID: \"dc17f506-5ffa-4abb-8f2f-4e393304d070\") "
Feb 12 22:00:55.240957 kubelet[2625]: W0212 22:00:55.240892    2625 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/dc17f506-5ffa-4abb-8f2f-4e393304d070/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 22:00:55.246322 kubelet[2625]: I0212 22:00:55.245184    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc17f506-5ffa-4abb-8f2f-4e393304d070-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc17f506-5ffa-4abb-8f2f-4e393304d070" (UID: "dc17f506-5ffa-4abb-8f2f-4e393304d070"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 22:00:55.246965 kubelet[2625]: I0212 22:00:55.246929    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc17f506-5ffa-4abb-8f2f-4e393304d070-kube-api-access-2xc2s" (OuterVolumeSpecName: "kube-api-access-2xc2s") pod "dc17f506-5ffa-4abb-8f2f-4e393304d070" (UID: "dc17f506-5ffa-4abb-8f2f-4e393304d070"). InnerVolumeSpecName "kube-api-access-2xc2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 22:00:55.347846 kubelet[2625]: I0212 22:00:55.343758    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-etc-cni-netd\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.348124 kubelet[2625]: I0212 22:00:55.345743    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:55.348852 kubelet[2625]: I0212 22:00:55.348826    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-xtables-lock\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.349029 kubelet[2625]: I0212 22:00:55.349017    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cilium-run\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.349176 kubelet[2625]: I0212 22:00:55.349164    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgt5k\" (UniqueName: \"kubernetes.io/projected/74dc5bf2-079f-4981-bf0f-c4dab63734f1-kube-api-access-xgt5k\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.350316 kubelet[2625]: I0212 22:00:55.349287    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-host-proc-sys-kernel\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.351068 kubelet[2625]: I0212 22:00:55.351052    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74dc5bf2-079f-4981-bf0f-c4dab63734f1-hubble-tls\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.354228 kubelet[2625]: I0212 22:00:55.354205    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cni-path\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.354561 kubelet[2625]: I0212 22:00:55.354543    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-hostproc\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.354968 kubelet[2625]: I0212 22:00:55.349231    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:55.354968 kubelet[2625]: I0212 22:00:55.349046    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:55.354968 kubelet[2625]: I0212 22:00:55.354610    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:55.355286 kubelet[2625]: I0212 22:00:55.354986    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cni-path" (OuterVolumeSpecName: "cni-path") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:55.355286 kubelet[2625]: I0212 22:00:55.355044    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cilium-config-path\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.355286 kubelet[2625]: I0212 22:00:55.355076    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-lib-modules\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.355286 kubelet[2625]: I0212 22:00:55.355218    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cilium-cgroup\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.355286 kubelet[2625]: I0212 22:00:55.355250    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-host-proc-sys-net\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.355286 kubelet[2625]: I0212 22:00:55.355279    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-bpf-maps\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.356899 kubelet[2625]: I0212 22:00:55.355313    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74dc5bf2-079f-4981-bf0f-c4dab63734f1-clustermesh-secrets\") pod \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\" (UID: \"74dc5bf2-079f-4981-bf0f-c4dab63734f1\") "
Feb 12 22:00:55.356899 kubelet[2625]: I0212 22:00:55.355368    2625 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cni-path\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.356899 kubelet[2625]: I0212 22:00:55.355386    2625 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-etc-cni-netd\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.356899 kubelet[2625]: I0212 22:00:55.355404    2625 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-host-proc-sys-kernel\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.356899 kubelet[2625]: I0212 22:00:55.355420    2625 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2xc2s\" (UniqueName: \"kubernetes.io/projected/dc17f506-5ffa-4abb-8f2f-4e393304d070-kube-api-access-2xc2s\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.356899 kubelet[2625]: I0212 22:00:55.355717    2625 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc17f506-5ffa-4abb-8f2f-4e393304d070-cilium-config-path\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.357605 kubelet[2625]: I0212 22:00:55.357581    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-hostproc" (OuterVolumeSpecName: "hostproc") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:55.361065 kubelet[2625]: I0212 22:00:55.361025    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:55.371047 kubelet[2625]: W0212 22:00:55.370982    2625 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/74dc5bf2-079f-4981-bf0f-c4dab63734f1/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 22:00:55.371288 kubelet[2625]: I0212 22:00:55.371219    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:55.371467 kubelet[2625]: I0212 22:00:55.371314    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:55.371467 kubelet[2625]: I0212 22:00:55.371340    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:55.376088 kubelet[2625]: I0212 22:00:55.375952    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 22:00:55.386976 kubelet[2625]: I0212 22:00:55.386935    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74dc5bf2-079f-4981-bf0f-c4dab63734f1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 22:00:55.388039 kubelet[2625]: I0212 22:00:55.388004    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74dc5bf2-079f-4981-bf0f-c4dab63734f1-kube-api-access-xgt5k" (OuterVolumeSpecName: "kube-api-access-xgt5k") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "kube-api-access-xgt5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 22:00:55.389305 kubelet[2625]: I0212 22:00:55.388976    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74dc5bf2-079f-4981-bf0f-c4dab63734f1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "74dc5bf2-079f-4981-bf0f-c4dab63734f1" (UID: "74dc5bf2-079f-4981-bf0f-c4dab63734f1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 22:00:55.390978 kubelet[2625]: I0212 22:00:55.390669    2625 scope.go:115] "RemoveContainer" containerID="671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611"
Feb 12 22:00:55.403347 systemd[1]: Removed slice kubepods-burstable-pod74dc5bf2_079f_4981_bf0f_c4dab63734f1.slice.
Feb 12 22:00:55.403521 systemd[1]: kubepods-burstable-pod74dc5bf2_079f_4981_bf0f_c4dab63734f1.slice: Consumed 8.987s CPU time.
Feb 12 22:00:55.410191 env[1643]: time="2024-02-12T22:00:55.409365729Z" level=info msg="RemoveContainer for \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\""
Feb 12 22:00:55.424379 env[1643]: time="2024-02-12T22:00:55.424179966Z" level=info msg="RemoveContainer for \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\" returns successfully"
Feb 12 22:00:55.424843 kubelet[2625]: I0212 22:00:55.424819    2625 scope.go:115] "RemoveContainer" containerID="a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a"
Feb 12 22:00:55.432237 systemd[1]: Removed slice kubepods-besteffort-poddc17f506_5ffa_4abb_8f2f_4e393304d070.slice.
Feb 12 22:00:55.439377 env[1643]: time="2024-02-12T22:00:55.438890597Z" level=info msg="RemoveContainer for \"a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a\""
Feb 12 22:00:55.454606 env[1643]: time="2024-02-12T22:00:55.454486031Z" level=info msg="RemoveContainer for \"a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a\" returns successfully"
Feb 12 22:00:55.455372 kubelet[2625]: I0212 22:00:55.455327    2625 scope.go:115] "RemoveContainer" containerID="db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08"
Feb 12 22:00:55.456652 kubelet[2625]: I0212 22:00:55.456632    2625 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xgt5k\" (UniqueName: \"kubernetes.io/projected/74dc5bf2-079f-4981-bf0f-c4dab63734f1-kube-api-access-xgt5k\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.456892 kubelet[2625]: I0212 22:00:55.456869    2625 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-xtables-lock\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.457006 kubelet[2625]: I0212 22:00:55.456996    2625 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cilium-run\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.457266 kubelet[2625]: I0212 22:00:55.457253    2625 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74dc5bf2-079f-4981-bf0f-c4dab63734f1-hubble-tls\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.457361 kubelet[2625]: I0212 22:00:55.457351    2625 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-hostproc\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.457466 kubelet[2625]: I0212 22:00:55.457455    2625 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cilium-config-path\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.457564 kubelet[2625]: I0212 22:00:55.457556    2625 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-host-proc-sys-net\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.457636 kubelet[2625]: I0212 22:00:55.457628    2625 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-bpf-maps\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.458010 kubelet[2625]: I0212 22:00:55.457994    2625 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74dc5bf2-079f-4981-bf0f-c4dab63734f1-clustermesh-secrets\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.458111 kubelet[2625]: I0212 22:00:55.458102    2625 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-lib-modules\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.458190 kubelet[2625]: I0212 22:00:55.458182    2625 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74dc5bf2-079f-4981-bf0f-c4dab63734f1-cilium-cgroup\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:55.471284 env[1643]: time="2024-02-12T22:00:55.471231660Z" level=info msg="RemoveContainer for \"db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08\""
Feb 12 22:00:55.479406 env[1643]: time="2024-02-12T22:00:55.479351322Z" level=info msg="RemoveContainer for \"db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08\" returns successfully"
Feb 12 22:00:55.480249 kubelet[2625]: I0212 22:00:55.480182    2625 scope.go:115] "RemoveContainer" containerID="85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b"
Feb 12 22:00:55.484045 env[1643]: time="2024-02-12T22:00:55.483883834Z" level=info msg="RemoveContainer for \"85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b\""
Feb 12 22:00:55.491525 env[1643]: time="2024-02-12T22:00:55.491422495Z" level=info msg="RemoveContainer for \"85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b\" returns successfully"
Feb 12 22:00:55.491954 kubelet[2625]: I0212 22:00:55.491827    2625 scope.go:115] "RemoveContainer" containerID="7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767"
Feb 12 22:00:55.494026 env[1643]: time="2024-02-12T22:00:55.493739710Z" level=info msg="RemoveContainer for \"7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767\""
Feb 12 22:00:55.498635 env[1643]: time="2024-02-12T22:00:55.498594180Z" level=info msg="RemoveContainer for \"7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767\" returns successfully"
Feb 12 22:00:55.498877 kubelet[2625]: I0212 22:00:55.498852    2625 scope.go:115] "RemoveContainer" containerID="671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611"
Feb 12 22:00:55.499218 env[1643]: time="2024-02-12T22:00:55.499130579Z" level=error msg="ContainerStatus for \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\": not found"
Feb 12 22:00:55.501432 kubelet[2625]: E0212 22:00:55.501399    2625 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\": not found" containerID="671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611"
Feb 12 22:00:55.501959 kubelet[2625]: I0212 22:00:55.501930    2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611} err="failed to get container status \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\": rpc error: code = NotFound desc = an error occurred when try to find container \"671132a7d2f3347f89c96c9502bd1b7f4e190304b1e71749ef9c77c992348611\": not found"
Feb 12 22:00:55.502094 kubelet[2625]: I0212 22:00:55.501972    2625 scope.go:115] "RemoveContainer" containerID="a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a"
Feb 12 22:00:55.502396 env[1643]: time="2024-02-12T22:00:55.502328151Z" level=error msg="ContainerStatus for \"a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a\": not found"
Feb 12 22:00:55.502879 kubelet[2625]: E0212 22:00:55.502854    2625 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a\": not found" containerID="a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a"
Feb 12 22:00:55.502957 kubelet[2625]: I0212 22:00:55.502894    2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a} err="failed to get container status \"a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2fade8f92aadbb94fffdf1376c30cd041152c23950cb25cd890b6d4d28f109a\": not found"
Feb 12 22:00:55.502957 kubelet[2625]: I0212 22:00:55.502924    2625 scope.go:115] "RemoveContainer" containerID="db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08"
Feb 12 22:00:55.503273 env[1643]: time="2024-02-12T22:00:55.503203763Z" level=error msg="ContainerStatus for \"db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08\": not found"
Feb 12 22:00:55.503397 kubelet[2625]: E0212 22:00:55.503378    2625 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08\": not found" containerID="db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08"
Feb 12 22:00:55.503556 kubelet[2625]: I0212 22:00:55.503411    2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08} err="failed to get container status \"db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08\": rpc error: code = NotFound desc = an error occurred when try to find container \"db64c5989a3c515f75530694db18ee9f94783b582b0eebabf801fadf0e686d08\": not found"
Feb 12 22:00:55.503556 kubelet[2625]: I0212 22:00:55.503424    2625 scope.go:115] "RemoveContainer" containerID="85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b"
Feb 12 22:00:55.503785 env[1643]: time="2024-02-12T22:00:55.503728301Z" level=error msg="ContainerStatus for \"85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b\": not found"
Feb 12 22:00:55.503985 kubelet[2625]: E0212 22:00:55.503965    2625 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b\": not found" containerID="85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b"
Feb 12 22:00:55.504069 kubelet[2625]: I0212 22:00:55.503998    2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b} err="failed to get container status \"85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b\": rpc error: code = NotFound desc = an error occurred when try to find container \"85f5bbc444f0b445df46f55add22ce962c20ef7b9f8d13f84606b1f5250c435b\": not found"
Feb 12 22:00:55.504069 kubelet[2625]: I0212 22:00:55.504011    2625 scope.go:115] "RemoveContainer" containerID="7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767"
Feb 12 22:00:55.504273 env[1643]: time="2024-02-12T22:00:55.504210557Z" level=error msg="ContainerStatus for \"7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767\": not found"
Feb 12 22:00:55.504390 kubelet[2625]: E0212 22:00:55.504370    2625 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767\": not found" containerID="7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767"
Feb 12 22:00:55.504481 kubelet[2625]: I0212 22:00:55.504402    2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767} err="failed to get container status \"7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f37a2b879ad88657ea74527171729852f999f0d40ada1089d5ec5a7f9175767\": not found"
Feb 12 22:00:55.504481 kubelet[2625]: I0212 22:00:55.504416    2625 scope.go:115] "RemoveContainer" containerID="ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93"
Feb 12 22:00:55.506625 env[1643]: time="2024-02-12T22:00:55.506534763Z" level=info msg="RemoveContainer for \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\""
Feb 12 22:00:55.513295 env[1643]: time="2024-02-12T22:00:55.513248382Z" level=info msg="RemoveContainer for \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\" returns successfully"
Feb 12 22:00:55.513702 kubelet[2625]: I0212 22:00:55.513579    2625 scope.go:115] "RemoveContainer" containerID="ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93"
Feb 12 22:00:55.514313 env[1643]: time="2024-02-12T22:00:55.514213485Z" level=error msg="ContainerStatus for \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\": not found"
Feb 12 22:00:55.514698 kubelet[2625]: E0212 22:00:55.514674    2625 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\": not found" containerID="ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93"
Feb 12 22:00:55.514850 kubelet[2625]: I0212 22:00:55.514722    2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93} err="failed to get container status \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef266b67dadffc8513c9b4e608bddff68f41786b46f36c641037a5fe64aebe93\": not found"
Feb 12 22:00:55.792557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2-rootfs.mount: Deactivated successfully.
Feb 12 22:00:55.793423 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52a2266ddf4035f43c25ddd26a2d9836a2dc8f466601a1ba338d21f0a214d2d2-shm.mount: Deactivated successfully.
Feb 12 22:00:55.794243 systemd[1]: var-lib-kubelet-pods-74dc5bf2\x2d079f\x2d4981\x2dbf0f\x2dc4dab63734f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxgt5k.mount: Deactivated successfully.
Feb 12 22:00:55.794357 systemd[1]: var-lib-kubelet-pods-dc17f506\x2d5ffa\x2d4abb\x2d8f2f\x2d4e393304d070-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2xc2s.mount: Deactivated successfully.
Feb 12 22:00:55.794571 systemd[1]: var-lib-kubelet-pods-74dc5bf2\x2d079f\x2d4981\x2dbf0f\x2dc4dab63734f1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 22:00:55.794667 systemd[1]: var-lib-kubelet-pods-74dc5bf2\x2d079f\x2d4981\x2dbf0f\x2dc4dab63734f1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 22:00:55.825575 kubelet[2625]: I0212 22:00:55.825543    2625 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=74dc5bf2-079f-4981-bf0f-c4dab63734f1 path="/var/lib/kubelet/pods/74dc5bf2-079f-4981-bf0f-c4dab63734f1/volumes"
Feb 12 22:00:55.827964 kubelet[2625]: I0212 22:00:55.827932    2625 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=dc17f506-5ffa-4abb-8f2f-4e393304d070 path="/var/lib/kubelet/pods/dc17f506-5ffa-4abb-8f2f-4e393304d070/volumes"
Feb 12 22:00:56.718694 sshd[4382]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:56.731846 systemd[1]: sshd@24-172.31.21.40:22-139.178.89.65:35202.service: Deactivated successfully.
Feb 12 22:00:56.733267 systemd[1]: session-25.scope: Deactivated successfully.
Feb 12 22:00:56.733495 systemd[1]: session-25.scope: Consumed 1.073s CPU time.
Feb 12 22:00:56.737284 systemd-logind[1633]: Session 25 logged out. Waiting for processes to exit.
Feb 12 22:00:56.739932 systemd-logind[1633]: Removed session 25.
Feb 12 22:00:56.756284 systemd[1]: Started sshd@25-172.31.21.40:22-139.178.89.65:35212.service.
Feb 12 22:00:56.949318 sshd[4546]: Accepted publickey for core from 139.178.89.65 port 35212 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:56.951340 sshd[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:56.963096 systemd[1]: Started session-26.scope.
Feb 12 22:00:56.964124 systemd-logind[1633]: New session 26 of user core.
Feb 12 22:00:57.785960 kubelet[2625]: I0212 22:00:57.785922    2625 topology_manager.go:212] "Topology Admit Handler"
Feb 12 22:00:57.788343 sshd[4546]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:57.791887 kubelet[2625]: E0212 22:00:57.791861    2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74dc5bf2-079f-4981-bf0f-c4dab63734f1" containerName="apply-sysctl-overwrites"
Feb 12 22:00:57.792028 kubelet[2625]: E0212 22:00:57.792019    2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74dc5bf2-079f-4981-bf0f-c4dab63734f1" containerName="mount-bpf-fs"
Feb 12 22:00:57.792095 kubelet[2625]: E0212 22:00:57.792087    2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74dc5bf2-079f-4981-bf0f-c4dab63734f1" containerName="clean-cilium-state"
Feb 12 22:00:57.792314 kubelet[2625]: E0212 22:00:57.792301    2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74dc5bf2-079f-4981-bf0f-c4dab63734f1" containerName="cilium-agent"
Feb 12 22:00:57.792414 kubelet[2625]: E0212 22:00:57.792404    2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc17f506-5ffa-4abb-8f2f-4e393304d070" containerName="cilium-operator"
Feb 12 22:00:57.792501 kubelet[2625]: E0212 22:00:57.792492    2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74dc5bf2-079f-4981-bf0f-c4dab63734f1" containerName="mount-cgroup"
Feb 12 22:00:57.792654 kubelet[2625]: I0212 22:00:57.792643    2625 memory_manager.go:346] "RemoveStaleState removing state" podUID="dc17f506-5ffa-4abb-8f2f-4e393304d070" containerName="cilium-operator"
Feb 12 22:00:57.792730 kubelet[2625]: I0212 22:00:57.792721    2625 memory_manager.go:346] "RemoveStaleState removing state" podUID="74dc5bf2-079f-4981-bf0f-c4dab63734f1" containerName="cilium-agent"
Feb 12 22:00:57.794690 systemd[1]: sshd@25-172.31.21.40:22-139.178.89.65:35212.service: Deactivated successfully.
Feb 12 22:00:57.795859 systemd[1]: session-26.scope: Deactivated successfully.
Feb 12 22:00:57.800028 systemd-logind[1633]: Session 26 logged out. Waiting for processes to exit.
Feb 12 22:00:57.804911 systemd-logind[1633]: Removed session 26.
Feb 12 22:00:57.818837 systemd[1]: Created slice kubepods-burstable-pod83207465_dd7c_467e_9817_b8cfb12f138b.slice.
Feb 12 22:00:57.823867 systemd[1]: Started sshd@26-172.31.21.40:22-139.178.89.65:35218.service.
Feb 12 22:00:57.890891 kubelet[2625]: I0212 22:00:57.890855    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-bpf-maps\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.891226 kubelet[2625]: I0212 22:00:57.891214    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-host-proc-sys-net\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.891350 kubelet[2625]: I0212 22:00:57.891341    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-cni-path\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.891538 kubelet[2625]: I0212 22:00:57.891521    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83207465-dd7c-467e-9817-b8cfb12f138b-clustermesh-secrets\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.891717 kubelet[2625]: I0212 22:00:57.891708    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-hostproc\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.891848 kubelet[2625]: I0212 22:00:57.891840    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83207465-dd7c-467e-9817-b8cfb12f138b-hubble-tls\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.891981 kubelet[2625]: I0212 22:00:57.891973    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-run\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.892123 kubelet[2625]: I0212 22:00:57.892102    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-ipsec-secrets\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.892254 kubelet[2625]: I0212 22:00:57.892238    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsqsw\" (UniqueName: \"kubernetes.io/projected/83207465-dd7c-467e-9817-b8cfb12f138b-kube-api-access-vsqsw\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.892449 kubelet[2625]: I0212 22:00:57.892423    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-config-path\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.892587 kubelet[2625]: I0212 22:00:57.892572    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-host-proc-sys-kernel\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.892715 kubelet[2625]: I0212 22:00:57.892707    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-cgroup\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.892843 kubelet[2625]: I0212 22:00:57.892835    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-etc-cni-netd\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.893014 kubelet[2625]: I0212 22:00:57.892961    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-lib-modules\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:57.893145 kubelet[2625]: I0212 22:00:57.893136    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-xtables-lock\") pod \"cilium-stbfq\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") " pod="kube-system/cilium-stbfq"
Feb 12 22:00:58.017972 sshd[4556]: Accepted publickey for core from 139.178.89.65 port 35218 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:58.020055 sshd[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:58.038124 systemd-logind[1633]: New session 27 of user core.
Feb 12 22:00:58.039255 systemd[1]: Started session-27.scope.
Feb 12 22:00:58.145259 env[1643]: time="2024-02-12T22:00:58.144893332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-stbfq,Uid:83207465-dd7c-467e-9817-b8cfb12f138b,Namespace:kube-system,Attempt:0,}"
Feb 12 22:00:58.167622 env[1643]: time="2024-02-12T22:00:58.167378276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 22:00:58.167622 env[1643]: time="2024-02-12T22:00:58.167420806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 22:00:58.167622 env[1643]: time="2024-02-12T22:00:58.167555264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 22:00:58.168148 env[1643]: time="2024-02-12T22:00:58.168076403Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76 pid=4574 runtime=io.containerd.runc.v2
Feb 12 22:00:58.182205 systemd[1]: Started cri-containerd-fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76.scope.
Feb 12 22:00:58.223187 env[1643]: time="2024-02-12T22:00:58.223140829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-stbfq,Uid:83207465-dd7c-467e-9817-b8cfb12f138b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76\""
Feb 12 22:00:58.232313 env[1643]: time="2024-02-12T22:00:58.230784959Z" level=info msg="CreateContainer within sandbox \"fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 22:00:58.251725 env[1643]: time="2024-02-12T22:00:58.251671971Z" level=info msg="CreateContainer within sandbox \"fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53\""
Feb 12 22:00:58.255967 env[1643]: time="2024-02-12T22:00:58.255833848Z" level=info msg="StartContainer for \"80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53\""
Feb 12 22:00:58.307729 systemd[1]: Started cri-containerd-80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53.scope.
Feb 12 22:00:58.333203 systemd[1]: cri-containerd-80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53.scope: Deactivated successfully.
Feb 12 22:00:58.366188 env[1643]: time="2024-02-12T22:00:58.366120869Z" level=info msg="shim disconnected" id=80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53
Feb 12 22:00:58.366188 env[1643]: time="2024-02-12T22:00:58.366184242Z" level=warning msg="cleaning up after shim disconnected" id=80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53 namespace=k8s.io
Feb 12 22:00:58.366188 env[1643]: time="2024-02-12T22:00:58.366196971Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:58.384750 env[1643]: time="2024-02-12T22:00:58.384550444Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4633 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T22:00:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 12 22:00:58.385551 env[1643]: time="2024-02-12T22:00:58.385376472Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed"
Feb 12 22:00:58.388567 env[1643]: time="2024-02-12T22:00:58.388438118Z" level=error msg="Failed to pipe stdout of container \"80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53\"" error="reading from a closed fifo"
Feb 12 22:00:58.388725 env[1643]: time="2024-02-12T22:00:58.388533088Z" level=error msg="Failed to pipe stderr of container \"80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53\"" error="reading from a closed fifo"
Feb 12 22:00:58.391312 env[1643]: time="2024-02-12T22:00:58.391225465Z" level=error msg="StartContainer for \"80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 12 22:00:58.391983 kubelet[2625]: E0212 22:00:58.391833    2625 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53"
Feb 12 22:00:58.395464 kubelet[2625]: E0212 22:00:58.395176    2625 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 12 22:00:58.395464 kubelet[2625]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 12 22:00:58.395464 kubelet[2625]: rm /hostbin/cilium-mount
Feb 12 22:00:58.395682 kubelet[2625]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vsqsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-stbfq_kube-system(83207465-dd7c-467e-9817-b8cfb12f138b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 12 22:00:58.395682 kubelet[2625]: E0212 22:00:58.395277    2625 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-stbfq" podUID=83207465-dd7c-467e-9817-b8cfb12f138b
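The two kubelet records above isolate the actual failure: runc dies during container init while writing the SELinux keyring label to /proc/self/attr/keycreate (the label requested by the spec's SELinuxOptions: Type:spc_t, Level:s0); the kernel rejects the write with EINVAL ("invalid argument"), so the mount-cgroup task is never created and the pod is requeued. For readability (this is a reconstruction, not a log line), the entrypoint of the failing init container, reassembled from the Command and Env fields of the spec dump above:

  # mount-cgroup entrypoint per the kubelet spec dump; BIN_PATH=/opt/cni/bin and
  # CGROUP_ROOT=/run/cilium/cgroupv2 per the Env entries in the same record
  sh -ec '
    cp /usr/bin/cilium-mount /hostbin/cilium-mount
    # jump into the host cgroup and mount namespaces via PID 1 of the host
    nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt \
      "${BIN_PATH}/cilium-mount" $CGROUP_ROOT
    rm /hostbin/cilium-mount
  '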
Feb 12 22:00:58.425752 env[1643]: time="2024-02-12T22:00:58.425703213Z" level=info msg="CreateContainer within sandbox \"fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Feb 12 22:00:58.430516 sshd[4556]: pam_unix(sshd:session): session closed for user core
Feb 12 22:00:58.435174 systemd[1]: sshd@26-172.31.21.40:22-139.178.89.65:35218.service: Deactivated successfully.
Feb 12 22:00:58.436520 systemd[1]: session-27.scope: Deactivated successfully.
Feb 12 22:00:58.442518 systemd-logind[1633]: Session 27 logged out. Waiting for processes to exit.
Feb 12 22:00:58.445222 systemd-logind[1633]: Removed session 27.
Feb 12 22:00:58.459712 systemd[1]: Started sshd@27-172.31.21.40:22-139.178.89.65:38352.service.
Feb 12 22:00:58.473887 env[1643]: time="2024-02-12T22:00:58.473834095Z" level=info msg="CreateContainer within sandbox \"fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680\""
Feb 12 22:00:58.478191 env[1643]: time="2024-02-12T22:00:58.478147045Z" level=info msg="StartContainer for \"1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680\""
Feb 12 22:00:58.521374 systemd[1]: Started cri-containerd-1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680.scope.
Feb 12 22:00:58.546178 systemd[1]: cri-containerd-1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680.scope: Deactivated successfully.
Feb 12 22:00:58.569736 env[1643]: time="2024-02-12T22:00:58.569603627Z" level=info msg="shim disconnected" id=1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680
Feb 12 22:00:58.570393 env[1643]: time="2024-02-12T22:00:58.570300644Z" level=warning msg="cleaning up after shim disconnected" id=1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680 namespace=k8s.io
Feb 12 22:00:58.570606 env[1643]: time="2024-02-12T22:00:58.570586500Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:58.596185 env[1643]: time="2024-02-12T22:00:58.596119948Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4676 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T22:00:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2024-02-12T22:00:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 12 22:00:58.596541 env[1643]: time="2024-02-12T22:00:58.596433823Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed"
Feb 12 22:00:58.597665 env[1643]: time="2024-02-12T22:00:58.597527804Z" level=error msg="Failed to pipe stdout of container \"1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680\"" error="reading from a closed fifo"
Feb 12 22:00:58.597807 env[1643]: time="2024-02-12T22:00:58.597699672Z" level=error msg="Failed to pipe stderr of container \"1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680\"" error="reading from a closed fifo"
Feb 12 22:00:58.601033 env[1643]: time="2024-02-12T22:00:58.600968291Z" level=error msg="StartContainer for \"1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 12 22:00:58.601292 kubelet[2625]: E0212 22:00:58.601216    2625 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680"
Feb 12 22:00:58.605164 kubelet[2625]: E0212 22:00:58.605073    2625 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 12 22:00:58.605164 kubelet[2625]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 12 22:00:58.605164 kubelet[2625]: rm /hostbin/cilium-mount
Feb 12 22:00:58.605164 kubelet[2625]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vsqsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-stbfq_kube-system(83207465-dd7c-467e-9817-b8cfb12f138b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 12 22:00:58.605526 kubelet[2625]: E0212 22:00:58.605176    2625 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-stbfq" podUID=83207465-dd7c-467e-9817-b8cfb12f138b
Feb 12 22:00:58.647605 sshd[4649]: Accepted publickey for core from 139.178.89.65 port 38352 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 22:00:58.650154 sshd[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 22:00:58.658462 systemd[1]: Started session-28.scope.
Feb 12 22:00:58.658985 systemd-logind[1633]: New session 28 of user core.
Feb 12 22:00:59.430894 kubelet[2625]: I0212 22:00:59.430568    2625 scope.go:115] "RemoveContainer" containerID="80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53"
Feb 12 22:00:59.431555 env[1643]: time="2024-02-12T22:00:59.431520091Z" level=info msg="StopPodSandbox for \"fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76\""
Feb 12 22:00:59.431950 env[1643]: time="2024-02-12T22:00:59.431924662Z" level=info msg="Container to stop \"80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:59.432161 env[1643]: time="2024-02-12T22:00:59.432022768Z" level=info msg="Container to stop \"1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 22:00:59.439482 env[1643]: time="2024-02-12T22:00:59.434343979Z" level=info msg="RemoveContainer for \"80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53\""
Feb 12 22:00:59.441625 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76-shm.mount: Deactivated successfully.
Feb 12 22:00:59.444591 env[1643]: time="2024-02-12T22:00:59.444540773Z" level=info msg="RemoveContainer for \"80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53\" returns successfully"
Feb 12 22:00:59.464799 systemd[1]: cri-containerd-fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76.scope: Deactivated successfully.
Feb 12 22:00:59.503988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76-rootfs.mount: Deactivated successfully.
Feb 12 22:00:59.522412 env[1643]: time="2024-02-12T22:00:59.522180364Z" level=info msg="shim disconnected" id=fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76
Feb 12 22:00:59.522684 env[1643]: time="2024-02-12T22:00:59.522414450Z" level=warning msg="cleaning up after shim disconnected" id=fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76 namespace=k8s.io
Feb 12 22:00:59.522684 env[1643]: time="2024-02-12T22:00:59.522434324Z" level=info msg="cleaning up dead shim"
Feb 12 22:00:59.533012 env[1643]: time="2024-02-12T22:00:59.532953454Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4717 runtime=io.containerd.runc.v2\n"
Feb 12 22:00:59.533539 env[1643]: time="2024-02-12T22:00:59.533423193Z" level=info msg="TearDown network for sandbox \"fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76\" successfully"
Feb 12 22:00:59.533647 env[1643]: time="2024-02-12T22:00:59.533534521Z" level=info msg="StopPodSandbox for \"fba1e90f462a4334a98bcc39b87ab9a3b48b568dc5445919f78dbb54d536bc76\" returns successfully"
Feb 12 22:00:59.608118 kubelet[2625]: I0212 22:00:59.608074    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-bpf-maps\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608118 kubelet[2625]: I0212 22:00:59.608119    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-etc-cni-netd\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608368 kubelet[2625]: I0212 22:00:59.608148    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-lib-modules\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608368 kubelet[2625]: I0212 22:00:59.608180    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83207465-dd7c-467e-9817-b8cfb12f138b-clustermesh-secrets\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608368 kubelet[2625]: I0212 22:00:59.608208    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsqsw\" (UniqueName: \"kubernetes.io/projected/83207465-dd7c-467e-9817-b8cfb12f138b-kube-api-access-vsqsw\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608368 kubelet[2625]: I0212 22:00:59.608232    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-cgroup\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608368 kubelet[2625]: I0212 22:00:59.608259    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-xtables-lock\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608368 kubelet[2625]: I0212 22:00:59.608283    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-run\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608368 kubelet[2625]: I0212 22:00:59.608314    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83207465-dd7c-467e-9817-b8cfb12f138b-hubble-tls\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608368 kubelet[2625]: I0212 22:00:59.608340    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-hostproc\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608368 kubelet[2625]: I0212 22:00:59.608369    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-host-proc-sys-kernel\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608892 kubelet[2625]: I0212 22:00:59.608397    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-cni-path\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608892 kubelet[2625]: I0212 22:00:59.608428    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-ipsec-secrets\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608892 kubelet[2625]: I0212 22:00:59.608482    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-host-proc-sys-net\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608892 kubelet[2625]: I0212 22:00:59.608582    2625 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-config-path\") pod \"83207465-dd7c-467e-9817-b8cfb12f138b\" (UID: \"83207465-dd7c-467e-9817-b8cfb12f138b\") "
Feb 12 22:00:59.608892 kubelet[2625]: W0212 22:00:59.608838    2625 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/83207465-dd7c-467e-9817-b8cfb12f138b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 22:00:59.612097 kubelet[2625]: I0212 22:00:59.609436    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:59.612097 kubelet[2625]: I0212 22:00:59.610236    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-hostproc" (OuterVolumeSpecName: "hostproc") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:59.612097 kubelet[2625]: I0212 22:00:59.610287    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:59.612097 kubelet[2625]: I0212 22:00:59.610314    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-cni-path" (OuterVolumeSpecName: "cni-path") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:59.612097 kubelet[2625]: I0212 22:00:59.611121    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:59.612097 kubelet[2625]: I0212 22:00:59.611161    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:59.612097 kubelet[2625]: I0212 22:00:59.611188    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:59.612097 kubelet[2625]: I0212 22:00:59.611631    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:59.613836 kubelet[2625]: I0212 22:00:59.613744    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:59.613836 kubelet[2625]: I0212 22:00:59.613811    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 22:00:59.616631 kubelet[2625]: I0212 22:00:59.616597    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83207465-dd7c-467e-9817-b8cfb12f138b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 22:00:59.617929 systemd[1]: var-lib-kubelet-pods-83207465\x2ddd7c\x2d467e\x2d9817\x2db8cfb12f138b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 22:00:59.623053 kubelet[2625]: I0212 22:00:59.623012    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 22:00:59.632412 systemd[1]: var-lib-kubelet-pods-83207465\x2ddd7c\x2d467e\x2d9817\x2db8cfb12f138b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 12 22:00:59.632566 systemd[1]: var-lib-kubelet-pods-83207465\x2ddd7c\x2d467e\x2d9817\x2db8cfb12f138b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 22:00:59.635596 kubelet[2625]: I0212 22:00:59.634747    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 22:00:59.636955 kubelet[2625]: I0212 22:00:59.636920    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83207465-dd7c-467e-9817-b8cfb12f138b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 22:00:59.643734 kubelet[2625]: I0212 22:00:59.643691    2625 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83207465-dd7c-467e-9817-b8cfb12f138b-kube-api-access-vsqsw" (OuterVolumeSpecName: "kube-api-access-vsqsw") pod "83207465-dd7c-467e-9817-b8cfb12f138b" (UID: "83207465-dd7c-467e-9817-b8cfb12f138b"). InnerVolumeSpecName "kube-api-access-vsqsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 22:00:59.709376 kubelet[2625]: I0212 22:00:59.709210    2625 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-config-path\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709376 kubelet[2625]: I0212 22:00:59.709297    2625 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-bpf-maps\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709376 kubelet[2625]: I0212 22:00:59.709315    2625 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-etc-cni-netd\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709376 kubelet[2625]: I0212 22:00:59.709329    2625 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-lib-modules\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709376 kubelet[2625]: I0212 22:00:59.709344    2625 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83207465-dd7c-467e-9817-b8cfb12f138b-clustermesh-secrets\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709376 kubelet[2625]: I0212 22:00:59.709359    2625 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vsqsw\" (UniqueName: \"kubernetes.io/projected/83207465-dd7c-467e-9817-b8cfb12f138b-kube-api-access-vsqsw\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709376 kubelet[2625]: I0212 22:00:59.709373    2625 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-cgroup\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709921 kubelet[2625]: I0212 22:00:59.709394    2625 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-run\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709921 kubelet[2625]: I0212 22:00:59.709407    2625 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-xtables-lock\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709921 kubelet[2625]: I0212 22:00:59.709420    2625 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-hostproc\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709921 kubelet[2625]: I0212 22:00:59.709432    2625 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83207465-dd7c-467e-9817-b8cfb12f138b-hubble-tls\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709921 kubelet[2625]: I0212 22:00:59.709465    2625 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-cni-path\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709921 kubelet[2625]: I0212 22:00:59.709480    2625 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/83207465-dd7c-467e-9817-b8cfb12f138b-cilium-ipsec-secrets\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709921 kubelet[2625]: I0212 22:00:59.709496    2625 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-host-proc-sys-kernel\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.709921 kubelet[2625]: I0212 22:00:59.709509    2625 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83207465-dd7c-467e-9817-b8cfb12f138b-host-proc-sys-net\") on node \"ip-172-31-21-40\" DevicePath \"\""
Feb 12 22:00:59.848885 systemd[1]: Removed slice kubepods-burstable-pod83207465_dd7c_467e_9817_b8cfb12f138b.slice.
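Slice lifecycle bookends: kubelet created this transient per-pod slice at 22:00:57.818837 when it admitted cilium-stbfq, and removes it here now that the sandbox is stopped and every volume is detached; the pod's sandbox and container scopes above all ran inside it. On a live node, the remaining per-pod slices in the burstable QoS tier can be listed with a plain systemctl glob (illustrative command, not from the log):

  systemctl list-units --type=slice 'kubepods-burstable-pod*'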
Feb 12 22:01:00.008153 systemd[1]: var-lib-kubelet-pods-83207465\x2ddd7c\x2d467e\x2d9817\x2db8cfb12f138b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvsqsw.mount: Deactivated successfully.
Feb 12 22:01:00.017784 kubelet[2625]: E0212 22:01:00.017741    2625 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 22:01:00.436740 kubelet[2625]: I0212 22:01:00.436406    2625 scope.go:115] "RemoveContainer" containerID="1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680"
Feb 12 22:01:00.449787 env[1643]: time="2024-02-12T22:01:00.449723259Z" level=info msg="RemoveContainer for \"1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680\""
Feb 12 22:01:00.477741 env[1643]: time="2024-02-12T22:01:00.477690136Z" level=info msg="RemoveContainer for \"1e4fffee50b8ada3988872548d63a9ca965073788a6d3de5627d0b144768c680\" returns successfully"
Feb 12 22:01:00.506617 kubelet[2625]: I0212 22:01:00.506581    2625 topology_manager.go:212] "Topology Admit Handler"
Feb 12 22:01:00.507114 kubelet[2625]: E0212 22:01:00.507097    2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="83207465-dd7c-467e-9817-b8cfb12f138b" containerName="mount-cgroup"
Feb 12 22:01:00.507257 kubelet[2625]: I0212 22:01:00.507246    2625 memory_manager.go:346] "RemoveStaleState removing state" podUID="83207465-dd7c-467e-9817-b8cfb12f138b" containerName="mount-cgroup"
Feb 12 22:01:00.508946 kubelet[2625]: I0212 22:01:00.507416    2625 memory_manager.go:346] "RemoveStaleState removing state" podUID="83207465-dd7c-467e-9817-b8cfb12f138b" containerName="mount-cgroup"
Feb 12 22:01:00.508946 kubelet[2625]: E0212 22:01:00.507477    2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="83207465-dd7c-467e-9817-b8cfb12f138b" containerName="mount-cgroup"
Feb 12 22:01:00.515759 kubelet[2625]: I0212 22:01:00.514036    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-lib-modules\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.515759 kubelet[2625]: I0212 22:01:00.514521    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-xtables-lock\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.515759 kubelet[2625]: I0212 22:01:00.514939    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd8zk\" (UniqueName: \"kubernetes.io/projected/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-kube-api-access-nd8zk\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.515759 kubelet[2625]: I0212 22:01:00.514986    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-cilium-run\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.519832 kubelet[2625]: I0212 22:01:00.515991    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-cni-path\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.519832 kubelet[2625]: I0212 22:01:00.516072    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-cilium-config-path\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.519832 kubelet[2625]: I0212 22:01:00.516147    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-cilium-ipsec-secrets\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.519832 kubelet[2625]: I0212 22:01:00.516207    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-bpf-maps\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.519832 kubelet[2625]: I0212 22:01:00.516238    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-hubble-tls\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.519832 kubelet[2625]: I0212 22:01:00.516761    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-host-proc-sys-net\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.519832 kubelet[2625]: I0212 22:01:00.516803    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-cilium-cgroup\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.519832 kubelet[2625]: I0212 22:01:00.516845    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-etc-cni-netd\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.519832 kubelet[2625]: I0212 22:01:00.516877    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-hostproc\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.519832 kubelet[2625]: I0212 22:01:00.516922    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-clustermesh-secrets\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.519832 kubelet[2625]: I0212 22:01:00.516951    2625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb-host-proc-sys-kernel\") pod \"cilium-8rk4n\" (UID: \"31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb\") " pod="kube-system/cilium-8rk4n"
Feb 12 22:01:00.519204 systemd[1]: Created slice kubepods-burstable-pod31e3a01a_ebf4_41e5_afa5_ce0ffaebdffb.slice.
Feb 12 22:01:00.823308 env[1643]: time="2024-02-12T22:01:00.823261233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rk4n,Uid:31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb,Namespace:kube-system,Attempt:0,}"
Feb 12 22:01:00.858709 env[1643]: time="2024-02-12T22:01:00.858622023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 22:01:00.858709 env[1643]: time="2024-02-12T22:01:00.858663785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 22:01:00.858709 env[1643]: time="2024-02-12T22:01:00.858680189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 22:01:00.859246 env[1643]: time="2024-02-12T22:01:00.859118469Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac pid=4747 runtime=io.containerd.runc.v2
Feb 12 22:01:00.879421 systemd[1]: Started cri-containerd-3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac.scope.
Feb 12 22:01:00.945100 env[1643]: time="2024-02-12T22:01:00.945044272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rk4n,Uid:31e3a01a-ebf4-41e5-afa5-ce0ffaebdffb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac\""
Feb 12 22:01:00.951926 env[1643]: time="2024-02-12T22:01:00.951884991Z" level=info msg="CreateContainer within sandbox \"3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 22:01:00.974206 env[1643]: time="2024-02-12T22:01:00.974156889Z" level=info msg="CreateContainer within sandbox \"3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0f502647e7b394ab099de88862471542f49005baa578e34e12096ba23d851030\""
Feb 12 22:01:00.975026 env[1643]: time="2024-02-12T22:01:00.974994633Z" level=info msg="StartContainer for \"0f502647e7b394ab099de88862471542f49005baa578e34e12096ba23d851030\""
Feb 12 22:01:00.998549 systemd[1]: Started cri-containerd-0f502647e7b394ab099de88862471542f49005baa578e34e12096ba23d851030.scope.
Feb 12 22:01:01.055943 env[1643]: time="2024-02-12T22:01:01.055764627Z" level=info msg="StartContainer for \"0f502647e7b394ab099de88862471542f49005baa578e34e12096ba23d851030\" returns successfully"
Feb 12 22:01:01.091865 systemd[1]: cri-containerd-0f502647e7b394ab099de88862471542f49005baa578e34e12096ba23d851030.scope: Deactivated successfully.
Feb 12 22:01:01.124774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f502647e7b394ab099de88862471542f49005baa578e34e12096ba23d851030-rootfs.mount: Deactivated successfully.
Feb 12 22:01:01.163252 env[1643]: time="2024-02-12T22:01:01.163199725Z" level=info msg="shim disconnected" id=0f502647e7b394ab099de88862471542f49005baa578e34e12096ba23d851030
Feb 12 22:01:01.163642 env[1643]: time="2024-02-12T22:01:01.163618536Z" level=warning msg="cleaning up after shim disconnected" id=0f502647e7b394ab099de88862471542f49005baa578e34e12096ba23d851030 namespace=k8s.io
Feb 12 22:01:01.163766 env[1643]: time="2024-02-12T22:01:01.163750427Z" level=info msg="cleaning up dead shim"
Feb 12 22:01:01.199034 env[1643]: time="2024-02-12T22:01:01.198987133Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:01:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4833 runtime=io.containerd.runc.v2\n"
Feb 12 22:01:01.452729 env[1643]: time="2024-02-12T22:01:01.452506911Z" level=info msg="CreateContainer within sandbox \"3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 22:01:01.515968 kubelet[2625]: W0212 22:01:01.498438    2625 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83207465_dd7c_467e_9817_b8cfb12f138b.slice/cri-containerd-80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53.scope WatchSource:0}: container "80a5afcfdcdf8e8194c68766855b73c0406629816c3ede803aed875006b90f53" in namespace "k8s.io": not found
Feb 12 22:01:01.505839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3144497619.mount: Deactivated successfully.
Feb 12 22:01:01.575758 env[1643]: time="2024-02-12T22:01:01.575677488Z" level=info msg="CreateContainer within sandbox \"3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bc6316deaf05bb8936d6f20fb730a70e60b1486d67c1c5fe5aefc00867a14912\""
Feb 12 22:01:01.595963 env[1643]: time="2024-02-12T22:01:01.595910206Z" level=info msg="StartContainer for \"bc6316deaf05bb8936d6f20fb730a70e60b1486d67c1c5fe5aefc00867a14912\""
Feb 12 22:01:01.743045 systemd[1]: Started cri-containerd-bc6316deaf05bb8936d6f20fb730a70e60b1486d67c1c5fe5aefc00867a14912.scope.
Feb 12 22:01:01.833981 kubelet[2625]: I0212 22:01:01.833936    2625 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=83207465-dd7c-467e-9817-b8cfb12f138b path="/var/lib/kubelet/pods/83207465-dd7c-467e-9817-b8cfb12f138b/volumes"
Feb 12 22:01:01.884071 env[1643]: time="2024-02-12T22:01:01.884011916Z" level=info msg="StartContainer for \"bc6316deaf05bb8936d6f20fb730a70e60b1486d67c1c5fe5aefc00867a14912\" returns successfully"
Feb 12 22:01:01.928379 systemd[1]: cri-containerd-bc6316deaf05bb8936d6f20fb730a70e60b1486d67c1c5fe5aefc00867a14912.scope: Deactivated successfully.
Feb 12 22:01:02.028380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc6316deaf05bb8936d6f20fb730a70e60b1486d67c1c5fe5aefc00867a14912-rootfs.mount: Deactivated successfully.
Feb 12 22:01:02.071608 env[1643]: time="2024-02-12T22:01:02.071556167Z" level=info msg="shim disconnected" id=bc6316deaf05bb8936d6f20fb730a70e60b1486d67c1c5fe5aefc00867a14912
Feb 12 22:01:02.072008 env[1643]: time="2024-02-12T22:01:02.071983278Z" level=warning msg="cleaning up after shim disconnected" id=bc6316deaf05bb8936d6f20fb730a70e60b1486d67c1c5fe5aefc00867a14912 namespace=k8s.io
Feb 12 22:01:02.072119 env[1643]: time="2024-02-12T22:01:02.072103816Z" level=info msg="cleaning up dead shim"
Feb 12 22:01:02.147046 env[1643]: time="2024-02-12T22:01:02.146976576Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:01:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4898 runtime=io.containerd.runc.v2\n"
Feb 12 22:01:02.410582 kubelet[2625]: I0212 22:01:02.410467    2625 setters.go:548] "Node became not ready" node="ip-172-31-21-40" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 22:01:02.410365875 +0000 UTC m=+132.877058056 LastTransitionTime:2024-02-12 22:01:02.410365875 +0000 UTC m=+132.877058056 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 12 22:01:02.463032 env[1643]: time="2024-02-12T22:01:02.462980490Z" level=info msg="CreateContainer within sandbox \"3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 22:01:02.526194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833646746.mount: Deactivated successfully.
Feb 12 22:01:02.552581 env[1643]: time="2024-02-12T22:01:02.552518743Z" level=info msg="CreateContainer within sandbox \"3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6f33968e6946991d6c6b516564fde429f97d006fcfc9ff3d7c960a02db9873a0\""
Feb 12 22:01:02.556178 env[1643]: time="2024-02-12T22:01:02.556134449Z" level=info msg="StartContainer for \"6f33968e6946991d6c6b516564fde429f97d006fcfc9ff3d7c960a02db9873a0\""
Feb 12 22:01:02.651037 systemd[1]: Started cri-containerd-6f33968e6946991d6c6b516564fde429f97d006fcfc9ff3d7c960a02db9873a0.scope.
Feb 12 22:01:02.811705 env[1643]: time="2024-02-12T22:01:02.811641129Z" level=info msg="StartContainer for \"6f33968e6946991d6c6b516564fde429f97d006fcfc9ff3d7c960a02db9873a0\" returns successfully"
Feb 12 22:01:02.832829 systemd[1]: cri-containerd-6f33968e6946991d6c6b516564fde429f97d006fcfc9ff3d7c960a02db9873a0.scope: Deactivated successfully.
Feb 12 22:01:02.921661 env[1643]: time="2024-02-12T22:01:02.915835183Z" level=info msg="shim disconnected" id=6f33968e6946991d6c6b516564fde429f97d006fcfc9ff3d7c960a02db9873a0
Feb 12 22:01:02.921661 env[1643]: time="2024-02-12T22:01:02.915891040Z" level=warning msg="cleaning up after shim disconnected" id=6f33968e6946991d6c6b516564fde429f97d006fcfc9ff3d7c960a02db9873a0 namespace=k8s.io
Feb 12 22:01:02.921661 env[1643]: time="2024-02-12T22:01:02.915903993Z" level=info msg="cleaning up dead shim"
Feb 12 22:01:02.937032 env[1643]: time="2024-02-12T22:01:02.936980373Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:01:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4956 runtime=io.containerd.runc.v2\n"
Feb 12 22:01:03.480311 env[1643]: time="2024-02-12T22:01:03.480249863Z" level=info msg="CreateContainer within sandbox \"3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 22:01:03.512356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3859595129.mount: Deactivated successfully.
Feb 12 22:01:03.532124 env[1643]: time="2024-02-12T22:01:03.531973207Z" level=info msg="CreateContainer within sandbox \"3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"580201476a5727698aff1badea608a0cec026958fd3be43273788b9e76524140\""
Feb 12 22:01:03.533228 env[1643]: time="2024-02-12T22:01:03.533149716Z" level=info msg="StartContainer for \"580201476a5727698aff1badea608a0cec026958fd3be43273788b9e76524140\""
Feb 12 22:01:03.567655 systemd[1]: Started cri-containerd-580201476a5727698aff1badea608a0cec026958fd3be43273788b9e76524140.scope.
Feb 12 22:01:03.664089 env[1643]: time="2024-02-12T22:01:03.664018800Z" level=info msg="StartContainer for \"580201476a5727698aff1badea608a0cec026958fd3be43273788b9e76524140\" returns successfully"
Feb 12 22:01:03.696385 systemd[1]: cri-containerd-580201476a5727698aff1badea608a0cec026958fd3be43273788b9e76524140.scope: Deactivated successfully.
Feb 12 22:01:03.758505 env[1643]: time="2024-02-12T22:01:03.756234329Z" level=info msg="shim disconnected" id=580201476a5727698aff1badea608a0cec026958fd3be43273788b9e76524140
Feb 12 22:01:03.758505 env[1643]: time="2024-02-12T22:01:03.756319581Z" level=warning msg="cleaning up after shim disconnected" id=580201476a5727698aff1badea608a0cec026958fd3be43273788b9e76524140 namespace=k8s.io
Feb 12 22:01:03.758505 env[1643]: time="2024-02-12T22:01:03.756333767Z" level=info msg="cleaning up dead shim"
Feb 12 22:01:03.794574 env[1643]: time="2024-02-12T22:01:03.794461179Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:01:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5012 runtime=io.containerd.runc.v2\n"
Feb 12 22:01:04.011708 systemd[1]: run-containerd-runc-k8s.io-580201476a5727698aff1badea608a0cec026958fd3be43273788b9e76524140-runc.hqSzlL.mount: Deactivated successfully.
Feb 12 22:01:04.011844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-580201476a5727698aff1badea608a0cec026958fd3be43273788b9e76524140-rootfs.mount: Deactivated successfully.
Feb 12 22:01:04.486360 env[1643]: time="2024-02-12T22:01:04.486313826Z" level=info msg="CreateContainer within sandbox \"3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 22:01:04.546905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3237019729.mount: Deactivated successfully.
Feb 12 22:01:04.556387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196837385.mount: Deactivated successfully.
Feb 12 22:01:04.561477 env[1643]: time="2024-02-12T22:01:04.561390896Z" level=info msg="CreateContainer within sandbox \"3ef2f51ed1e70a5f23a8c7e6e41ec5b54234e8638760167d2e09fde8f9a858ac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8b6c552687e2f5528d466645e040f16fd865b0de786799e93ce338f77eeab30f\""
Feb 12 22:01:04.562645 env[1643]: time="2024-02-12T22:01:04.562597946Z" level=info msg="StartContainer for \"8b6c552687e2f5528d466645e040f16fd865b0de786799e93ce338f77eeab30f\""
Feb 12 22:01:04.596532 systemd[1]: Started cri-containerd-8b6c552687e2f5528d466645e040f16fd865b0de786799e93ce338f77eeab30f.scope.
Feb 12 22:01:04.645352 env[1643]: time="2024-02-12T22:01:04.645307083Z" level=info msg="StartContainer for \"8b6c552687e2f5528d466645e040f16fd865b0de786799e93ce338f77eeab30f\" returns successfully"
Feb 12 22:01:04.750238 kubelet[2625]: W0212 22:01:04.750110    2625 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31e3a01a_ebf4_41e5_afa5_ce0ffaebdffb.slice/cri-containerd-0f502647e7b394ab099de88862471542f49005baa578e34e12096ba23d851030.scope WatchSource:0}: task 0f502647e7b394ab099de88862471542f49005baa578e34e12096ba23d851030 not found: not found
Feb 12 22:01:05.690475 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 22:01:07.878611 kubelet[2625]: W0212 22:01:07.878564    2625 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31e3a01a_ebf4_41e5_afa5_ce0ffaebdffb.slice/cri-containerd-bc6316deaf05bb8936d6f20fb730a70e60b1486d67c1c5fe5aefc00867a14912.scope WatchSource:0}: task bc6316deaf05bb8936d6f20fb730a70e60b1486d67c1c5fe5aefc00867a14912 not found: not found
Feb 12 22:01:09.417547 (udev-worker)[5579]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 22:01:09.423859 (udev-worker)[5580]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 22:01:09.425433 systemd-networkd[1461]: lxc_health: Link UP
Feb 12 22:01:09.436862 systemd-networkd[1461]: lxc_health: Gained carrier
Feb 12 22:01:09.437847 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 22:01:09.682158 systemd[1]: run-containerd-runc-k8s.io-8b6c552687e2f5528d466645e040f16fd865b0de786799e93ce338f77eeab30f-runc.HvEle5.mount: Deactivated successfully.
Feb 12 22:01:10.870229 kubelet[2625]: I0212 22:01:10.870183    2625 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8rk4n" podStartSLOduration=10.868066424 podCreationTimestamp="2024-02-12 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 22:01:05.541667098 +0000 UTC m=+136.008359295" watchObservedRunningTime="2024-02-12 22:01:10.868066424 +0000 UTC m=+141.334758654"
Feb 12 22:01:10.992792 kubelet[2625]: W0212 22:01:10.992741    2625 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31e3a01a_ebf4_41e5_afa5_ce0ffaebdffb.slice/cri-containerd-6f33968e6946991d6c6b516564fde429f97d006fcfc9ff3d7c960a02db9873a0.scope WatchSource:0}: task 6f33968e6946991d6c6b516564fde429f97d006fcfc9ff3d7c960a02db9873a0 not found: not found
Feb 12 22:01:11.207921 systemd-networkd[1461]: lxc_health: Gained IPv6LL
Feb 12 22:01:12.023350 systemd[1]: run-containerd-runc-k8s.io-8b6c552687e2f5528d466645e040f16fd865b0de786799e93ce338f77eeab30f-runc.Yg67lK.mount: Deactivated successfully.
Feb 12 22:01:14.122567 kubelet[2625]: W0212 22:01:14.122526    2625 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31e3a01a_ebf4_41e5_afa5_ce0ffaebdffb.slice/cri-containerd-580201476a5727698aff1badea608a0cec026958fd3be43273788b9e76524140.scope WatchSource:0}: task 580201476a5727698aff1badea608a0cec026958fd3be43273788b9e76524140 not found: not found
Feb 12 22:01:14.334669 systemd[1]: run-containerd-runc-k8s.io-8b6c552687e2f5528d466645e040f16fd865b0de786799e93ce338f77eeab30f-runc.8hh51y.mount: Deactivated successfully.
Feb 12 22:01:14.511535 sshd[4649]: pam_unix(sshd:session): session closed for user core
Feb 12 22:01:14.515857 systemd[1]: sshd@27-172.31.21.40:22-139.178.89.65:38352.service: Deactivated successfully.
Feb 12 22:01:14.517181 systemd[1]: session-28.scope: Deactivated successfully.
Feb 12 22:01:14.519151 systemd-logind[1633]: Session 28 logged out. Waiting for processes to exit.
Feb 12 22:01:14.520915 systemd-logind[1633]: Removed session 28.
Feb 12 22:01:28.981002 systemd[1]: cri-containerd-b4abe9f19327d745f008305e9bf8d5e770483f5377f737037dc2baec0fb0e1bd.scope: Deactivated successfully.
Feb 12 22:01:28.981318 systemd[1]: cri-containerd-b4abe9f19327d745f008305e9bf8d5e770483f5377f737037dc2baec0fb0e1bd.scope: Consumed 4.140s CPU time.
Feb 12 22:01:29.006423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4abe9f19327d745f008305e9bf8d5e770483f5377f737037dc2baec0fb0e1bd-rootfs.mount: Deactivated successfully.
Feb 12 22:01:29.033684 env[1643]: time="2024-02-12T22:01:29.033632927Z" level=info msg="shim disconnected" id=b4abe9f19327d745f008305e9bf8d5e770483f5377f737037dc2baec0fb0e1bd
Feb 12 22:01:29.033684 env[1643]: time="2024-02-12T22:01:29.033681941Z" level=warning msg="cleaning up after shim disconnected" id=b4abe9f19327d745f008305e9bf8d5e770483f5377f737037dc2baec0fb0e1bd namespace=k8s.io
Feb 12 22:01:29.034294 env[1643]: time="2024-02-12T22:01:29.033694930Z" level=info msg="cleaning up dead shim"
Feb 12 22:01:29.044687 env[1643]: time="2024-02-12T22:01:29.044636626Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:01:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5698 runtime=io.containerd.runc.v2\n"
Feb 12 22:01:29.561203 kubelet[2625]: I0212 22:01:29.561159    2625 scope.go:115] "RemoveContainer" containerID="b4abe9f19327d745f008305e9bf8d5e770483f5377f737037dc2baec0fb0e1bd"
Feb 12 22:01:29.565997 env[1643]: time="2024-02-12T22:01:29.565945415Z" level=info msg="CreateContainer within sandbox \"8b559954e1937a01f6d89d2e0040bfb8c0b8a886ecbafdeb622daff9661111b3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 12 22:01:29.600406 env[1643]: time="2024-02-12T22:01:29.600304619Z" level=info msg="CreateContainer within sandbox \"8b559954e1937a01f6d89d2e0040bfb8c0b8a886ecbafdeb622daff9661111b3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4bdf1fa5459cc14f8a22c6ba9363fb1d0cbe597fca46ab06ee00a600273f66f1\""
Feb 12 22:01:29.601394 env[1643]: time="2024-02-12T22:01:29.601354889Z" level=info msg="StartContainer for \"4bdf1fa5459cc14f8a22c6ba9363fb1d0cbe597fca46ab06ee00a600273f66f1\""
Feb 12 22:01:29.666168 systemd[1]: Started cri-containerd-4bdf1fa5459cc14f8a22c6ba9363fb1d0cbe597fca46ab06ee00a600273f66f1.scope.
Feb 12 22:01:29.746699 env[1643]: time="2024-02-12T22:01:29.746633819Z" level=info msg="StartContainer for \"4bdf1fa5459cc14f8a22c6ba9363fb1d0cbe597fca46ab06ee00a600273f66f1\" returns successfully"
Feb 12 22:01:33.039514 kubelet[2625]: E0212 22:01:33.039437    2625 request.go:1092] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Feb 12 22:01:33.039996 kubelet[2625]: E0212 22:01:33.039973    2625 controller.go:193] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
Feb 12 22:01:34.444843 systemd[1]: cri-containerd-ed118ab8e2741405a7ced194621c1db50ed03500a157f262de1a08b8afb6411c.scope: Deactivated successfully.
Feb 12 22:01:34.446013 systemd[1]: cri-containerd-ed118ab8e2741405a7ced194621c1db50ed03500a157f262de1a08b8afb6411c.scope: Consumed 1.676s CPU time.
Feb 12 22:01:34.522805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed118ab8e2741405a7ced194621c1db50ed03500a157f262de1a08b8afb6411c-rootfs.mount: Deactivated successfully.
Feb 12 22:01:34.546905 env[1643]: time="2024-02-12T22:01:34.546844299Z" level=info msg="shim disconnected" id=ed118ab8e2741405a7ced194621c1db50ed03500a157f262de1a08b8afb6411c
Feb 12 22:01:34.546905 env[1643]: time="2024-02-12T22:01:34.546905980Z" level=warning msg="cleaning up after shim disconnected" id=ed118ab8e2741405a7ced194621c1db50ed03500a157f262de1a08b8afb6411c namespace=k8s.io
Feb 12 22:01:34.549028 env[1643]: time="2024-02-12T22:01:34.546918022Z" level=info msg="cleaning up dead shim"
Feb 12 22:01:34.586802 env[1643]: time="2024-02-12T22:01:34.586746355Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:01:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5756 runtime=io.containerd.runc.v2\n"
Feb 12 22:01:35.592046 kubelet[2625]: I0212 22:01:35.592009    2625 scope.go:115] "RemoveContainer" containerID="ed118ab8e2741405a7ced194621c1db50ed03500a157f262de1a08b8afb6411c"
Feb 12 22:01:35.598475 env[1643]: time="2024-02-12T22:01:35.598415894Z" level=info msg="CreateContainer within sandbox \"7619b0bbf5d8b81bb78359e1f393dac63d7d5e7328f7e750b69558bc2f72ce8b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 12 22:01:35.647593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3900339469.mount: Deactivated successfully.
Feb 12 22:01:35.659975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243010403.mount: Deactivated successfully.
Feb 12 22:01:35.671305 env[1643]: time="2024-02-12T22:01:35.671078223Z" level=info msg="CreateContainer within sandbox \"7619b0bbf5d8b81bb78359e1f393dac63d7d5e7328f7e750b69558bc2f72ce8b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9c1f3d020de215f3c5dc56b989a61eb9ecc4ee611d58b616b566a45d3fe6a43c\""
Feb 12 22:01:35.674726 env[1643]: time="2024-02-12T22:01:35.674560254Z" level=info msg="StartContainer for \"9c1f3d020de215f3c5dc56b989a61eb9ecc4ee611d58b616b566a45d3fe6a43c\""
Feb 12 22:01:35.718530 systemd[1]: Started cri-containerd-9c1f3d020de215f3c5dc56b989a61eb9ecc4ee611d58b616b566a45d3fe6a43c.scope.
Feb 12 22:01:35.815383 env[1643]: time="2024-02-12T22:01:35.815324918Z" level=info msg="StartContainer for \"9c1f3d020de215f3c5dc56b989a61eb9ecc4ee611d58b616b566a45d3fe6a43c\" returns successfully"