Sep 13 00:46:40.089765 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:46:40.089801 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:46:40.089818 kernel: BIOS-provided physical RAM map:
Sep 13 00:46:40.089828 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:46:40.089840 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 13 00:46:40.089851 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 13 00:46:40.089865 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 13 00:46:40.089878 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 13 00:46:40.089893 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 13 00:46:40.089905 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 13 00:46:40.089917 kernel: NX (Execute Disable) protection: active
Sep 13 00:46:40.089929 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
Sep 13 00:46:40.089941 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
Sep 13 00:46:40.089953 kernel: extended physical RAM map:
Sep 13 00:46:40.089971 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:46:40.089984 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable
Sep 13 00:46:40.089997 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable
Sep 13 00:46:40.090010 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable
Sep 13 00:46:40.090023 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 13 00:46:40.090036 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 13 00:46:40.090049 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 13 00:46:40.090062 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 13 00:46:40.090075 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 13 00:46:40.090088 kernel: efi: EFI v2.70 by EDK II
Sep 13 00:46:40.090104 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98
Sep 13 00:46:40.090117 kernel: SMBIOS 2.7 present.
Sep 13 00:46:40.090129 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 13 00:46:40.090142 kernel: Hypervisor detected: KVM
Sep 13 00:46:40.090155 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:46:40.090168 kernel: kvm-clock: cpu 0, msr 2519f001, primary cpu clock
Sep 13 00:46:40.090181 kernel: kvm-clock: using sched offset of 4354525935 cycles
Sep 13 00:46:40.090194 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:46:40.090208 kernel: tsc: Detected 2499.998 MHz processor
Sep 13 00:46:40.090222 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:46:40.095281 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:46:40.095322 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 13 00:46:40.095337 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:46:40.095351 kernel: Using GB pages for direct mapping
Sep 13 00:46:40.095364 kernel: Secure boot disabled
Sep 13 00:46:40.095378 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:46:40.095397 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 13 00:46:40.095412 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 13 00:46:40.095429 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 13 00:46:40.095443 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 13 00:46:40.095458 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 13 00:46:40.095472 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 13 00:46:40.095487 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 13 00:46:40.095502 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 13 00:46:40.095516 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 13 00:46:40.095533 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 13 00:46:40.095547 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:46:40.095562 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:46:40.095576 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 13 00:46:40.095590 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 13 00:46:40.095605 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 13 00:46:40.095620 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 13 00:46:40.095634 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 13 00:46:40.095648 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 13 00:46:40.095666 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 13 00:46:40.095680 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 13 00:46:40.095694 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 13 00:46:40.095709 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 13 00:46:40.095723 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 13 00:46:40.095737 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 13 00:46:40.095751 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:46:40.095765 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:46:40.095779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 13 00:46:40.095796 kernel: NUMA: Initialized distance table, cnt=1
Sep 13 00:46:40.095811 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Sep 13 00:46:40.095826 kernel: Zone ranges:
Sep 13 00:46:40.095841 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:46:40.095855 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 13 00:46:40.095869 kernel: Normal empty
Sep 13 00:46:40.095883 kernel: Movable zone start for each node
Sep 13 00:46:40.095898 kernel: Early memory node ranges
Sep 13 00:46:40.095912 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:46:40.095929 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 13 00:46:40.095943 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 13 00:46:40.095957 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 13 00:46:40.095972 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:46:40.095986 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:46:40.096001 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 13 00:46:40.096015 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 13 00:46:40.096030 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 13 00:46:40.096044 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:46:40.096061 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 13 00:46:40.096076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:46:40.096090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:46:40.096105 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:46:40.096119 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:46:40.096133 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:46:40.096148 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:46:40.096162 kernel: TSC deadline timer available
Sep 13 00:46:40.096194 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:46:40.096209 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 13 00:46:40.096227 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:46:40.096256 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:46:40.096271 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:46:40.096285 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 13 00:46:40.096300 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 13 00:46:40.096315 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:46:40.096329 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0
Sep 13 00:46:40.096344 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:46:40.096358 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:46:40.096375 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Sep 13 00:46:40.096389 kernel: Policy zone: DMA32
Sep 13 00:46:40.096406 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:46:40.096422 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:46:40.096436 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:46:40.096451 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:46:40.096466 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:46:40.096483 kernel: Memory: 1876640K/2037804K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 160904K reserved, 0K cma-reserved)
Sep 13 00:46:40.096498 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:46:40.096513 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:46:40.096527 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:46:40.096542 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:46:40.096556 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:46:40.096573 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:46:40.096601 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:46:40.096616 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:46:40.096632 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:46:40.096647 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:46:40.097193 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:46:40.099055 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:46:40.100569 kernel: random: crng init done
Sep 13 00:46:40.101754 kernel: Console: colour dummy device 80x25
Sep 13 00:46:40.101771 kernel: printk: console [tty0] enabled
Sep 13 00:46:40.101783 kernel: printk: console [ttyS0] enabled
Sep 13 00:46:40.101795 kernel: ACPI: Core revision 20210730
Sep 13 00:46:40.101808 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 13 00:46:40.101825 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:46:40.101837 kernel: x2apic enabled
Sep 13 00:46:40.101848 kernel: Switched APIC routing to physical x2apic.
Sep 13 00:46:40.101861 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 13 00:46:40.101873 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 13 00:46:40.101884 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:46:40.101896 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:46:40.101911 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:46:40.101922 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:46:40.101934 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:46:40.101947 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 13 00:46:40.101960 kernel: RETBleed: Vulnerable
Sep 13 00:46:40.101972 kernel: Speculative Store Bypass: Vulnerable
Sep 13 00:46:40.101985 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:46:40.101999 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:46:40.102012 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 13 00:46:40.102025 kernel: active return thunk: its_return_thunk
Sep 13 00:46:40.102037 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:46:40.102052 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:46:40.102066 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:46:40.102080 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:46:40.102094 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 13 00:46:40.102106 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 13 00:46:40.102120 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 13 00:46:40.102133 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 13 00:46:40.102145 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 13 00:46:40.102157 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 13 00:46:40.102170 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:46:40.102183 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 13 00:46:40.102200 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 13 00:46:40.102214 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 13 00:46:40.102230 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 13 00:46:40.102243 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 13 00:46:40.108312 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 13 00:46:40.108329 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 13 00:46:40.108345 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:46:40.108360 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:46:40.108375 kernel: LSM: Security Framework initializing
Sep 13 00:46:40.108387 kernel: SELinux: Initializing.
Sep 13 00:46:40.108400 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:46:40.108421 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:46:40.108433 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Sep 13 00:46:40.108456 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 13 00:46:40.108468 kernel: signal: max sigframe size: 3632
Sep 13 00:46:40.108480 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:46:40.108492 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:46:40.108504 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:46:40.108517 kernel: x86: Booting SMP configuration:
Sep 13 00:46:40.108533 kernel: .... node #0, CPUs: #1
Sep 13 00:46:40.108548 kernel: kvm-clock: cpu 1, msr 2519f041, secondary cpu clock
Sep 13 00:46:40.108567 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0
Sep 13 00:46:40.108584 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 13 00:46:40.108602 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:46:40.108617 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:46:40.108632 kernel: smpboot: Max logical packages: 1
Sep 13 00:46:40.108648 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 13 00:46:40.108663 kernel: devtmpfs: initialized
Sep 13 00:46:40.108679 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:46:40.108698 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 13 00:46:40.108713 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:46:40.108729 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:46:40.108745 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:46:40.108760 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:46:40.108775 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:46:40.108791 kernel: audit: type=2000 audit(1757724399.680:1): state=initialized audit_enabled=0 res=1
Sep 13 00:46:40.108806 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:46:40.108822 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:46:40.108840 kernel: cpuidle: using governor menu
Sep 13 00:46:40.108855 kernel: ACPI: bus type PCI registered
Sep 13 00:46:40.108870 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:46:40.108886 kernel: dca service started, version 1.12.1
Sep 13 00:46:40.108902 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:46:40.108917 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:46:40.108933 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:46:40.108948 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:46:40.108964 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:46:40.108983 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:46:40.108998 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:46:40.109013 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:46:40.109029 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:46:40.109044 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:46:40.109059 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 13 00:46:40.109074 kernel: ACPI: Interpreter enabled
Sep 13 00:46:40.109090 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:46:40.109105 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:46:40.109123 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:46:40.109139 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:46:40.109154 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:46:40.109400 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:46:40.109543 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 13 00:46:40.109563 kernel: acpiphp: Slot [3] registered
Sep 13 00:46:40.109579 kernel: acpiphp: Slot [4] registered
Sep 13 00:46:40.109595 kernel: acpiphp: Slot [5] registered
Sep 13 00:46:40.109614 kernel: acpiphp: Slot [6] registered
Sep 13 00:46:40.109629 kernel: acpiphp: Slot [7] registered
Sep 13 00:46:40.109644 kernel: acpiphp: Slot [8] registered
Sep 13 00:46:40.109659 kernel: acpiphp: Slot [9] registered
Sep 13 00:46:40.109674 kernel: acpiphp: Slot [10] registered
Sep 13 00:46:40.109689 kernel: acpiphp: Slot [11] registered
Sep 13 00:46:40.109704 kernel: acpiphp: Slot [12] registered
Sep 13 00:46:40.109719 kernel: acpiphp: Slot [13] registered
Sep 13 00:46:40.109734 kernel: acpiphp: Slot [14] registered
Sep 13 00:46:40.109751 kernel: acpiphp: Slot [15] registered
Sep 13 00:46:40.109766 kernel: acpiphp: Slot [16] registered
Sep 13 00:46:40.109780 kernel: acpiphp: Slot [17] registered
Sep 13 00:46:40.109795 kernel: acpiphp: Slot [18] registered
Sep 13 00:46:40.109810 kernel: acpiphp: Slot [19] registered
Sep 13 00:46:40.109825 kernel: acpiphp: Slot [20] registered
Sep 13 00:46:40.109839 kernel: acpiphp: Slot [21] registered
Sep 13 00:46:40.109854 kernel: acpiphp: Slot [22] registered
Sep 13 00:46:40.109869 kernel: acpiphp: Slot [23] registered
Sep 13 00:46:40.109884 kernel: acpiphp: Slot [24] registered
Sep 13 00:46:40.109901 kernel: acpiphp: Slot [25] registered
Sep 13 00:46:40.109916 kernel: acpiphp: Slot [26] registered
Sep 13 00:46:40.109931 kernel: acpiphp: Slot [27] registered
Sep 13 00:46:40.109946 kernel: acpiphp: Slot [28] registered
Sep 13 00:46:40.109961 kernel: acpiphp: Slot [29] registered
Sep 13 00:46:40.109976 kernel: acpiphp: Slot [30] registered
Sep 13 00:46:40.109991 kernel: acpiphp: Slot [31] registered
Sep 13 00:46:40.110006 kernel: PCI host bridge to bus 0000:00
Sep 13 00:46:40.110141 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:46:40.120621 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:46:40.120790 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:46:40.120911 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:46:40.121026 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:46:40.121171 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:46:40.121373 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:46:40.121531 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:46:40.121665 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 13 00:46:40.121802 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 13 00:46:40.121937 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 13 00:46:40.122069 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 13 00:46:40.122199 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 13 00:46:40.122345 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 13 00:46:40.122479 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 13 00:46:40.122606 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 13 00:46:40.122742 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 13 00:46:40.122871 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Sep 13 00:46:40.124945 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 13 00:46:40.125101 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Sep 13 00:46:40.125283 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:46:40.125436 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 13 00:46:40.125570 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Sep 13 00:46:40.125712 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 13 00:46:40.125845 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Sep 13 00:46:40.125865 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:46:40.125882 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:46:40.125898 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:46:40.125916 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:46:40.125931 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:46:40.125946 kernel: iommu: Default domain type: Translated
Sep 13 00:46:40.125962 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:46:40.126095 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 13 00:46:40.126229 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:46:40.132149 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 13 00:46:40.132200 kernel: vgaarb: loaded
Sep 13 00:46:40.132223 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:46:40.132267 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:46:40.132283 kernel: PTP clock support registered
Sep 13 00:46:40.132299 kernel: Registered efivars operations
Sep 13 00:46:40.132315 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:46:40.132330 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:46:40.132346 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff]
Sep 13 00:46:40.132361 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 13 00:46:40.132377 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 13 00:46:40.132395 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 13 00:46:40.132411 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 13 00:46:40.132426 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:46:40.132847 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:46:40.134145 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:46:40.134163 kernel: pnp: PnP ACPI init
Sep 13 00:46:40.134176 kernel: pnp: PnP ACPI: found 5 devices
Sep 13 00:46:40.134191 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:46:40.134205 kernel: NET: Registered PF_INET protocol family
Sep 13 00:46:40.134224 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:46:40.145301 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:46:40.145335 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:46:40.145351 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:46:40.145368 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 13 00:46:40.145383 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:46:40.145399 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:46:40.145415 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:46:40.145430 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:46:40.145451 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:46:40.145642 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:46:40.145765 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:46:40.145886 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:46:40.146006 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:46:40.146124 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:46:40.148372 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:46:40.148557 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Sep 13 00:46:40.148584 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:46:40.148602 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:46:40.148618 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 13 00:46:40.148633 kernel: clocksource: Switched to clocksource tsc
Sep 13 00:46:40.148649 kernel: Initialise system trusted keyrings
Sep 13 00:46:40.148665 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:46:40.148680 kernel: Key type asymmetric registered
Sep 13 00:46:40.148695 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:46:40.148711 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:46:40.148729 kernel: io scheduler mq-deadline registered
Sep 13 00:46:40.148744 kernel: io scheduler kyber registered
Sep 13 00:46:40.148760 kernel: io scheduler bfq registered
Sep 13 00:46:40.148775 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:46:40.148790 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:46:40.148805 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:46:40.148821 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:46:40.148835 kernel: i8042: Warning: Keylock active
Sep 13 00:46:40.148850 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:46:40.148868 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:46:40.149012 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 13 00:46:40.149139 kernel: rtc_cmos 00:00: registered as rtc0
Sep 13 00:46:40.152354 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:46:39 UTC (1757724399)
Sep 13 00:46:40.152511 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 13 00:46:40.152533 kernel: intel_pstate: CPU model not supported
Sep 13 00:46:40.152550 kernel: efifb: probing for efifb
Sep 13 00:46:40.152571 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Sep 13 00:46:40.152586 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 13 00:46:40.152602 kernel: efifb: scrolling: redraw
Sep 13 00:46:40.152618 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:46:40.152633 kernel: Console: switching to colour frame buffer device 100x37
Sep 13 00:46:40.152650 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:46:40.152689 kernel: pstore: Registered efi as persistent store backend
Sep 13 00:46:40.152708 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:46:40.152724 kernel: Segment Routing with IPv6
Sep 13 00:46:40.152744 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:46:40.152760 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:46:40.152776 kernel: Key type dns_resolver registered
Sep 13 00:46:40.152793 kernel: IPI shorthand broadcast: enabled
Sep 13 00:46:40.152809 kernel: sched_clock: Marking stable (427742366, 180509478)->(702167267, -93915423)
Sep 13 00:46:40.152825 kernel: registered taskstats version 1
Sep 13 00:46:40.152841 kernel: Loading compiled-in X.509 certificates
Sep 13 00:46:40.152857 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:46:40.152874 kernel: Key type .fscrypt registered
Sep 13 00:46:40.152889 kernel: Key type fscrypt-provisioning registered
Sep 13 00:46:40.152908 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:46:40.152925 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:46:40.152941 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:46:40.152957 kernel: ima: No architecture policies found
Sep 13 00:46:40.152973 kernel: clk: Disabling unused clocks
Sep 13 00:46:40.152989 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:46:40.153006 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:46:40.153022 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:46:40.153041 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:46:40.153058 kernel: Run /init as init process
Sep 13 00:46:40.153074 kernel: with arguments:
Sep 13 00:46:40.153090 kernel: /init
Sep 13 00:46:40.153106 kernel: with environment:
Sep 13 00:46:40.153122 kernel: HOME=/
Sep 13 00:46:40.153138 kernel: TERM=linux
Sep 13 00:46:40.153153 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:46:40.153174 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:46:40.153198 systemd[1]: Detected virtualization amazon.
Sep 13 00:46:40.153215 systemd[1]: Detected architecture x86-64.
Sep 13 00:46:40.153232 systemd[1]: Running in initrd.
Sep 13 00:46:40.153283 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:46:40.153300 systemd[1]: Hostname set to .
Sep 13 00:46:40.153317 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:46:40.153335 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:46:40.153355 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:46:40.153372 systemd[1]: Reached target cryptsetup.target. Sep 13 00:46:40.153388 systemd[1]: Reached target paths.target. Sep 13 00:46:40.153405 systemd[1]: Reached target slices.target. Sep 13 00:46:40.153423 systemd[1]: Reached target swap.target. Sep 13 00:46:40.153439 systemd[1]: Reached target timers.target. Sep 13 00:46:40.153461 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:46:40.153477 systemd[1]: Listening on iscsid.socket. Sep 13 00:46:40.153494 systemd[1]: Listening on iscsiuio.socket. Sep 13 00:46:40.153512 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:46:40.153529 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:46:40.153546 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:46:40.153563 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:46:40.153583 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:46:40.153600 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:46:40.153617 systemd[1]: Reached target sockets.target. Sep 13 00:46:40.153634 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:46:40.153651 systemd[1]: Finished network-cleanup.service. Sep 13 00:46:40.153668 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:46:40.153686 systemd[1]: Starting systemd-journald.service... Sep 13 00:46:40.153703 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:46:40.153720 systemd[1]: Starting systemd-resolved.service... Sep 13 00:46:40.153739 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 00:46:40.153756 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:46:40.153776 systemd[1]: Finished systemd-fsck-usr.service. 
Sep 13 00:46:40.153794 kernel: audit: type=1130 audit(1757724400.093:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.153811 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 00:46:40.153829 kernel: audit: type=1130 audit(1757724400.112:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.153846 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 00:46:40.153863 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:46:40.153884 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:46:40.153907 systemd-journald[185]: Journal started Sep 13 00:46:40.153982 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2f5ca0773532d0d7a50738c237bb56) is 4.8M, max 38.3M, 33.5M free. Sep 13 00:46:40.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.086265 systemd-modules-load[186]: Inserted module 'overlay' Sep 13 00:46:40.171277 systemd[1]: Started systemd-journald.service. Sep 13 00:46:40.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:46:40.172261 kernel: audit: type=1130 audit(1757724400.168:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.182153 systemd-resolved[187]: Positive Trust Anchors: Sep 13 00:46:40.193544 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:46:40.182417 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:46:40.182473 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:46:40.228772 kernel: audit: type=1130 audit(1757724400.198:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.228810 kernel: Bridge firewalling registered Sep 13 00:46:40.228830 kernel: audit: type=1130 audit(1757724400.214:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:46:40.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.186723 systemd-resolved[187]: Defaulting to hostname 'linux'. Sep 13 00:46:40.198605 systemd-modules-load[186]: Inserted module 'br_netfilter' Sep 13 00:46:40.245757 kernel: audit: type=1130 audit(1757724400.229:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.245792 kernel: SCSI subsystem initialized Sep 13 00:46:40.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.198838 systemd[1]: Started systemd-resolved.service. Sep 13 00:46:40.215062 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:46:40.230445 systemd[1]: Reached target nss-lookup.target. Sep 13 00:46:40.242225 systemd[1]: Starting dracut-cmdline.service... Sep 13 00:46:40.261396 dracut-cmdline[202]: dracut-dracut-053 Sep 13 00:46:40.273288 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 13 00:46:40.273324 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:46:40.273344 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:46:40.273365 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:46:40.281105 systemd-modules-load[186]: Inserted module 'dm_multipath' Sep 13 00:46:40.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.292358 kernel: audit: type=1130 audit(1757724400.284:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.283217 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:46:40.293764 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:46:40.305357 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:46:40.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.316277 kernel: audit: type=1130 audit(1757724400.306:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:46:40.363277 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:46:40.382271 kernel: iscsi: registered transport (tcp) Sep 13 00:46:40.408598 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:46:40.408680 kernel: QLogic iSCSI HBA Driver Sep 13 00:46:40.443500 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:46:40.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.445810 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:46:40.455410 kernel: audit: type=1130 audit(1757724400.444:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.505304 kernel: raid6: avx512x4 gen() 17834 MB/s Sep 13 00:46:40.523285 kernel: raid6: avx512x4 xor() 7792 MB/s Sep 13 00:46:40.541280 kernel: raid6: avx512x2 gen() 17677 MB/s Sep 13 00:46:40.559279 kernel: raid6: avx512x2 xor() 24301 MB/s Sep 13 00:46:40.577270 kernel: raid6: avx512x1 gen() 17554 MB/s Sep 13 00:46:40.595269 kernel: raid6: avx512x1 xor() 21812 MB/s Sep 13 00:46:40.613269 kernel: raid6: avx2x4 gen() 17516 MB/s Sep 13 00:46:40.631267 kernel: raid6: avx2x4 xor() 7118 MB/s Sep 13 00:46:40.649268 kernel: raid6: avx2x2 gen() 17374 MB/s Sep 13 00:46:40.667270 kernel: raid6: avx2x2 xor() 17968 MB/s Sep 13 00:46:40.685264 kernel: raid6: avx2x1 gen() 13480 MB/s Sep 13 00:46:40.703267 kernel: raid6: avx2x1 xor() 15755 MB/s Sep 13 00:46:40.721268 kernel: raid6: sse2x4 gen() 9508 MB/s Sep 13 00:46:40.739267 kernel: raid6: sse2x4 xor() 5956 MB/s Sep 13 00:46:40.757270 kernel: raid6: sse2x2 gen() 10508 MB/s Sep 13 00:46:40.775269 kernel: raid6: sse2x2 xor() 6087 MB/s Sep 13 00:46:40.793272 kernel: raid6: sse2x1 gen() 9391 MB/s Sep 13 00:46:40.812351 kernel: raid6: sse2x1 xor() 4791 MB/s Sep 13 
00:46:40.812402 kernel: raid6: using algorithm avx512x4 gen() 17834 MB/s Sep 13 00:46:40.812425 kernel: raid6: .... xor() 7792 MB/s, rmw enabled Sep 13 00:46:40.813937 kernel: raid6: using avx512x2 recovery algorithm Sep 13 00:46:40.831278 kernel: xor: automatically using best checksumming function avx Sep 13 00:46:40.936279 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:46:40.946098 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:46:40.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.947000 audit: BPF prog-id=7 op=LOAD Sep 13 00:46:40.947000 audit: BPF prog-id=8 op=LOAD Sep 13 00:46:40.947882 systemd[1]: Starting systemd-udevd.service... Sep 13 00:46:40.961522 systemd-udevd[384]: Using default interface naming scheme 'v252'. Sep 13 00:46:40.966922 systemd[1]: Started systemd-udevd.service. Sep 13 00:46:40.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:40.971701 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:46:40.989402 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Sep 13 00:46:41.022292 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:46:41.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:41.023940 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:46:41.068584 systemd[1]: Finished systemd-udev-trigger.service. 
Sep 13 00:46:41.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:41.128691 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 13 00:46:41.173760 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 13 00:46:41.173933 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:46:41.173962 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 13 00:46:41.174113 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:2b:13:36:13:55 Sep 13 00:46:41.176611 (udev-worker)[435]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:46:41.184267 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 13 00:46:41.184719 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:46:41.190224 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 13 00:46:41.190385 kernel: AES CTR mode by8 optimization enabled Sep 13 00:46:41.201264 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 13 00:46:41.210775 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:46:41.210857 kernel: GPT:9289727 != 16777215 Sep 13 00:46:41.210878 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:46:41.213201 kernel: GPT:9289727 != 16777215 Sep 13 00:46:41.217094 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:46:41.217161 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:46:41.286267 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (437) Sep 13 00:46:41.337179 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:46:41.379442 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:46:41.393994 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Sep 13 00:46:41.399180 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:46:41.400127 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:46:41.403299 systemd[1]: Starting disk-uuid.service... Sep 13 00:46:41.411547 disk-uuid[593]: Primary Header is updated. Sep 13 00:46:41.411547 disk-uuid[593]: Secondary Entries is updated. Sep 13 00:46:41.411547 disk-uuid[593]: Secondary Header is updated. Sep 13 00:46:41.422279 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:46:41.430279 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:46:41.437273 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:46:42.437067 disk-uuid[594]: The operation has completed successfully. Sep 13 00:46:42.437867 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:46:42.569634 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:46:42.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:42.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:42.569757 systemd[1]: Finished disk-uuid.service. Sep 13 00:46:42.580624 systemd[1]: Starting verity-setup.service... Sep 13 00:46:42.607280 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:46:42.702778 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:46:42.704988 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:46:42.707698 systemd[1]: Finished verity-setup.service. Sep 13 00:46:42.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:46:42.797516 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:46:42.797995 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:46:42.798768 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:46:42.799494 systemd[1]: Starting ignition-setup.service... Sep 13 00:46:42.802588 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:46:42.825571 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:46:42.825628 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:46:42.825641 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 13 00:46:42.836277 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:46:42.855015 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:46:42.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:42.866343 systemd[1]: Finished ignition-setup.service. Sep 13 00:46:42.867918 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:46:42.899864 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:46:42.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:42.901000 audit: BPF prog-id=9 op=LOAD Sep 13 00:46:42.902580 systemd[1]: Starting systemd-networkd.service... Sep 13 00:46:42.927468 systemd-networkd[1105]: lo: Link UP Sep 13 00:46:42.927485 systemd-networkd[1105]: lo: Gained carrier Sep 13 00:46:42.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 00:46:42.928131 systemd-networkd[1105]: Enumeration completed Sep 13 00:46:42.928273 systemd[1]: Started systemd-networkd.service. Sep 13 00:46:42.928589 systemd-networkd[1105]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:46:42.931543 systemd[1]: Reached target network.target. Sep 13 00:46:42.931701 systemd-networkd[1105]: eth0: Link UP Sep 13 00:46:42.931706 systemd-networkd[1105]: eth0: Gained carrier Sep 13 00:46:42.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:42.933520 systemd[1]: Starting iscsiuio.service... Sep 13 00:46:42.942153 systemd[1]: Started iscsiuio.service. Sep 13 00:46:42.944591 systemd[1]: Starting iscsid.service... Sep 13 00:46:42.949983 iscsid[1110]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:46:42.949983 iscsid[1110]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 00:46:42.949983 iscsid[1110]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:46:42.949983 iscsid[1110]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:46:42.949983 iscsid[1110]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:46:42.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:46:42.961485 iscsid[1110]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:46:42.952309 systemd[1]: Started iscsid.service. Sep 13 00:46:42.953379 systemd-networkd[1105]: eth0: DHCPv4 address 172.31.24.139/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:46:42.958541 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:46:42.974755 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:46:42.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:42.975744 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:46:42.976993 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:46:42.978216 systemd[1]: Reached target remote-fs.target. Sep 13 00:46:42.980830 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:46:42.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:42.991479 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:46:43.441912 ignition[1065]: Ignition 2.14.0 Sep 13 00:46:43.441925 ignition[1065]: Stage: fetch-offline Sep 13 00:46:43.442037 ignition[1065]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:46:43.442069 ignition[1065]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:46:43.459611 ignition[1065]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:46:43.460309 ignition[1065]: Ignition finished successfully Sep 13 00:46:43.462463 systemd[1]: Finished ignition-fetch-offline.service. 
Sep 13 00:46:43.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:43.464824 systemd[1]: Starting ignition-fetch.service... Sep 13 00:46:43.476043 ignition[1129]: Ignition 2.14.0 Sep 13 00:46:43.476061 ignition[1129]: Stage: fetch Sep 13 00:46:43.476304 ignition[1129]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:46:43.476339 ignition[1129]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:46:43.485307 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:46:43.486275 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:46:43.530830 ignition[1129]: INFO : PUT result: OK Sep 13 00:46:43.562559 ignition[1129]: DEBUG : parsed url from cmdline: "" Sep 13 00:46:43.562559 ignition[1129]: INFO : no config URL provided Sep 13 00:46:43.562559 ignition[1129]: INFO : reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:46:43.562559 ignition[1129]: INFO : no config at "/usr/lib/ignition/user.ign" Sep 13 00:46:43.562559 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:46:43.568383 ignition[1129]: INFO : PUT result: OK Sep 13 00:46:43.569206 ignition[1129]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 13 00:46:43.570674 ignition[1129]: INFO : GET result: OK Sep 13 00:46:43.571625 ignition[1129]: DEBUG : parsing config with SHA512: e257cdecb87d718bd05afd45b7b676b77bc0f57db70bf1eb3079698ace8063ab9eaa459d32209de5569694f8641c901c7951a3501989b8e5e9c379479b5badb4 Sep 13 00:46:43.577745 unknown[1129]: fetched base config from "system" Sep 13 00:46:43.579088 unknown[1129]: fetched base config from "system" Sep 13 00:46:43.579109 
unknown[1129]: fetched user config from "aws" Sep 13 00:46:43.580375 ignition[1129]: fetch: fetch complete Sep 13 00:46:43.580382 ignition[1129]: fetch: fetch passed Sep 13 00:46:43.580450 ignition[1129]: Ignition finished successfully Sep 13 00:46:43.583153 systemd[1]: Finished ignition-fetch.service. Sep 13 00:46:43.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:43.585203 systemd[1]: Starting ignition-kargs.service... Sep 13 00:46:43.596772 ignition[1135]: Ignition 2.14.0 Sep 13 00:46:43.596791 ignition[1135]: Stage: kargs Sep 13 00:46:43.596994 ignition[1135]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:46:43.597028 ignition[1135]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:46:43.604839 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:46:43.605722 ignition[1135]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:46:43.606500 ignition[1135]: INFO : PUT result: OK Sep 13 00:46:43.608569 ignition[1135]: kargs: kargs passed Sep 13 00:46:43.608643 ignition[1135]: Ignition finished successfully Sep 13 00:46:43.610660 systemd[1]: Finished ignition-kargs.service. Sep 13 00:46:43.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:43.612714 systemd[1]: Starting ignition-disks.service... 
Sep 13 00:46:43.622919 ignition[1141]: Ignition 2.14.0 Sep 13 00:46:43.622936 ignition[1141]: Stage: disks Sep 13 00:46:43.623149 ignition[1141]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:46:43.623182 ignition[1141]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:46:43.630710 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:46:43.631626 ignition[1141]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:46:43.632489 ignition[1141]: INFO : PUT result: OK Sep 13 00:46:43.634836 ignition[1141]: disks: disks passed Sep 13 00:46:43.634911 ignition[1141]: Ignition finished successfully Sep 13 00:46:43.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:43.636894 systemd[1]: Finished ignition-disks.service. Sep 13 00:46:43.637556 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:46:43.638014 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:46:43.638496 systemd[1]: Reached target local-fs.target. Sep 13 00:46:43.638909 systemd[1]: Reached target sysinit.target. Sep 13 00:46:43.639393 systemd[1]: Reached target basic.target. Sep 13 00:46:43.641165 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:46:43.682433 systemd-fsck[1149]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:46:43.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:46:43.685541 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:46:43.686959 systemd[1]: Mounting sysroot.mount... 
Sep 13 00:46:43.706727 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 00:46:43.706289 systemd[1]: Mounted sysroot.mount.
Sep 13 00:46:43.707476 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 00:46:43.710170 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 00:46:43.711674 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 13 00:46:43.712754 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:46:43.713706 systemd[1]: Reached target ignition-diskful.target.
Sep 13 00:46:43.715674 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 00:46:44.740397 systemd-networkd[1105]: eth0: Gained IPv6LL
Sep 13 00:47:38.325975 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 00:47:38.328468 systemd[1]: Starting initrd-setup-root.service...
Sep 13 00:47:38.335873 initrd-setup-root[1171]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:47:38.342445 initrd-setup-root[1179]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:47:38.350376 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1166)
Sep 13 00:47:38.358288 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:47:38.358496 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:47:38.358517 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 13 00:47:38.359910 initrd-setup-root[1188]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:47:38.382519 initrd-setup-root[1212]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:47:38.387272 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:47:38.391817 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:47:38.444375 systemd[1]: Finished initrd-setup-root.service.
Sep 13 00:47:38.457714 kernel: kauditd_printk_skb: 22 callbacks suppressed
Sep 13 00:47:38.457782 kernel: audit: type=1130 audit(1757724458.445:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:38.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:38.446416 systemd[1]: Starting ignition-mount.service...
Sep 13 00:47:38.458682 systemd[1]: Starting sysroot-boot.service...
Sep 13 00:47:38.467423 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Sep 13 00:47:38.467581 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Sep 13 00:47:38.482635 ignition[1232]: INFO : Ignition 2.14.0
Sep 13 00:47:38.484591 ignition[1232]: INFO : Stage: mount
Sep 13 00:47:38.485903 ignition[1232]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:47:38.487859 ignition[1232]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:47:38.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:38.501605 systemd[1]: Finished sysroot-boot.service.
Sep 13 00:47:38.509282 kernel: audit: type=1130 audit(1757724458.502:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:38.511303 ignition[1232]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:47:38.512302 ignition[1232]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:47:38.513783 ignition[1232]: INFO : PUT result: OK
Sep 13 00:47:38.517349 ignition[1232]: INFO : mount: mount passed
Sep 13 00:47:38.518025 ignition[1232]: INFO : Ignition finished successfully
Sep 13 00:47:38.519514 systemd[1]: Finished ignition-mount.service.
Sep 13 00:47:38.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:38.529285 kernel: audit: type=1130 audit(1757724458.520:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:38.522394 systemd[1]: Starting ignition-files.service...
Sep 13 00:47:38.536902 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 00:47:38.556260 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1242)
Sep 13 00:47:38.560393 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:47:38.560458 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:47:38.560472 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 13 00:47:38.568278 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:47:38.572028 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:47:38.582704 ignition[1261]: INFO : Ignition 2.14.0
Sep 13 00:47:38.582704 ignition[1261]: INFO : Stage: files
Sep 13 00:47:38.584217 ignition[1261]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:47:38.584217 ignition[1261]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:47:38.590101 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:47:38.590767 ignition[1261]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:47:38.591750 ignition[1261]: INFO : PUT result: OK
Sep 13 00:47:38.595462 ignition[1261]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:47:38.601970 ignition[1261]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:47:38.601970 ignition[1261]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:47:38.606169 ignition[1261]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:47:38.607464 ignition[1261]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:47:38.607464 ignition[1261]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:47:38.607142 unknown[1261]: wrote ssh authorized keys file for user: core
Sep 13 00:47:38.610531 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:47:38.610531 ignition[1261]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:47:38.680390 ignition[1261]: INFO : GET result: OK
Sep 13 00:47:38.907982 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:47:38.910330 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:47:38.910330 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:47:38.910330 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 13 00:47:38.910330 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:47:38.919657 ignition[1261]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4030228831"
Sep 13 00:47:38.921757 ignition[1261]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4030228831": device or resource busy
Sep 13 00:47:38.921757 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4030228831", trying btrfs: device or resource busy
Sep 13 00:47:38.921757 ignition[1261]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4030228831"
Sep 13 00:47:38.921757 ignition[1261]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4030228831"
Sep 13 00:47:38.921757 ignition[1261]: INFO : op(3): [started] unmounting "/mnt/oem4030228831"
Sep 13 00:47:38.921757 ignition[1261]: INFO : op(3): [finished] unmounting "/mnt/oem4030228831"
Sep 13 00:47:38.921757 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 13 00:47:38.930868 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:47:38.930868 ignition[1261]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 00:47:39.140631 ignition[1261]: INFO : GET result: OK
Sep 13 00:47:39.278168 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:47:39.278168 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:47:39.282972 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:47:39.282972 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:47:39.282972 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:47:39.282972 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:47:39.282972 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:47:39.282972 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:47:39.282972 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:47:39.282972 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:47:39.282972 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:47:39.282972 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 00:47:39.282972 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:47:39.310498 ignition[1261]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2771152238"
Sep 13 00:47:39.310498 ignition[1261]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2771152238": device or resource busy
Sep 13 00:47:39.310498 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2771152238", trying btrfs: device or resource busy
Sep 13 00:47:39.310498 ignition[1261]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2771152238"
Sep 13 00:47:39.310498 ignition[1261]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2771152238"
Sep 13 00:47:39.310498 ignition[1261]: INFO : op(6): [started] unmounting "/mnt/oem2771152238"
Sep 13 00:47:39.310498 ignition[1261]: INFO : op(6): [finished] unmounting "/mnt/oem2771152238"
Sep 13 00:47:39.310498 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 00:47:39.310498 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 13 00:47:39.310498 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:47:39.310498 ignition[1261]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem576816417"
Sep 13 00:47:39.310498 ignition[1261]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem576816417": device or resource busy
Sep 13 00:47:39.310498 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem576816417", trying btrfs: device or resource busy
Sep 13 00:47:39.310498 ignition[1261]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem576816417"
Sep 13 00:47:39.310498 ignition[1261]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem576816417"
Sep 13 00:47:39.310498 ignition[1261]: INFO : op(9): [started] unmounting "/mnt/oem576816417"
Sep 13 00:47:39.310498 ignition[1261]: INFO : op(9): [finished] unmounting "/mnt/oem576816417"
Sep 13 00:47:39.310498 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 13 00:47:39.310498 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:47:39.310498 ignition[1261]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:47:39.703579 ignition[1261]: INFO : GET result: OK
Sep 13 00:47:40.321563 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:47:40.321563 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 13 00:47:40.324758 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:47:40.329530 ignition[1261]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1069105221"
Sep 13 00:47:40.331804 ignition[1261]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1069105221": device or resource busy
Sep 13 00:47:40.331804 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1069105221", trying btrfs: device or resource busy
Sep 13 00:47:40.331804 ignition[1261]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1069105221"
Sep 13 00:47:40.331804 ignition[1261]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1069105221"
Sep 13 00:47:40.331804 ignition[1261]: INFO : op(c): [started] unmounting "/mnt/oem1069105221"
Sep 13 00:47:40.331804 ignition[1261]: INFO : op(c): [finished] unmounting "/mnt/oem1069105221"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(13): [started] processing unit "nvidia.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:47:40.331804 ignition[1261]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:47:40.420871 kernel: audit: type=1130 audit(1757724460.343:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.420905 kernel: audit: type=1130 audit(1757724460.368:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.420925 kernel: audit: type=1131 audit(1757724460.375:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.420951 kernel: audit: type=1130 audit(1757724460.388:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.421176 ignition[1261]: INFO : files: op(17): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 00:47:40.421176 ignition[1261]: INFO : files: op(17): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 00:47:40.421176 ignition[1261]: INFO : files: op(18): [started] setting preset to enabled for "amazon-ssm-agent.service"
Sep 13 00:47:40.421176 ignition[1261]: INFO : files: op(18): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Sep 13 00:47:40.421176 ignition[1261]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Sep 13 00:47:40.421176 ignition[1261]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Sep 13 00:47:40.421176 ignition[1261]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:47:40.421176 ignition[1261]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:47:40.421176 ignition[1261]: INFO : files: files passed
Sep 13 00:47:40.421176 ignition[1261]: INFO : Ignition finished successfully
Sep 13 00:47:40.456909 kernel: audit: type=1130 audit(1757724460.421:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.456948 kernel: audit: type=1131 audit(1757724460.421:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.466267 kernel: audit: type=1130 audit(1757724460.457:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.341769 systemd[1]: Finished ignition-files.service.
Sep 13 00:47:40.358541 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 00:47:40.469481 initrd-setup-root-after-ignition[1286]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:47:40.359974 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 00:47:40.361109 systemd[1]: Starting ignition-quench.service...
Sep 13 00:47:40.367028 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:47:40.367145 systemd[1]: Finished ignition-quench.service.
Sep 13 00:47:40.380629 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 00:47:40.388617 systemd[1]: Reached target ignition-complete.target.
Sep 13 00:47:40.398825 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 00:47:40.420017 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:47:40.420137 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 00:47:40.421887 systemd[1]: Reached target initrd-fs.target.
Sep 13 00:47:40.436265 systemd[1]: Reached target initrd.target.
Sep 13 00:47:40.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.438591 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 00:47:40.439985 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 00:47:40.457279 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 00:47:40.473421 systemd[1]: Starting initrd-cleanup.service...
Sep 13 00:47:40.485379 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:47:40.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.485509 systemd[1]: Finished initrd-cleanup.service.
Sep 13 00:47:40.487850 systemd[1]: Stopped target nss-lookup.target.
Sep 13 00:47:40.489903 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 00:47:40.491224 systemd[1]: Stopped target timers.target.
Sep 13 00:47:40.492470 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:47:40.492550 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 00:47:40.493740 systemd[1]: Stopped target initrd.target.
Sep 13 00:47:40.494852 systemd[1]: Stopped target basic.target.
Sep 13 00:47:40.496064 systemd[1]: Stopped target ignition-complete.target.
Sep 13 00:47:40.497208 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 00:47:40.498365 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 00:47:40.499541 systemd[1]: Stopped target remote-fs.target.
Sep 13 00:47:40.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.500654 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 00:47:40.501763 systemd[1]: Stopped target sysinit.target.
Sep 13 00:47:40.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.502791 systemd[1]: Stopped target local-fs.target.
Sep 13 00:47:40.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.503910 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 00:47:40.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.504974 systemd[1]: Stopped target swap.target.
Sep 13 00:47:40.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.506059 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:47:40.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.531485 ignition[1299]: INFO : Ignition 2.14.0
Sep 13 00:47:40.531485 ignition[1299]: INFO : Stage: umount
Sep 13 00:47:40.531485 ignition[1299]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:47:40.531485 ignition[1299]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:47:40.506142 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 00:47:40.507372 systemd[1]: Stopped target cryptsetup.target.
Sep 13 00:47:40.508441 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:47:40.508516 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 00:47:40.550908 ignition[1299]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:47:40.550908 ignition[1299]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:47:40.509704 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:47:40.553587 ignition[1299]: INFO : PUT result: OK
Sep 13 00:47:40.509765 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 00:47:40.510813 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:47:40.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.558522 ignition[1299]: INFO : umount: umount passed
Sep 13 00:47:40.558522 ignition[1299]: INFO : Ignition finished successfully
Sep 13 00:47:40.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.510869 systemd[1]: Stopped ignition-files.service.
Sep 13 00:47:40.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.513040 systemd[1]: Stopping ignition-mount.service...
Sep 13 00:47:40.515388 systemd[1]: Stopping iscsiuio.service...
Sep 13 00:47:40.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.516417 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:47:40.516487 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 00:47:40.521976 systemd[1]: Stopping sysroot-boot.service...
Sep 13 00:47:40.523051 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:47:40.523131 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 00:47:40.524138 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:47:40.524195 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 00:47:40.526200 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 00:47:40.526357 systemd[1]: Stopped iscsiuio.service.
Sep 13 00:47:40.556879 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:47:40.556997 systemd[1]: Stopped ignition-mount.service.
Sep 13 00:47:40.557763 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:47:40.557831 systemd[1]: Stopped ignition-disks.service.
Sep 13 00:47:40.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.559217 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:47:40.559298 systemd[1]: Stopped ignition-kargs.service.
Sep 13 00:47:40.560541 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:47:40.560599 systemd[1]: Stopped ignition-fetch.service.
Sep 13 00:47:40.561996 systemd[1]: Stopped target network.target.
Sep 13 00:47:40.563610 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:47:40.563691 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 00:47:40.565327 systemd[1]: Stopped target paths.target.
Sep 13 00:47:40.566402 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:47:40.570317 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 00:47:40.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.571369 systemd[1]: Stopped target slices.target.
Sep 13 00:47:40.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.591000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 00:47:40.572631 systemd[1]: Stopped target sockets.target.
Sep 13 00:47:40.573837 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:47:40.573882 systemd[1]: Closed iscsid.socket.
Sep 13 00:47:40.575084 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:47:40.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.575135 systemd[1]: Closed iscsiuio.socket.
Sep 13 00:47:40.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.576422 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:47:40.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.576488 systemd[1]: Stopped ignition-setup.service.
Sep 13 00:47:40.577934 systemd[1]: Stopping systemd-networkd.service...
Sep 13 00:47:40.579232 systemd[1]: Stopping systemd-resolved.service...
Sep 13 00:47:40.581299 systemd-networkd[1105]: eth0: DHCPv6 lease lost
Sep 13 00:47:40.605000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 00:47:40.584681 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:47:40.585425 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:47:40.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.585558 systemd[1]: Stopped systemd-resolved.service.
Sep 13 00:47:40.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.589037 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:47:40.589154 systemd[1]: Stopped systemd-networkd.service.
Sep 13 00:47:40.591716 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:47:40.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.591766 systemd[1]: Closed systemd-networkd.socket.
Sep 13 00:47:40.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.593682 systemd[1]: Stopping network-cleanup.service...
Sep 13 00:47:40.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.595969 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:47:40.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.596042 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 00:47:40.597207 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:47:40.597306 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:47:40.598481 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:47:40.598536 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 00:47:40.599849 systemd[1]: Stopping systemd-udevd.service...
Sep 13 00:47:40.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.602812 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:47:40.607412 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:47:40.607612 systemd[1]: Stopped systemd-udevd.service.
Sep 13 00:47:40.609361 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:47:40.609468 systemd[1]: Stopped network-cleanup.service.
Sep 13 00:47:40.611274 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:47:40.611327 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 00:47:40.612804 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:47:40.612848 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 00:47:40.613861 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:47:40.613921 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 00:47:40.615118 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:47:40.615170 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 00:47:40.616279 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:47:40.616333 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 00:47:40.618528 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 00:47:40.622173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:47:40.622268 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 00:47:40.627504 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:47:40.627611 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 00:47:40.677281 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:47:40.677399 systemd[1]: Stopped sysroot-boot.service.
Sep 13 00:47:40.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.678697 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 00:47:40.679755 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:47:40.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:40.679831 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 00:47:40.682045 systemd[1]: Starting initrd-switch-root.service...
Sep 13 00:47:40.695197 systemd[1]: Switching root.
Sep 13 00:47:40.715386 iscsid[1110]: iscsid shutting down.
Sep 13 00:47:40.716559 systemd-journald[185]: Received SIGTERM from PID 1 (n/a).
Sep 13 00:47:40.716634 systemd-journald[185]: Journal stopped
Sep 13 00:47:43.962667 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 00:47:43.962734 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 00:47:43.962749 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:47:43.962765 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:47:43.962780 kernel: SELinux: policy capability open_perms=1 Sep 13 00:47:43.962792 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:47:43.962804 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:47:43.962822 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:47:43.962834 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:47:43.962845 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:47:43.962856 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:47:43.962869 systemd[1]: Successfully loaded SELinux policy in 59.544ms. Sep 13 00:47:43.962898 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.032ms. Sep 13 00:47:43.962915 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:47:43.962928 systemd[1]: Detected virtualization amazon. Sep 13 00:47:43.962940 systemd[1]: Detected architecture x86-64. Sep 13 00:47:43.962952 systemd[1]: Detected first boot. Sep 13 00:47:43.962964 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:47:43.962975 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:47:43.962989 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:47:43.963005 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 13 00:47:43.963018 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:47:43.963034 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:47:43.963049 kernel: kauditd_printk_skb: 50 callbacks suppressed Sep 13 00:47:43.963059 kernel: audit: type=1334 audit(1757724463.734:86): prog-id=12 op=LOAD Sep 13 00:47:43.963070 kernel: audit: type=1334 audit(1757724463.734:87): prog-id=3 op=UNLOAD Sep 13 00:47:43.963083 kernel: audit: type=1334 audit(1757724463.735:88): prog-id=13 op=LOAD Sep 13 00:47:43.963094 kernel: audit: type=1334 audit(1757724463.737:89): prog-id=14 op=LOAD Sep 13 00:47:43.963106 kernel: audit: type=1334 audit(1757724463.737:90): prog-id=4 op=UNLOAD Sep 13 00:47:43.963117 kernel: audit: type=1334 audit(1757724463.737:91): prog-id=5 op=UNLOAD Sep 13 00:47:43.963128 kernel: audit: type=1334 audit(1757724463.739:92): prog-id=15 op=LOAD Sep 13 00:47:43.963139 kernel: audit: type=1334 audit(1757724463.739:93): prog-id=12 op=UNLOAD Sep 13 00:47:43.963150 kernel: audit: type=1334 audit(1757724463.745:94): prog-id=16 op=LOAD Sep 13 00:47:43.963161 kernel: audit: type=1334 audit(1757724463.747:95): prog-id=17 op=LOAD Sep 13 00:47:43.963175 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 00:47:43.963192 systemd[1]: Stopped iscsid.service. Sep 13 00:47:43.963204 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:47:43.963215 systemd[1]: Stopped initrd-switch-root.service. Sep 13 00:47:43.963227 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:47:43.963260 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:47:43.963273 systemd[1]: Created slice system-addon\x2drun.slice. 
Sep 13 00:47:43.963285 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 13 00:47:43.963299 systemd[1]: Created slice system-getty.slice. Sep 13 00:47:43.963310 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:47:43.963323 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:47:43.963335 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:47:43.963348 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:47:43.963359 systemd[1]: Created slice user.slice. Sep 13 00:47:43.963371 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:47:43.963383 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:47:43.963399 systemd[1]: Set up automount boot.automount. Sep 13 00:47:43.963413 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:47:43.963426 systemd[1]: Stopped target initrd-switch-root.target. Sep 13 00:47:43.963438 systemd[1]: Stopped target initrd-fs.target. Sep 13 00:47:43.963452 systemd[1]: Stopped target initrd-root-fs.target. Sep 13 00:47:43.963471 systemd[1]: Reached target integritysetup.target. Sep 13 00:47:43.963487 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:47:43.963499 systemd[1]: Reached target remote-fs.target. Sep 13 00:47:43.963510 systemd[1]: Reached target slices.target. Sep 13 00:47:43.963523 systemd[1]: Reached target swap.target. Sep 13 00:47:43.963535 systemd[1]: Reached target torcx.target. Sep 13 00:47:43.963547 systemd[1]: Reached target veritysetup.target. Sep 13 00:47:43.963559 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:47:43.963571 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:47:43.963583 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:47:43.963599 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:47:43.963611 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:47:43.963623 systemd[1]: Listening on systemd-userdbd.socket. 
Sep 13 00:47:43.963639 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:47:43.963651 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:47:43.963664 systemd[1]: Mounting media.mount... Sep 13 00:47:43.963676 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:47:43.963688 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:47:43.963700 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:47:43.963714 systemd[1]: Mounting tmp.mount... Sep 13 00:47:43.963725 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:47:43.963738 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:47:43.963749 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:47:43.963761 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:47:43.963773 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:47:43.963786 systemd[1]: Starting modprobe@drm.service... Sep 13 00:47:43.963798 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:47:43.963810 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:47:43.963825 systemd[1]: Starting modprobe@loop.service... Sep 13 00:47:43.963838 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:47:43.963850 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:47:43.963863 systemd[1]: Stopped systemd-fsck-root.service. Sep 13 00:47:43.963875 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:47:43.963887 kernel: fuse: init (API version 7.34) Sep 13 00:47:43.963900 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:47:43.963912 systemd[1]: Stopped systemd-journald.service. Sep 13 00:47:43.963925 systemd[1]: Starting systemd-journald.service... Sep 13 00:47:43.963939 systemd[1]: Starting systemd-modules-load.service... 
Sep 13 00:47:43.963951 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:47:43.963963 kernel: loop: module loaded Sep 13 00:47:43.963975 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:47:43.963987 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:47:43.964000 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:47:43.964012 systemd[1]: Stopped verity-setup.service. Sep 13 00:47:43.964024 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:47:43.964036 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:47:43.964053 systemd-journald[1416]: Journal started Sep 13 00:47:43.964106 systemd-journald[1416]: Runtime Journal (/run/log/journal/ec2f5ca0773532d0d7a50738c237bb56) is 4.8M, max 38.3M, 33.5M free. Sep 13 00:47:40.933000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:47:41.011000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:47:41.011000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:47:41.011000 audit: BPF prog-id=10 op=LOAD Sep 13 00:47:41.011000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:47:41.011000 audit: BPF prog-id=11 op=LOAD Sep 13 00:47:41.011000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:47:41.141000 audit[1332]: AVC avc: denied { associate } for pid=1332 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 00:47:41.141000 audit[1332]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178e4 a1=c00002ae40 
a2=c000029100 a3=32 items=0 ppid=1315 pid=1332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:47:41.141000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:47:41.146000 audit[1332]: AVC avc: denied { associate } for pid=1332 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 00:47:41.146000 audit[1332]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179c9 a2=1ed a3=0 items=2 ppid=1315 pid=1332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:47:41.146000 audit: CWD cwd="/" Sep 13 00:47:41.146000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:41.146000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:41.146000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:47:43.734000 audit: BPF 
prog-id=12 op=LOAD Sep 13 00:47:43.734000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:47:43.735000 audit: BPF prog-id=13 op=LOAD Sep 13 00:47:43.737000 audit: BPF prog-id=14 op=LOAD Sep 13 00:47:43.737000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:47:43.737000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:47:43.739000 audit: BPF prog-id=15 op=LOAD Sep 13 00:47:43.739000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:47:43.745000 audit: BPF prog-id=16 op=LOAD Sep 13 00:47:43.747000 audit: BPF prog-id=17 op=LOAD Sep 13 00:47:43.747000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:47:43.747000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:47:43.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.756000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:47:43.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:47:43.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.925000 audit: BPF prog-id=18 op=LOAD Sep 13 00:47:43.925000 audit: BPF prog-id=19 op=LOAD Sep 13 00:47:43.968383 systemd[1]: Started systemd-journald.service. Sep 13 00:47:43.925000 audit: BPF prog-id=20 op=LOAD Sep 13 00:47:43.925000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:47:43.925000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:47:43.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:47:43.961000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:47:43.961000 audit[1416]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffdc8ad900 a2=4000 a3=7fffdc8ad99c items=0 ppid=1 pid=1416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:47:43.961000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:47:41.136222 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:47:43.732701 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:47:43.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:41.137274 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:47:43.732714 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Sep 13 00:47:41.137306 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:47:43.748060 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 13 00:47:41.137353 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 13 00:47:43.968973 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:47:41.137369 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 13 00:47:41.137415 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 13 00:47:41.137436 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 13 00:47:41.137710 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 13 00:47:41.137765 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:47:41.137784 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:47:41.139605 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 13 00:47:41.139660 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 13 00:47:41.139689 
/usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 13 00:47:41.139713 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 13 00:47:41.139742 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 13 00:47:41.139765 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:41Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 13 00:47:43.282647 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:43Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:47:43.282926 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:43Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:47:43.283041 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:43Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:47:43.283221 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:43Z" level=debug 
msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:47:43.283286 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:43Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 13 00:47:43.283344 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2025-09-13T00:47:43Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 13 00:47:43.971841 systemd[1]: Mounted media.mount. Sep 13 00:47:43.972516 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:47:43.973110 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:47:43.973714 systemd[1]: Mounted tmp.mount. Sep 13 00:47:43.974422 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:47:43.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:47:43.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.977434 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:47:43.977562 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:47:43.978346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:47:43.978468 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:47:43.979341 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:47:43.979454 systemd[1]: Finished modprobe@drm.service. Sep 13 00:47:43.980153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 13 00:47:43.980283 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:47:43.980953 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:47:43.981055 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:47:43.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.982751 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:47:43.982896 systemd[1]: Finished modprobe@loop.service. Sep 13 00:47:43.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:43.984563 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:47:43.987911 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:47:43.992426 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:47:43.992929 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:47:43.996340 systemd[1]: Starting systemd-hwdb-update.service... 
Sep 13 00:47:43.998369 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:47:43.999182 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:47:44.000629 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:47:44.003537 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:47:44.005566 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:47:44.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.006838 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:47:44.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.008602 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:47:44.009567 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:47:44.011364 systemd[1]: Reached target network-pre.target. Sep 13 00:47:44.014553 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:47:44.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.016691 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:47:44.021063 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:47:44.030398 systemd-journald[1416]: Time spent on flushing to /var/log/journal/ec2f5ca0773532d0d7a50738c237bb56 is 39.657ms for 1227 entries. 
Sep 13 00:47:44.030398 systemd-journald[1416]: System Journal (/var/log/journal/ec2f5ca0773532d0d7a50738c237bb56) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:47:44.090890 systemd-journald[1416]: Received client request to flush runtime journal. Sep 13 00:47:44.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.045086 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:47:44.045942 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:47:44.071618 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:47:44.073754 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:47:44.092068 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:47:44.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.131677 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:47:44.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.134026 systemd[1]: Starting systemd-udev-settle.service... 
Sep 13 00:47:44.145059 udevadm[1453]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:47:44.660012 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:47:44.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.660000 audit: BPF prog-id=21 op=LOAD Sep 13 00:47:44.660000 audit: BPF prog-id=22 op=LOAD Sep 13 00:47:44.660000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:47:44.660000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:47:44.661786 systemd[1]: Starting systemd-udevd.service... Sep 13 00:47:44.679780 systemd-udevd[1454]: Using default interface naming scheme 'v252'. Sep 13 00:47:44.721291 systemd[1]: Started systemd-udevd.service. Sep 13 00:47:44.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.722000 audit: BPF prog-id=23 op=LOAD Sep 13 00:47:44.726577 systemd[1]: Starting systemd-networkd.service... Sep 13 00:47:44.736000 audit: BPF prog-id=24 op=LOAD Sep 13 00:47:44.736000 audit: BPF prog-id=25 op=LOAD Sep 13 00:47:44.736000 audit: BPF prog-id=26 op=LOAD Sep 13 00:47:44.737253 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:47:44.765945 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 13 00:47:44.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.768608 systemd[1]: Started systemd-userdbd.service. 
Sep 13 00:47:44.792558 (udev-worker)[1455]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:47:44.826894 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:47:44.832260 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:47:44.832000 audit[1461]: AVC avc: denied { confidentiality } for pid=1461 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:47:44.832000 audit[1461]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55970c07eb30 a1=338ec a2=7f4e3d4c1bc5 a3=5 items=110 ppid=1454 pid=1461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:47:44.832000 audit: CWD cwd="/" Sep 13 00:47:44.832000 audit: PATH item=0 name=(null) inode=1044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=1 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=2 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=3 name=(null) inode=13964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=4 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:47:44.832000 audit: PATH item=5 name=(null) inode=13965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=6 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=7 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=8 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=9 name=(null) inode=13967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=10 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=11 name=(null) inode=13968 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=12 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=13 name=(null) inode=13969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=14 name=(null) 
inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=15 name=(null) inode=13970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=16 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=17 name=(null) inode=13971 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=18 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=19 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=20 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=21 name=(null) inode=13973 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=22 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=23 name=(null) inode=13974 dev=00:0b mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=24 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=25 name=(null) inode=13975 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=26 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=27 name=(null) inode=13976 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=28 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=29 name=(null) inode=13977 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=30 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=31 name=(null) inode=13978 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=32 name=(null) inode=13978 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=33 name=(null) inode=13979 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=34 name=(null) inode=13978 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=35 name=(null) inode=13980 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=36 name=(null) inode=13978 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=37 name=(null) inode=13981 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=38 name=(null) inode=13978 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=39 name=(null) inode=13982 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=40 name=(null) inode=13978 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=41 name=(null) inode=13983 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=42 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=43 name=(null) inode=13984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=44 name=(null) inode=13984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=45 name=(null) inode=13985 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=46 name=(null) inode=13984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=47 name=(null) inode=13986 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=48 name=(null) inode=13984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=49 name=(null) inode=13987 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=50 name=(null) inode=13984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=51 name=(null) inode=13988 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=52 name=(null) inode=13984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=53 name=(null) inode=13989 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=54 name=(null) inode=1044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=55 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=56 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=57 name=(null) inode=13991 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=58 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=59 name=(null) inode=13992 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:47:44.832000 audit: PATH item=60 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=61 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=62 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=63 name=(null) inode=13994 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=64 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=65 name=(null) inode=13995 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=66 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=67 name=(null) inode=13996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=68 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=69 
name=(null) inode=13997 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=70 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=71 name=(null) inode=13998 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=72 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=73 name=(null) inode=13999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=74 name=(null) inode=13999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=75 name=(null) inode=14000 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=76 name=(null) inode=13999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=77 name=(null) inode=14001 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=78 name=(null) inode=13999 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=79 name=(null) inode=14002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=80 name=(null) inode=13999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=81 name=(null) inode=14003 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=82 name=(null) inode=13999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=83 name=(null) inode=14004 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=84 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=85 name=(null) inode=14005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=86 name=(null) inode=14005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=87 name=(null) inode=14006 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=88 name=(null) inode=14005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=89 name=(null) inode=14007 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=90 name=(null) inode=14005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=91 name=(null) inode=14008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=92 name=(null) inode=14005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=93 name=(null) inode=14009 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=94 name=(null) inode=14005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=95 name=(null) inode=14010 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=96 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=97 name=(null) inode=14011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=98 name=(null) inode=14011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=99 name=(null) inode=14012 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=100 name=(null) inode=14011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=101 name=(null) inode=14013 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=102 name=(null) inode=14011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=103 name=(null) inode=14014 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=104 name=(null) inode=14011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=105 name=(null) inode=14015 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=106 name=(null) inode=14011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=107 name=(null) inode=14016 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PATH item=109 name=(null) inode=14017 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:44.832000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:47:44.845255 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 13 00:47:44.849341 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 13 00:47:44.852154 kernel: ACPI: button: Sleep Button [SLPF] Sep 13 00:47:44.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.856490 systemd-networkd[1469]: lo: Link UP Sep 13 00:47:44.856498 systemd-networkd[1469]: lo: Gained carrier Sep 13 00:47:44.856930 systemd-networkd[1469]: Enumeration completed Sep 13 00:47:44.857026 systemd[1]: Started systemd-networkd.service. Sep 13 00:47:44.858678 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:47:44.860151 systemd-networkd[1469]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 13 00:47:44.865983 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:47:44.865628 systemd-networkd[1469]: eth0: Link UP Sep 13 00:47:44.865753 systemd-networkd[1469]: eth0: Gained carrier Sep 13 00:47:44.876394 systemd-networkd[1469]: eth0: DHCPv4 address 172.31.24.139/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:47:44.894340 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 00:47:44.904264 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:47:44.990944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:47:44.993672 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:47:44.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:44.995599 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:47:45.023980 lvm[1568]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:47:45.052527 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:47:45.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:45.053164 systemd[1]: Reached target cryptsetup.target. Sep 13 00:47:45.054781 systemd[1]: Starting lvm2-activation.service... Sep 13 00:47:45.059544 lvm[1569]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:47:45.079752 systemd[1]: Finished lvm2-activation.service. Sep 13 00:47:45.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:47:45.080539 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:47:45.081080 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:47:45.081130 systemd[1]: Reached target local-fs.target. Sep 13 00:47:45.081881 systemd[1]: Reached target machines.target. Sep 13 00:47:45.084040 systemd[1]: Starting ldconfig.service... Sep 13 00:47:45.085842 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:47:45.085921 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:47:45.087409 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:47:45.089755 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:47:45.093052 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:47:45.098525 systemd[1]: Starting systemd-sysext.service... Sep 13 00:47:45.115589 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1571 (bootctl) Sep 13 00:47:45.117762 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:47:45.122656 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:47:45.132693 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:47:45.132946 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:47:45.154840 kernel: loop0: detected capacity change from 0 to 221472 Sep 13 00:47:45.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:45.151409 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Sep 13 00:47:45.227277 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:47:45.252662 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:47:45.270018 systemd-fsck[1582]: fsck.fat 4.2 (2021-01-31) Sep 13 00:47:45.270018 systemd-fsck[1582]: /dev/nvme0n1p1: 790 files, 120761/258078 clusters Sep 13 00:47:45.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:45.270587 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:47:45.274680 systemd[1]: Mounting boot.mount... Sep 13 00:47:45.281625 (sd-sysext)[1585]: Using extensions 'kubernetes'. Sep 13 00:47:45.283300 (sd-sysext)[1585]: Merged extensions into '/usr'. Sep 13 00:47:45.320497 systemd[1]: Mounted boot.mount. Sep 13 00:47:45.321899 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:47:45.324061 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:47:45.325412 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:47:45.328138 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:47:45.331697 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:47:45.335534 systemd[1]: Starting modprobe@loop.service... Sep 13 00:47:45.338680 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:47:45.338955 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:47:45.339158 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 13 00:47:45.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.341413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:47:45.341610 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:47:45.347218 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 00:47:45.349188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:47:45.349620 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:47:45.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.351666 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:47:45.351847 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:47:45.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.355630 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:47:45.355804 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:47:45.357554 systemd[1]: Finished systemd-sysext.service.
Sep 13 00:47:45.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.362347 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:47:45.365042 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 00:47:45.375852 systemd[1]: Finished systemd-boot-update.service.
Sep 13 00:47:45.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.389045 systemd[1]: Reloading.
Sep 13 00:47:45.418450 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 00:47:45.432159 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:47:45.447062 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:47:45.513088 /usr/lib/systemd/system-generators/torcx-generator[1623]: time="2025-09-13T00:47:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:47:45.513130 /usr/lib/systemd/system-generators/torcx-generator[1623]: time="2025-09-13T00:47:45Z" level=info msg="torcx already run"
Sep 13 00:47:45.695763 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:47:45.696020 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:47:45.725383 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:47:45.778762 ldconfig[1570]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:47:45.804825 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:47:45.807000 audit: BPF prog-id=27 op=LOAD
Sep 13 00:47:45.807000 audit: BPF prog-id=28 op=LOAD
Sep 13 00:47:45.807000 audit: BPF prog-id=21 op=UNLOAD
Sep 13 00:47:45.807000 audit: BPF prog-id=22 op=UNLOAD
Sep 13 00:47:45.808000 audit: BPF prog-id=29 op=LOAD
Sep 13 00:47:45.808000 audit: BPF prog-id=18 op=UNLOAD
Sep 13 00:47:45.808000 audit: BPF prog-id=30 op=LOAD
Sep 13 00:47:45.808000 audit: BPF prog-id=31 op=LOAD
Sep 13 00:47:45.808000 audit: BPF prog-id=19 op=UNLOAD
Sep 13 00:47:45.808000 audit: BPF prog-id=20 op=UNLOAD
Sep 13 00:47:45.809000 audit: BPF prog-id=32 op=LOAD
Sep 13 00:47:45.809000 audit: BPF prog-id=24 op=UNLOAD
Sep 13 00:47:45.809000 audit: BPF prog-id=33 op=LOAD
Sep 13 00:47:45.809000 audit: BPF prog-id=34 op=LOAD
Sep 13 00:47:45.809000 audit: BPF prog-id=25 op=UNLOAD
Sep 13 00:47:45.809000 audit: BPF prog-id=26 op=UNLOAD
Sep 13 00:47:45.811000 audit: BPF prog-id=35 op=LOAD
Sep 13 00:47:45.811000 audit: BPF prog-id=23 op=UNLOAD
Sep 13 00:47:45.816896 systemd[1]: Finished ldconfig.service.
Sep 13 00:47:45.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.818869 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 13 00:47:45.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.820808 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 13 00:47:45.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.826591 systemd[1]: Starting audit-rules.service...
Sep 13 00:47:45.828940 systemd[1]: Starting clean-ca-certificates.service...
Sep 13 00:47:45.836000 audit: BPF prog-id=36 op=LOAD
Sep 13 00:47:45.834639 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 13 00:47:45.837606 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:47:45.839000 audit: BPF prog-id=37 op=LOAD
Sep 13 00:47:45.841421 systemd[1]: Starting systemd-timesyncd.service...
Sep 13 00:47:45.844812 systemd[1]: Starting systemd-update-utmp.service...
Sep 13 00:47:45.846895 systemd[1]: Finished clean-ca-certificates.service.
Sep 13 00:47:45.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.848705 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:47:45.856311 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:47:45.859621 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:47:45.864127 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:47:45.868697 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:47:45.869504 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:47:45.869718 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:47:45.869924 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:47:45.873676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:47:45.873891 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:47:45.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.876066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:47:45.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.877316 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:47:45.882457 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:47:45.885442 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:47:45.888000 audit[1686]: SYSTEM_BOOT pid=1686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.888646 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:47:45.889845 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:47:45.890062 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:47:45.890280 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:47:45.891575 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:47:45.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.892461 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:47:45.893972 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:47:45.894509 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:47:45.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.896101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:47:45.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.896392 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:47:45.901465 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:47:45.901729 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:47:45.908384 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 00:47:45.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.911829 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:47:45.914865 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:47:45.919044 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:47:45.921336 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:47:45.924443 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:47:45.925215 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:47:45.925364 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:47:45.925499 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:47:45.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.927169 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:47:45.928141 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:47:45.928373 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:47:45.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.931591 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:47:45.931763 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:47:45.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.938622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:47:45.938830 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:47:45.939544 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:47:45.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.940408 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:47:45.940576 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:47:45.941319 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:47:45.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.965586 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 00:47:45.968069 systemd[1]: Starting systemd-update-done.service...
Sep 13 00:47:45.988791 systemd[1]: Finished systemd-update-done.service.
Sep 13 00:47:45.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:45.999000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 00:47:45.999000 audit[1710]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc830cf20 a2=420 a3=0 items=0 ppid=1680 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:47:45.999000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 00:47:46.000627 augenrules[1710]: No rules
Sep 13 00:47:46.001678 systemd[1]: Finished audit-rules.service.
Sep 13 00:47:46.012583 systemd[1]: Started systemd-timesyncd.service.
Sep 13 00:47:46.013092 systemd[1]: Reached target time-set.target.
Sep 13 00:47:46.024644 systemd-resolved[1683]: Positive Trust Anchors:
Sep 13 00:47:46.024684 systemd-resolved[1683]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:47:46.024725 systemd-resolved[1683]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:47:46.055441 systemd-resolved[1683]: Defaulting to hostname 'linux'.
Sep 13 00:47:46.057305 systemd[1]: Started systemd-resolved.service.
Sep 13 00:47:46.057818 systemd[1]: Reached target network.target.
Sep 13 00:47:46.058221 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:47:46.058619 systemd[1]: Reached target sysinit.target.
Sep 13 00:47:46.059199 systemd[1]: Started motdgen.path.
Sep 13 00:47:46.059617 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 00:47:46.060153 systemd[1]: Started logrotate.timer.
Sep 13 00:47:46.060624 systemd[1]: Started mdadm.timer.
Sep 13 00:47:46.060996 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 00:47:46.061401 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:47:46.061449 systemd[1]: Reached target paths.target.
Sep 13 00:47:46.061808 systemd[1]: Reached target timers.target.
Sep 13 00:47:46.062505 systemd[1]: Listening on dbus.socket.
Sep 13 00:47:46.064168 systemd[1]: Starting docker.socket...
Sep 13 00:47:46.068339 systemd[1]: Listening on sshd.socket.
Sep 13 00:47:46.068873 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:47:46.069489 systemd[1]: Listening on docker.socket.
Sep 13 00:47:46.069946 systemd[1]: Reached target sockets.target.
Sep 13 00:47:46.070362 systemd[1]: Reached target basic.target.
Sep 13 00:47:46.070877 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:47:46.070934 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:47:46.072117 systemd[1]: Starting containerd.service...
Sep 13 00:47:46.075085 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 13 00:47:46.077124 systemd[1]: Starting dbus.service...
Sep 13 00:47:46.079524 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 00:47:46.084575 systemd[1]: Starting extend-filesystems.service...
Sep 13 00:47:46.089529 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 00:47:46.091402 systemd[1]: Starting motdgen.service...
Sep 13 00:47:46.094108 jq[1721]: false
Sep 13 00:47:46.095594 systemd[1]: Starting prepare-helm.service...
Sep 13 00:47:46.098485 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 00:47:46.101364 systemd[1]: Starting sshd-keygen.service...
Sep 13 00:47:46.109706 systemd[1]: Starting systemd-logind.service...
Sep 13 00:47:46.110355 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:47:46.110456 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:47:46.111142 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:47:46.158146 jq[1731]: true
Sep 13 00:47:46.112228 systemd[1]: Starting update-engine.service...
Sep 13 00:47:46.162834 tar[1733]: linux-amd64/helm
Sep 13 00:47:46.114302 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 00:47:46.118817 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:47:46.995782 dbus-daemon[1720]: [system] SELinux support is enabled
Sep 13 00:47:47.000238 jq[1738]: true
Sep 13 00:47:46.119093 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 00:47:46.135671 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:46.135707 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:46.163565 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:47:46.163786 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 00:47:46.180378 systemd-networkd[1469]: eth0: Gained IPv6LL
Sep 13 00:47:46.198729 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:47:46.198985 systemd[1]: Finished motdgen.service.
Sep 13 00:47:46.995163 systemd-timesyncd[1684]: Contacted time server 129.250.35.251:123 (0.flatcar.pool.ntp.org).
Sep 13 00:47:46.995229 systemd-timesyncd[1684]: Initial clock synchronization to Sat 2025-09-13 00:47:46.995031 UTC.
Sep 13 00:47:46.995946 systemd[1]: Started dbus.service.
Sep 13 00:47:46.997388 systemd-resolved[1683]: Clock change detected. Flushing caches.
Sep 13 00:47:47.000401 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:47:47.001245 systemd[1]: Reached target network-online.target.
Sep 13 00:47:47.006798 systemd[1]: Started amazon-ssm-agent.service.
Sep 13 00:47:47.009756 systemd[1]: Starting kubelet.service...
Sep 13 00:47:47.014017 systemd[1]: Started nvidia.service.
Sep 13 00:47:47.014770 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:47:47.014833 systemd[1]: Reached target system-config.target.
Sep 13 00:47:47.015457 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:47:47.015499 systemd[1]: Reached target user-config.target.
Sep 13 00:47:47.022175 dbus-daemon[1720]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1469 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 13 00:47:47.033110 dbus-daemon[1720]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 13 00:47:47.044787 systemd[1]: Starting systemd-hostnamed.service...
Sep 13 00:47:47.090575 extend-filesystems[1722]: Found loop1
Sep 13 00:47:47.090575 extend-filesystems[1722]: Found nvme0n1
Sep 13 00:47:47.090575 extend-filesystems[1722]: Found nvme0n1p1
Sep 13 00:47:47.090575 extend-filesystems[1722]: Found nvme0n1p2
Sep 13 00:47:47.090575 extend-filesystems[1722]: Found nvme0n1p3
Sep 13 00:47:47.090575 extend-filesystems[1722]: Found usr
Sep 13 00:47:47.090575 extend-filesystems[1722]: Found nvme0n1p4
Sep 13 00:47:47.090575 extend-filesystems[1722]: Found nvme0n1p6
Sep 13 00:47:47.090575 extend-filesystems[1722]: Found nvme0n1p7
Sep 13 00:47:47.114406 extend-filesystems[1722]: Found nvme0n1p9
Sep 13 00:47:47.114406 extend-filesystems[1722]: Checking size of /dev/nvme0n1p9
Sep 13 00:47:47.215859 extend-filesystems[1722]: Resized partition /dev/nvme0n1p9
Sep 13 00:47:47.236360 extend-filesystems[1782]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:47:47.261705 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 13 00:47:47.268407 update_engine[1730]: I0913 00:47:47.266978 1730 main.cc:92] Flatcar Update Engine starting
Sep 13 00:47:47.275238 systemd[1]: Started update-engine.service.
Sep 13 00:47:47.278710 systemd[1]: Started locksmithd.service.
Sep 13 00:47:47.280184 update_engine[1730]: I0913 00:47:47.280020 1730 update_check_scheduler.cc:74] Next update check in 10m37s
Sep 13 00:47:47.362484 amazon-ssm-agent[1750]: 2025/09/13 00:47:47 Failed to load instance info from vault. RegistrationKey does not exist.
Sep 13 00:47:47.365525 env[1734]: time="2025-09-13T00:47:47.363661693Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:47:47.365967 amazon-ssm-agent[1750]: Initializing new seelog logger
Sep 13 00:47:47.365967 amazon-ssm-agent[1750]: New Seelog Logger Creation Complete
Sep 13 00:47:47.365967 amazon-ssm-agent[1750]: 2025/09/13 00:47:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 13 00:47:47.365967 amazon-ssm-agent[1750]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 13 00:47:47.375666 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 13 00:47:47.375707 amazon-ssm-agent[1750]: 2025/09/13 00:47:47 processing appconfig overrides
Sep 13 00:47:47.400779 extend-filesystems[1782]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 13 00:47:47.400779 extend-filesystems[1782]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:47:47.400779 extend-filesystems[1782]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 13 00:47:47.399724 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:47:47.414652 bash[1787]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:47:47.414789 extend-filesystems[1722]: Resized filesystem in /dev/nvme0n1p9
Sep 13 00:47:47.399931 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:47:47.402527 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:47:47.446231 systemd-logind[1729]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:47:47.448665 systemd-logind[1729]: Watching system buttons on /dev/input/event2 (Sleep Button)
Sep 13 00:47:47.448833 systemd-logind[1729]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:47:47.449532 systemd-logind[1729]: New seat seat0.
Sep 13 00:47:47.463105 systemd[1]: Started systemd-logind.service.
Sep 13 00:47:47.476042 systemd[1]: nvidia.service: Deactivated successfully.
Sep 13 00:47:47.579326 env[1734]: time="2025-09-13T00:47:47.579212944Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:47:47.592792 env[1734]: time="2025-09-13T00:47:47.592708410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:47:47.595262 env[1734]: time="2025-09-13T00:47:47.595205150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:47:47.597342 env[1734]: time="2025-09-13T00:47:47.597309061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:47:47.715252 env[1734]: time="2025-09-13T00:47:47.713814226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:47:47.715252 env[1734]: time="2025-09-13T00:47:47.713864812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:47:47.715252 env[1734]: time="2025-09-13T00:47:47.713908039Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:47:47.715252 env[1734]: time="2025-09-13T00:47:47.713923287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:47:47.715252 env[1734]: time="2025-09-13T00:47:47.714129447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:47:47.715252 env[1734]: time="2025-09-13T00:47:47.714541719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:47:47.715252 env[1734]: time="2025-09-13T00:47:47.714886808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:47:47.715252 env[1734]: time="2025-09-13T00:47:47.714913682Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:47:47.715252 env[1734]: time="2025-09-13T00:47:47.715045339Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:47:47.715252 env[1734]: time="2025-09-13T00:47:47.715062968Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:47:47.724510 env[1734]: time="2025-09-13T00:47:47.724393335Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:47:47.724510 env[1734]: time="2025-09-13T00:47:47.724442275Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:47:47.724510 env[1734]: time="2025-09-13T00:47:47.724461387Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:47:47.724855 env[1734]: time="2025-09-13T00:47:47.724779551Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.724810482Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.724963034Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.724982071Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.725001354Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.725019536Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.725041659Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.725060919Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.725092556Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.725237281Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.725328288Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.725710337Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.725747563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.725767154Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:47:47.726590 env[1734]: time="2025-09-13T00:47:47.725835352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.725853764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.725871428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.725888902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.725907601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.725925619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.725941466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.725958902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.725978989Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.726131218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.726150441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.726170438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..."
type=io.containerd.grpc.v1 Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.726186933Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.726208732Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:47:47.727161 env[1734]: time="2025-09-13T00:47:47.726224509Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:47:47.727683 env[1734]: time="2025-09-13T00:47:47.726256073Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:47:47.727683 env[1734]: time="2025-09-13T00:47:47.726303775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:47:47.728669 env[1734]: time="2025-09-13T00:47:47.727858728Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:47:47.728669 env[1734]: time="2025-09-13T00:47:47.727972271Z" level=info msg="Connect containerd service" Sep 13 00:47:47.728669 env[1734]: time="2025-09-13T00:47:47.728029964Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:47:47.732153 env[1734]: time="2025-09-13T00:47:47.729086953Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:47:47.732153 env[1734]: time="2025-09-13T00:47:47.729388724Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:47:47.732153 env[1734]: time="2025-09-13T00:47:47.729441843Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 13 00:47:47.732153 env[1734]: time="2025-09-13T00:47:47.729507155Z" level=info msg="containerd successfully booted in 0.399848s" Sep 13 00:47:47.732153 env[1734]: time="2025-09-13T00:47:47.731238138Z" level=info msg="Start subscribing containerd event" Sep 13 00:47:47.729945 systemd[1]: Started containerd.service. Sep 13 00:47:47.746221 env[1734]: time="2025-09-13T00:47:47.746176168Z" level=info msg="Start recovering state" Sep 13 00:47:47.746463 env[1734]: time="2025-09-13T00:47:47.746446944Z" level=info msg="Start event monitor" Sep 13 00:47:47.746576 env[1734]: time="2025-09-13T00:47:47.746541951Z" level=info msg="Start snapshots syncer" Sep 13 00:47:47.746692 env[1734]: time="2025-09-13T00:47:47.746674218Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:47:47.747626 env[1734]: time="2025-09-13T00:47:47.747594080Z" level=info msg="Start streaming server" Sep 13 00:47:47.800510 dbus-daemon[1720]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 00:47:47.800719 systemd[1]: Started systemd-hostnamed.service. Sep 13 00:47:47.802830 dbus-daemon[1720]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1759 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 00:47:47.807251 systemd[1]: Starting polkit.service... Sep 13 00:47:47.835119 polkitd[1871]: Started polkitd version 121 Sep 13 00:47:47.864861 polkitd[1871]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 00:47:47.865130 polkitd[1871]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 00:47:47.866952 polkitd[1871]: Finished loading, compiling and executing 2 rules Sep 13 00:47:47.867651 dbus-daemon[1720]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 00:47:47.867854 systemd[1]: Started polkit.service. 
Sep 13 00:47:47.868360 polkitd[1871]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 13 00:47:47.917633 coreos-metadata[1719]: Sep 13 00:47:47.917 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 13 00:47:47.927832 coreos-metadata[1719]: Sep 13 00:47:47.927 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Sep 13 00:47:47.927993 systemd-hostnamed[1759]: Hostname set to (transient)
Sep 13 00:47:47.928114 systemd-resolved[1683]: System hostname changed to 'ip-172-31-24-139'.
Sep 13 00:47:47.929266 coreos-metadata[1719]: Sep 13 00:47:47.929 INFO Fetch successful
Sep 13 00:47:47.929266 coreos-metadata[1719]: Sep 13 00:47:47.929 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 13 00:47:47.931094 coreos-metadata[1719]: Sep 13 00:47:47.930 INFO Fetch successful
Sep 13 00:47:47.937360 unknown[1719]: wrote ssh authorized keys file for user: core
Sep 13 00:47:47.970583 update-ssh-keys[1890]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:47:47.971626 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Sep 13 00:47:48.030630 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO Create new startup processor
Sep 13 00:47:48.031013 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [LongRunningPluginsManager] registered plugins: {}
Sep 13 00:47:48.031122 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO Initializing bookkeeping folders
Sep 13 00:47:48.031209 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO removing the completed state files
Sep 13 00:47:48.031291 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO Initializing bookkeeping folders for long running plugins
Sep 13 00:47:48.031373 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Sep 13 00:47:48.031473 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO Initializing healthcheck folders for long running plugins
Sep 13 00:47:48.032898 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO Initializing locations for inventory plugin
Sep 13 00:47:48.033015 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO Initializing default location for custom inventory
Sep 13 00:47:48.033124 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO Initializing default location for file inventory
Sep 13 00:47:48.033215 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO Initializing default location for role inventory
Sep 13 00:47:48.033303 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO Init the cloudwatchlogs publisher
Sep 13 00:47:48.033387 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [instanceID=i-0e34f7b1e7efd96fe] Successfully loaded platform independent plugin aws:runDocument
Sep 13 00:47:48.033474 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [instanceID=i-0e34f7b1e7efd96fe] Successfully loaded platform independent plugin aws:runPowerShellScript
Sep 13 00:47:48.033579 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [instanceID=i-0e34f7b1e7efd96fe] Successfully loaded platform independent plugin aws:refreshAssociation
Sep 13 00:47:48.033686 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [instanceID=i-0e34f7b1e7efd96fe] Successfully loaded platform independent plugin aws:downloadContent
Sep 13 00:47:48.033769 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [instanceID=i-0e34f7b1e7efd96fe] Successfully loaded platform independent plugin aws:runDockerAction
Sep 13 00:47:48.033851 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [instanceID=i-0e34f7b1e7efd96fe] Successfully loaded platform independent plugin aws:configurePackage
Sep 13 00:47:48.033931 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [instanceID=i-0e34f7b1e7efd96fe] Successfully loaded platform independent plugin aws:softwareInventory
Sep 13 00:47:48.034015 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [instanceID=i-0e34f7b1e7efd96fe] Successfully loaded platform independent plugin aws:updateSsmAgent
Sep 13 00:47:48.034103 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [instanceID=i-0e34f7b1e7efd96fe] Successfully loaded platform independent plugin aws:configureDocker
Sep 13 00:47:48.034218 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [instanceID=i-0e34f7b1e7efd96fe] Successfully loaded platform dependent plugin aws:runShellScript
Sep 13 00:47:48.034300 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Sep 13 00:47:48.034383 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO OS: linux, Arch: amd64
Sep 13 00:47:48.037242 amazon-ssm-agent[1750]: datastore file /var/lib/amazon/ssm/i-0e34f7b1e7efd96fe/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Sep 13 00:47:48.039519 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessagingDeliveryService] Starting document processing engine...
Sep 13 00:47:48.136664 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Sep 13 00:47:48.231580 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Sep 13 00:47:48.326133 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessagingDeliveryService] Starting message polling
Sep 13 00:47:48.420901 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessagingDeliveryService] Starting send replies to MDS
Sep 13 00:47:48.481346 tar[1733]: linux-amd64/LICENSE
Sep 13 00:47:48.481911 tar[1733]: linux-amd64/README.md
Sep 13 00:47:48.491619 systemd[1]: Finished prepare-helm.service.
Sep 13 00:47:48.515739 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [instanceID=i-0e34f7b1e7efd96fe] Starting association polling
Sep 13 00:47:48.611021 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Sep 13 00:47:48.628889 locksmithd[1788]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:47:48.706365 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessagingDeliveryService] [Association] Launching response handler
Sep 13 00:47:48.801814 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Sep 13 00:47:48.897503 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Sep 13 00:47:48.993656 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Sep 13 00:47:49.091465 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [OfflineService] Starting document processing engine...
Sep 13 00:47:49.185710 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [OfflineService] [EngineProcessor] Starting
Sep 13 00:47:49.282074 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [OfflineService] [EngineProcessor] Initial processing
Sep 13 00:47:49.303041 systemd[1]: Started kubelet.service.
Sep 13 00:47:49.378882 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [OfflineService] Starting message polling
Sep 13 00:47:49.475201 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [OfflineService] Starting send replies to MDS
Sep 13 00:47:49.513021 sshd_keygen[1748]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:47:49.542478 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:47:49.545102 systemd[1]: Starting issuegen.service...
Sep 13 00:47:49.552993 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:47:49.553266 systemd[1]: Finished issuegen.service.
Sep 13 00:47:49.556043 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:47:49.566167 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:47:49.569004 systemd[1]: Started getty@tty1.service.
Sep 13 00:47:49.571610 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 00:47:49.572090 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [LongRunningPluginsManager] starting long running plugin manager
Sep 13 00:47:49.572512 systemd[1]: Reached target getty.target.
Sep 13 00:47:49.573449 systemd[1]: Reached target multi-user.target.
Sep 13 00:47:49.576742 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:47:49.588546 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:47:49.588786 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:47:49.589489 systemd[1]: Startup finished in 721ms (kernel) + 1min 1.032s (initrd) + 7.939s (userspace) = 1min 9.692s.
Sep 13 00:47:49.669265 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [HealthCheck] HealthCheck reporting agent health.
Sep 13 00:47:49.766484 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessageGatewayService] Starting session document processing engine...
Sep 13 00:47:49.864068 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessageGatewayService] [EngineProcessor] Starting
Sep 13 00:47:49.961712 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Sep 13 00:47:50.059612 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0e34f7b1e7efd96fe, requestId: 31092037-6d35-4887-998b-51eecc0f3bae
Sep 13 00:47:50.157699 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessageGatewayService] listening reply.
Sep 13 00:47:50.230671 kubelet[1915]: E0913 00:47:50.230613 1915 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:47:50.232693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:47:50.232870 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:47:50.233326 systemd[1]: kubelet.service: Consumed 1.258s CPU time.
Sep 13 00:47:50.255875 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Sep 13 00:47:50.354352 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Sep 13 00:47:50.453281 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [StartupProcessor] Executing startup processor tasks
Sep 13 00:47:50.551997 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Sep 13 00:47:50.651070 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Sep 13 00:47:50.750366 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8
Sep 13 00:47:50.849741 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0e34f7b1e7efd96fe?role=subscribe&stream=input
Sep 13 00:47:50.949455 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0e34f7b1e7efd96fe?role=subscribe&stream=input
Sep 13 00:47:51.049402 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessageGatewayService] Starting receiving message from control channel
Sep 13 00:47:51.149351 amazon-ssm-agent[1750]: 2025-09-13 00:47:48 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Sep 13 00:47:51.249596 amazon-ssm-agent[1750]: 2025-09-13 00:47:49 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Sep 13 00:47:56.663935 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:47:56.665279 systemd[1]: Started sshd@0-172.31.24.139:22-147.75.109.163:58748.service.
Sep 13 00:47:56.840002 sshd[1936]: Accepted publickey for core from 147.75.109.163 port 58748 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:47:56.842831 sshd[1936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:47:56.857281 systemd[1]: Created slice user-500.slice.
Sep 13 00:47:56.859154 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:47:56.864273 systemd-logind[1729]: New session 1 of user core.
Sep 13 00:47:56.872161 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:47:56.874533 systemd[1]: Starting user@500.service...
Sep 13 00:47:56.879152 (systemd)[1939]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:47:56.972119 systemd[1939]: Queued start job for default target default.target.
Sep 13 00:47:56.972671 systemd[1939]: Reached target paths.target.
Sep 13 00:47:56.972697 systemd[1939]: Reached target sockets.target.
Sep 13 00:47:56.972710 systemd[1939]: Reached target timers.target.
Sep 13 00:47:56.972722 systemd[1939]: Reached target basic.target.
Sep 13 00:47:56.972834 systemd[1]: Started user@500.service.
Sep 13 00:47:56.973871 systemd[1]: Started session-1.scope.
Sep 13 00:47:56.974489 systemd[1939]: Reached target default.target.
Sep 13 00:47:56.974657 systemd[1939]: Startup finished in 88ms.
Sep 13 00:47:57.119751 systemd[1]: Started sshd@1-172.31.24.139:22-147.75.109.163:58764.service.
Sep 13 00:47:57.278600 sshd[1948]: Accepted publickey for core from 147.75.109.163 port 58764 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:47:57.279929 sshd[1948]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:47:57.283995 systemd-logind[1729]: New session 2 of user core.
Sep 13 00:47:57.284838 systemd[1]: Started session-2.scope.
Sep 13 00:47:57.412681 sshd[1948]: pam_unix(sshd:session): session closed for user core
Sep 13 00:47:57.415222 systemd[1]: sshd@1-172.31.24.139:22-147.75.109.163:58764.service: Deactivated successfully.
Sep 13 00:47:57.415933 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:47:57.416422 systemd-logind[1729]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:47:57.417325 systemd-logind[1729]: Removed session 2.
Sep 13 00:47:57.438596 systemd[1]: Started sshd@2-172.31.24.139:22-147.75.109.163:58772.service.
Sep 13 00:47:57.599871 sshd[1954]: Accepted publickey for core from 147.75.109.163 port 58772 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:47:57.600851 sshd[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:47:57.605420 systemd-logind[1729]: New session 3 of user core.
Sep 13 00:47:57.605921 systemd[1]: Started session-3.scope.
Sep 13 00:47:57.728398 sshd[1954]: pam_unix(sshd:session): session closed for user core
Sep 13 00:47:57.731348 systemd[1]: sshd@2-172.31.24.139:22-147.75.109.163:58772.service: Deactivated successfully.
Sep 13 00:47:57.732054 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:47:57.732670 systemd-logind[1729]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:47:57.733683 systemd-logind[1729]: Removed session 3.
Sep 13 00:47:57.753740 systemd[1]: Started sshd@3-172.31.24.139:22-147.75.109.163:58786.service.
Sep 13 00:47:57.906645 sshd[1960]: Accepted publickey for core from 147.75.109.163 port 58786 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:47:57.908208 sshd[1960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:47:57.914073 systemd[1]: Started session-4.scope.
Sep 13 00:47:57.914757 systemd-logind[1729]: New session 4 of user core.
Sep 13 00:47:58.038702 sshd[1960]: pam_unix(sshd:session): session closed for user core
Sep 13 00:47:58.042666 systemd[1]: sshd@3-172.31.24.139:22-147.75.109.163:58786.service: Deactivated successfully.
Sep 13 00:47:58.043575 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:47:58.044299 systemd-logind[1729]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:47:58.045478 systemd-logind[1729]: Removed session 4.
Sep 13 00:47:58.064653 systemd[1]: Started sshd@4-172.31.24.139:22-147.75.109.163:58794.service.
Sep 13 00:47:58.220577 sshd[1966]: Accepted publickey for core from 147.75.109.163 port 58794 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:47:58.221793 sshd[1966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:47:58.227818 systemd[1]: Started session-5.scope.
Sep 13 00:47:58.228483 systemd-logind[1729]: New session 5 of user core.
Sep 13 00:47:58.347366 sudo[1969]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:47:58.347639 sudo[1969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:47:58.384611 systemd[1]: Starting docker.service...
Sep 13 00:47:58.433613 env[1979]: time="2025-09-13T00:47:58.433531060Z" level=info msg="Starting up"
Sep 13 00:47:58.435150 env[1979]: time="2025-09-13T00:47:58.435107810Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:47:58.435150 env[1979]: time="2025-09-13T00:47:58.435134318Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:47:58.435322 env[1979]: time="2025-09-13T00:47:58.435165680Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:47:58.435322 env[1979]: time="2025-09-13T00:47:58.435179293Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:47:58.437699 env[1979]: time="2025-09-13T00:47:58.437675082Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:47:58.437815 env[1979]: time="2025-09-13T00:47:58.437804489Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:47:58.437878 env[1979]: time="2025-09-13T00:47:58.437866130Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:47:58.437919 env[1979]: time="2025-09-13T00:47:58.437911869Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:47:58.445895 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1348669561-merged.mount: Deactivated successfully.
Sep 13 00:47:58.508384 env[1979]: time="2025-09-13T00:47:58.508273476Z" level=info msg="Loading containers: start."
Sep 13 00:47:58.668591 kernel: Initializing XFRM netlink socket
Sep 13 00:47:58.710629 env[1979]: time="2025-09-13T00:47:58.710577707Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:47:58.711815 (udev-worker)[1990]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:47:58.787907 systemd-networkd[1469]: docker0: Link UP
Sep 13 00:47:58.811259 env[1979]: time="2025-09-13T00:47:58.811215580Z" level=info msg="Loading containers: done."
Sep 13 00:47:58.821384 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2333211895-merged.mount: Deactivated successfully.
Sep 13 00:47:58.839690 env[1979]: time="2025-09-13T00:47:58.839639005Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:47:58.839915 env[1979]: time="2025-09-13T00:47:58.839833988Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:47:58.839958 env[1979]: time="2025-09-13T00:47:58.839929543Z" level=info msg="Daemon has completed initialization"
Sep 13 00:47:58.867473 systemd[1]: Started docker.service.
Sep 13 00:47:58.875083 env[1979]: time="2025-09-13T00:47:58.875017982Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:47:59.862789 env[1734]: time="2025-09-13T00:47:59.862719447Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:48:00.383742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:48:00.384015 systemd[1]: Stopped kubelet.service.
Sep 13 00:48:00.384071 systemd[1]: kubelet.service: Consumed 1.258s CPU time.
Sep 13 00:48:00.386357 systemd[1]: Starting kubelet.service...
Sep 13 00:48:00.603915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667935820.mount: Deactivated successfully.
Sep 13 00:48:00.613618 systemd[1]: Started kubelet.service.
Sep 13 00:48:00.667138 kubelet[2105]: E0913 00:48:00.667038 2105 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:48:00.670535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:48:00.670698 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:48:04.442633 env[1734]: time="2025-09-13T00:48:04.442568175Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:04.444945 env[1734]: time="2025-09-13T00:48:04.444872192Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:04.449364 env[1734]: time="2025-09-13T00:48:04.449316602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:04.451627 env[1734]: time="2025-09-13T00:48:04.451583976Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:04.453212 env[1734]: time="2025-09-13T00:48:04.453153263Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 13 00:48:04.454269 env[1734]: time="2025-09-13T00:48:04.454236932Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:48:06.775467 env[1734]: time="2025-09-13T00:48:06.775395040Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:06.785888 env[1734]: time="2025-09-13T00:48:06.785838830Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:06.791201 env[1734]: time="2025-09-13T00:48:06.791152921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:06.796392 env[1734]: time="2025-09-13T00:48:06.796345052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:06.797080 env[1734]: time="2025-09-13T00:48:06.797013024Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 13 00:48:06.797851 env[1734]: time="2025-09-13T00:48:06.797822019Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:48:08.607180 env[1734]: time="2025-09-13T00:48:08.607107880Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:08.612299 env[1734]: time="2025-09-13T00:48:08.612255816Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:08.615217 env[1734]: time="2025-09-13T00:48:08.615181247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:08.618049 env[1734]: time="2025-09-13T00:48:08.618017385Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:08.622666 env[1734]: time="2025-09-13T00:48:08.622625069Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 13 00:48:08.623969 env[1734]: time="2025-09-13T00:48:08.623931584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:48:10.032508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916667610.mount: Deactivated successfully.
Sep 13 00:48:10.713100 env[1734]: time="2025-09-13T00:48:10.713039640Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:10.718172 env[1734]: time="2025-09-13T00:48:10.718115121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:10.721418 env[1734]: time="2025-09-13T00:48:10.721365170Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:10.726481 env[1734]: time="2025-09-13T00:48:10.726431533Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:10.727175 env[1734]: time="2025-09-13T00:48:10.727135219Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 13 00:48:10.727676 env[1734]: time="2025-09-13T00:48:10.727649963Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:48:10.881473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:48:10.881719 systemd[1]: Stopped kubelet.service.
Sep 13 00:48:10.883206 systemd[1]: Starting kubelet.service...
Sep 13 00:48:11.073964 systemd[1]: Started kubelet.service.
Sep 13 00:48:11.126166 kubelet[2116]: E0913 00:48:11.126121 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:48:11.128141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:48:11.128314 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:48:11.277800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1737732320.mount: Deactivated successfully.
Sep 13 00:48:12.381916 env[1734]: time="2025-09-13T00:48:12.381867156Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:12.386504 env[1734]: time="2025-09-13T00:48:12.386462356Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:12.389867 env[1734]: time="2025-09-13T00:48:12.389826873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:12.392944 env[1734]: time="2025-09-13T00:48:12.392905256Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:12.393986 env[1734]: time="2025-09-13T00:48:12.393759826Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 13 00:48:12.394663 env[1734]: time="2025-09-13T00:48:12.394639727Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:48:12.880435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862742237.mount: Deactivated successfully.
Sep 13 00:48:12.893949 env[1734]: time="2025-09-13T00:48:12.893882993Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:12.898075 env[1734]: time="2025-09-13T00:48:12.898024097Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:12.901057 env[1734]: time="2025-09-13T00:48:12.901008170Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:12.904072 env[1734]: time="2025-09-13T00:48:12.904031331Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:12.904449 env[1734]: time="2025-09-13T00:48:12.904411423Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:48:12.905968 env[1734]: time="2025-09-13T00:48:12.905920687Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:48:13.437877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017576211.mount: Deactivated successfully.
Sep 13 00:48:15.779346 env[1734]: time="2025-09-13T00:48:15.779288317Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:15.781825 env[1734]: time="2025-09-13T00:48:15.781779846Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:15.784027 env[1734]: time="2025-09-13T00:48:15.783990616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:15.786034 env[1734]: time="2025-09-13T00:48:15.785994262Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:15.787093 env[1734]: time="2025-09-13T00:48:15.787050046Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 13 00:48:17.937547 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 13 00:48:18.168868 systemd[1]: Stopped kubelet.service.
Sep 13 00:48:18.172179 systemd[1]: Starting kubelet.service...
Sep 13 00:48:18.206426 systemd[1]: Reloading.
Sep 13 00:48:18.338211 /usr/lib/systemd/system-generators/torcx-generator[2170]: time="2025-09-13T00:48:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:48:18.338721 /usr/lib/systemd/system-generators/torcx-generator[2170]: time="2025-09-13T00:48:18Z" level=info msg="torcx already run"
Sep 13 00:48:18.458642 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:48:18.458680 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:48:18.484658 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:48:18.635498 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 00:48:18.635638 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 00:48:18.635917 systemd[1]: Stopped kubelet.service.
Sep 13 00:48:18.638037 systemd[1]: Starting kubelet.service...
Sep 13 00:48:19.116660 amazon-ssm-agent[1750]: 2025-09-13 00:48:19 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Sep 13 00:48:19.148194 systemd[1]: Started kubelet.service.
Sep 13 00:48:19.192973 kubelet[2227]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:48:19.193296 kubelet[2227]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:48:19.193348 kubelet[2227]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:48:19.193480 kubelet[2227]: I0913 00:48:19.193456 2227 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:48:20.068126 kubelet[2227]: I0913 00:48:20.068080 2227 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:48:20.068126 kubelet[2227]: I0913 00:48:20.068118 2227 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:48:20.068486 kubelet[2227]: I0913 00:48:20.068460 2227 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:48:20.109906 kubelet[2227]: E0913 00:48:20.109870 2227 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.24.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.139:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:48:20.110848 kubelet[2227]: I0913 00:48:20.110826 2227 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:48:20.121912 kubelet[2227]: E0913 00:48:20.121876 2227 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:48:20.121912 kubelet[2227]: I0913 00:48:20.121907 2227 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:48:20.125966 kubelet[2227]: I0913 00:48:20.125939 2227 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:48:20.127011 kubelet[2227]: I0913 00:48:20.126981 2227 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:48:20.127184 kubelet[2227]: I0913 00:48:20.127132 2227 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:48:20.127344 kubelet[2227]: I0913 00:48:20.127164 2227 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-139","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:48:20.127446 kubelet[2227]: I0913 00:48:20.127351 2227 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:48:20.127446 kubelet[2227]: I0913 00:48:20.127360 2227 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:48:20.127510 kubelet[2227]: I0913 00:48:20.127453 2227 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:48:20.131182 kubelet[2227]: I0913 00:48:20.131152 2227 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:48:20.131182 kubelet[2227]: I0913 00:48:20.131184 2227 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:48:20.131327 kubelet[2227]: I0913 00:48:20.131214 2227 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:48:20.131327 kubelet[2227]: I0913 00:48:20.131231 2227 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:48:20.137073 kubelet[2227]: W0913 00:48:20.136989 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-139&limit=500&resourceVersion=0": dial tcp 172.31.24.139:6443: connect: connection refused
Sep 13 00:48:20.137303 kubelet[2227]: E0913 00:48:20.137280 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-139&limit=500&resourceVersion=0\": dial tcp 172.31.24.139:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:48:20.140319 kubelet[2227]: W0913 00:48:20.139920 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.139:6443: connect: connection refused
Sep 13 00:48:20.140319 kubelet[2227]: E0913 00:48:20.139992 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.139:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:48:20.140531 kubelet[2227]: I0913 00:48:20.140477 2227 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:48:20.141127 kubelet[2227]: I0913 00:48:20.141103 2227 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:48:20.141210 kubelet[2227]: W0913 00:48:20.141179 2227 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:48:20.151684 kubelet[2227]: I0913 00:48:20.151649 2227 server.go:1274] "Started kubelet"
Sep 13 00:48:20.163053 kubelet[2227]: I0913 00:48:20.162976 2227 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:48:20.163944 kubelet[2227]: I0913 00:48:20.163857 2227 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:48:20.165588 kubelet[2227]: I0913 00:48:20.165506 2227 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:48:20.165902 kubelet[2227]: I0913 00:48:20.165774 2227 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:48:20.169673 kubelet[2227]: E0913 00:48:20.165940 2227 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.139:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.139:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-139.1864b11ebc89c1b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-139,UID:ip-172-31-24-139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-139,},FirstTimestamp:2025-09-13 00:48:20.151615923 +0000 UTC m=+0.998339811,LastTimestamp:2025-09-13 00:48:20.151615923 +0000 UTC m=+0.998339811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-139,}"
Sep 13 00:48:20.174782 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 13 00:48:20.174913 kubelet[2227]: I0913 00:48:20.174136 2227 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:48:20.178480 kubelet[2227]: I0913 00:48:20.178047 2227 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:48:20.181906 kubelet[2227]: E0913 00:48:20.180481 2227 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-24-139\" not found"
Sep 13 00:48:20.181906 kubelet[2227]: I0913 00:48:20.180541 2227 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:48:20.181906 kubelet[2227]: I0913 00:48:20.180844 2227 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:48:20.181906 kubelet[2227]: I0913 00:48:20.180915 2227 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:48:20.181906 kubelet[2227]: W0913 00:48:20.181403 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.139:6443: connect: connection refused
Sep 13 00:48:20.181906 kubelet[2227]: E0913 00:48:20.181463 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.139:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:48:20.181906 kubelet[2227]: E0913 00:48:20.181749 2227 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:48:20.182906 kubelet[2227]: E0913 00:48:20.182789 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-139?timeout=10s\": dial tcp 172.31.24.139:6443: connect: connection refused" interval="200ms"
Sep 13 00:48:20.184744 kubelet[2227]: I0913 00:48:20.184727 2227 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:48:20.184875 kubelet[2227]: I0913 00:48:20.184863 2227 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:48:20.185231 kubelet[2227]: I0913 00:48:20.185210 2227 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:48:20.218196 kubelet[2227]: I0913 00:48:20.218144 2227 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:48:20.220298 kubelet[2227]: I0913 00:48:20.220246 2227 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:48:20.220298 kubelet[2227]: I0913 00:48:20.220290 2227 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:48:20.220585 kubelet[2227]: I0913 00:48:20.220322 2227 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:48:20.221644 kubelet[2227]: E0913 00:48:20.220376 2227 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:48:20.237189 kubelet[2227]: I0913 00:48:20.237155 2227 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:48:20.237189 kubelet[2227]: I0913 00:48:20.237182 2227 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:48:20.237396 kubelet[2227]: I0913 00:48:20.237205 2227 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:48:20.238228 kubelet[2227]: W0913 00:48:20.238154 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.139:6443: connect: connection refused
Sep 13 00:48:20.238476 kubelet[2227]: E0913 00:48:20.238239 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.139:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:48:20.243085 kubelet[2227]: I0913 00:48:20.243052 2227 policy_none.go:49] "None policy: Start"
Sep 13 00:48:20.244242 kubelet[2227]: I0913 00:48:20.244216 2227 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:48:20.244355 kubelet[2227]: I0913 00:48:20.244248 2227 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:48:20.252344 systemd[1]: Created slice kubepods.slice.
Sep 13 00:48:20.257538 systemd[1]: Created slice kubepods-burstable.slice.
Sep 13 00:48:20.261821 systemd[1]: Created slice kubepods-besteffort.slice.
Sep 13 00:48:20.280033 kubelet[2227]: I0913 00:48:20.280004 2227 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:48:20.280600 kubelet[2227]: I0913 00:48:20.280543 2227 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:48:20.280763 kubelet[2227]: I0913 00:48:20.280723 2227 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:48:20.281335 kubelet[2227]: I0913 00:48:20.281287 2227 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:48:20.285103 kubelet[2227]: E0913 00:48:20.285081 2227 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-139\" not found"
Sep 13 00:48:20.337161 systemd[1]: Created slice kubepods-burstable-pod2ebf6e95d303258332af53d911e7e6e7.slice.
Sep 13 00:48:20.349871 systemd[1]: Created slice kubepods-burstable-pod06ab5222d0d59c30c5999fbca20bf3f9.slice.
Sep 13 00:48:20.359880 systemd[1]: Created slice kubepods-burstable-podb2f0c2ef4a15fc307cafed0b7a511924.slice.
Sep 13 00:48:20.382914 kubelet[2227]: I0913 00:48:20.382856 2227 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-139"
Sep 13 00:48:20.383311 kubelet[2227]: E0913 00:48:20.383281 2227 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.24.139:6443/api/v1/nodes\": dial tcp 172.31.24.139:6443: connect: connection refused" node="ip-172-31-24-139"
Sep 13 00:48:20.383431 kubelet[2227]: E0913 00:48:20.383282 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-139?timeout=10s\": dial tcp 172.31.24.139:6443: connect: connection refused" interval="400ms"
Sep 13 00:48:20.481892 kubelet[2227]: I0913 00:48:20.481838 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06ab5222d0d59c30c5999fbca20bf3f9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-139\" (UID: \"06ab5222d0d59c30c5999fbca20bf3f9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-139"
Sep 13 00:48:20.481892 kubelet[2227]: I0913 00:48:20.481899 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06ab5222d0d59c30c5999fbca20bf3f9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-139\" (UID: \"06ab5222d0d59c30c5999fbca20bf3f9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-139"
Sep 13 00:48:20.482077 kubelet[2227]: I0913 00:48:20.481919 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06ab5222d0d59c30c5999fbca20bf3f9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-139\" (UID: \"06ab5222d0d59c30c5999fbca20bf3f9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-139"
Sep 13 00:48:20.482077 kubelet[2227]: I0913 00:48:20.481939 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ebf6e95d303258332af53d911e7e6e7-ca-certs\") pod \"kube-apiserver-ip-172-31-24-139\" (UID: \"2ebf6e95d303258332af53d911e7e6e7\") " pod="kube-system/kube-apiserver-ip-172-31-24-139"
Sep 13 00:48:20.482077 kubelet[2227]: I0913 00:48:20.481956 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06ab5222d0d59c30c5999fbca20bf3f9-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-139\" (UID: \"06ab5222d0d59c30c5999fbca20bf3f9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-139"
Sep 13 00:48:20.482077 kubelet[2227]: I0913 00:48:20.481973 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/06ab5222d0d59c30c5999fbca20bf3f9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-139\" (UID: \"06ab5222d0d59c30c5999fbca20bf3f9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-139"
Sep 13 00:48:20.482077 kubelet[2227]: I0913 00:48:20.481988 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2f0c2ef4a15fc307cafed0b7a511924-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-139\" (UID: \"b2f0c2ef4a15fc307cafed0b7a511924\") " pod="kube-system/kube-scheduler-ip-172-31-24-139"
Sep 13 00:48:20.482219 kubelet[2227]: I0913 00:48:20.482002 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ebf6e95d303258332af53d911e7e6e7-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-139\" (UID: \"2ebf6e95d303258332af53d911e7e6e7\") " pod="kube-system/kube-apiserver-ip-172-31-24-139"
Sep 13 00:48:20.482219 kubelet[2227]: I0913 00:48:20.482017 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ebf6e95d303258332af53d911e7e6e7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-139\" (UID: \"2ebf6e95d303258332af53d911e7e6e7\") " pod="kube-system/kube-apiserver-ip-172-31-24-139"
Sep 13 00:48:20.585255 kubelet[2227]: I0913 00:48:20.585204 2227 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-139"
Sep 13 00:48:20.585693 kubelet[2227]: E0913 00:48:20.585632 2227 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.24.139:6443/api/v1/nodes\": dial tcp 172.31.24.139:6443: connect: connection refused" node="ip-172-31-24-139"
Sep 13 00:48:20.648680 env[1734]: time="2025-09-13T00:48:20.648625271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-139,Uid:2ebf6e95d303258332af53d911e7e6e7,Namespace:kube-system,Attempt:0,}"
Sep 13 00:48:20.658825 env[1734]: time="2025-09-13T00:48:20.658770311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-139,Uid:06ab5222d0d59c30c5999fbca20bf3f9,Namespace:kube-system,Attempt:0,}"
Sep 13 00:48:20.663828 env[1734]: time="2025-09-13T00:48:20.663788542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-139,Uid:b2f0c2ef4a15fc307cafed0b7a511924,Namespace:kube-system,Attempt:0,}"
Sep 13 00:48:20.741137 kubelet[2227]: E0913 00:48:20.741003 2227 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.139:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.139:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-139.1864b11ebc89c1b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-139,UID:ip-172-31-24-139,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-139,},FirstTimestamp:2025-09-13 00:48:20.151615923 +0000 UTC m=+0.998339811,LastTimestamp:2025-09-13 00:48:20.151615923 +0000 UTC m=+0.998339811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-139,}"
Sep 13 00:48:20.784215 kubelet[2227]: E0913 00:48:20.784172 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-139?timeout=10s\": dial tcp 172.31.24.139:6443: connect: connection refused" interval="800ms"
Sep 13 00:48:20.977324 kubelet[2227]: W0913 00:48:20.977149 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-139&limit=500&resourceVersion=0": dial tcp 172.31.24.139:6443: connect: connection refused
Sep 13 00:48:20.977324 kubelet[2227]: E0913 00:48:20.977242 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-139&limit=500&resourceVersion=0\": dial tcp 172.31.24.139:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:48:20.988589 kubelet[2227]: I0913 00:48:20.988526 2227 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-139"
Sep 13 00:48:20.989019 kubelet[2227]: E0913 00:48:20.988873 2227 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.24.139:6443/api/v1/nodes\": dial tcp 172.31.24.139:6443: connect: connection refused" node="ip-172-31-24-139"
Sep 13 00:48:21.094936 kubelet[2227]: W0913 00:48:21.094871 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.139:6443: connect: connection refused
Sep 13 00:48:21.095092 kubelet[2227]: E0913 00:48:21.094943 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.139:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:48:21.138351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1209732217.mount: Deactivated successfully.
Sep 13 00:48:21.158332 env[1734]: time="2025-09-13T00:48:21.156402969Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:21.160851 env[1734]: time="2025-09-13T00:48:21.160799054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:21.166869 env[1734]: time="2025-09-13T00:48:21.166818980Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:21.169262 env[1734]: time="2025-09-13T00:48:21.169209928Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:21.173296 env[1734]: time="2025-09-13T00:48:21.173244538Z" level=info
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:21.175225 env[1734]: time="2025-09-13T00:48:21.175177443Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:21.176782 env[1734]: time="2025-09-13T00:48:21.176749218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:21.179185 env[1734]: time="2025-09-13T00:48:21.179146583Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:21.181196 env[1734]: time="2025-09-13T00:48:21.181152758Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:21.187227 env[1734]: time="2025-09-13T00:48:21.187189094Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:21.191474 env[1734]: time="2025-09-13T00:48:21.191418010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:21.193635 env[1734]: time="2025-09-13T00:48:21.193547218Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:21.231285 env[1734]: time="2025-09-13T00:48:21.230671333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:21.231454 env[1734]: time="2025-09-13T00:48:21.230708235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:21.231454 env[1734]: time="2025-09-13T00:48:21.230718711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:21.231454 env[1734]: time="2025-09-13T00:48:21.230828045Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3033e2ed8bea0d0fd0186cd0e4e9342165c6ef5272e528049a808ea36c4fcd58 pid=2269 runtime=io.containerd.runc.v2 Sep 13 00:48:21.247286 systemd[1]: Started cri-containerd-3033e2ed8bea0d0fd0186cd0e4e9342165c6ef5272e528049a808ea36c4fcd58.scope. Sep 13 00:48:21.267886 env[1734]: time="2025-09-13T00:48:21.267807432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:21.268052 env[1734]: time="2025-09-13T00:48:21.267849873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:21.268052 env[1734]: time="2025-09-13T00:48:21.267861234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:21.268052 env[1734]: time="2025-09-13T00:48:21.267970829Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21950b43d6cd043d21e0117d471290739e0bdbc4bf0249d91089efddcf45369b pid=2303 runtime=io.containerd.runc.v2 Sep 13 00:48:21.269178 env[1734]: time="2025-09-13T00:48:21.269102075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:21.269473 env[1734]: time="2025-09-13T00:48:21.269314852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:21.269473 env[1734]: time="2025-09-13T00:48:21.269342610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:21.270257 env[1734]: time="2025-09-13T00:48:21.269637244Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1d512abd1770eb32561b7d732ae4f97a272e3b6a6b5c6e34b7ca670b9b6c493 pid=2315 runtime=io.containerd.runc.v2 Sep 13 00:48:21.288104 systemd[1]: Started cri-containerd-21950b43d6cd043d21e0117d471290739e0bdbc4bf0249d91089efddcf45369b.scope. Sep 13 00:48:21.313608 systemd[1]: Started cri-containerd-d1d512abd1770eb32561b7d732ae4f97a272e3b6a6b5c6e34b7ca670b9b6c493.scope. 
Sep 13 00:48:21.342126 env[1734]: time="2025-09-13T00:48:21.341255191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-139,Uid:06ab5222d0d59c30c5999fbca20bf3f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3033e2ed8bea0d0fd0186cd0e4e9342165c6ef5272e528049a808ea36c4fcd58\"" Sep 13 00:48:21.344477 env[1734]: time="2025-09-13T00:48:21.344428018Z" level=info msg="CreateContainer within sandbox \"3033e2ed8bea0d0fd0186cd0e4e9342165c6ef5272e528049a808ea36c4fcd58\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:48:21.347118 kubelet[2227]: W0913 00:48:21.347022 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.139:6443: connect: connection refused Sep 13 00:48:21.347437 kubelet[2227]: E0913 00:48:21.347124 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:48:21.356461 env[1734]: time="2025-09-13T00:48:21.356393800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-139,Uid:2ebf6e95d303258332af53d911e7e6e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"21950b43d6cd043d21e0117d471290739e0bdbc4bf0249d91089efddcf45369b\"" Sep 13 00:48:21.362926 env[1734]: time="2025-09-13T00:48:21.362547523Z" level=info msg="CreateContainer within sandbox \"21950b43d6cd043d21e0117d471290739e0bdbc4bf0249d91089efddcf45369b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:48:21.385179 env[1734]: time="2025-09-13T00:48:21.385113620Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-139,Uid:b2f0c2ef4a15fc307cafed0b7a511924,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1d512abd1770eb32561b7d732ae4f97a272e3b6a6b5c6e34b7ca670b9b6c493\"" Sep 13 00:48:21.387270 env[1734]: time="2025-09-13T00:48:21.387206593Z" level=info msg="CreateContainer within sandbox \"3033e2ed8bea0d0fd0186cd0e4e9342165c6ef5272e528049a808ea36c4fcd58\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3de6c817b926f913d7fc054075572826dc24d0a41a4314b80ee057f0c8a6ba6f\"" Sep 13 00:48:21.389253 env[1734]: time="2025-09-13T00:48:21.389214959Z" level=info msg="StartContainer for \"3de6c817b926f913d7fc054075572826dc24d0a41a4314b80ee057f0c8a6ba6f\"" Sep 13 00:48:21.391866 env[1734]: time="2025-09-13T00:48:21.391815460Z" level=info msg="CreateContainer within sandbox \"d1d512abd1770eb32561b7d732ae4f97a272e3b6a6b5c6e34b7ca670b9b6c493\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:48:21.401687 kubelet[2227]: W0913 00:48:21.401600 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.139:6443: connect: connection refused Sep 13 00:48:21.401875 kubelet[2227]: E0913 00:48:21.401729 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:48:21.413758 env[1734]: time="2025-09-13T00:48:21.413701307Z" level=info msg="CreateContainer within sandbox \"21950b43d6cd043d21e0117d471290739e0bdbc4bf0249d91089efddcf45369b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"8ec0c290ffa805ecfe375fb11898e5fa4f725935cabe0a90d359b3a3257c728b\"" Sep 13 00:48:21.414523 env[1734]: time="2025-09-13T00:48:21.414476214Z" level=info msg="StartContainer for \"8ec0c290ffa805ecfe375fb11898e5fa4f725935cabe0a90d359b3a3257c728b\"" Sep 13 00:48:21.423297 systemd[1]: Started cri-containerd-3de6c817b926f913d7fc054075572826dc24d0a41a4314b80ee057f0c8a6ba6f.scope. Sep 13 00:48:21.435819 env[1734]: time="2025-09-13T00:48:21.435766312Z" level=info msg="CreateContainer within sandbox \"d1d512abd1770eb32561b7d732ae4f97a272e3b6a6b5c6e34b7ca670b9b6c493\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d70c186af55eeef80f5aa1b95320491295f4bf5e1e43779b8d8bc0b78f8a810f\"" Sep 13 00:48:21.436542 env[1734]: time="2025-09-13T00:48:21.436506196Z" level=info msg="StartContainer for \"d70c186af55eeef80f5aa1b95320491295f4bf5e1e43779b8d8bc0b78f8a810f\"" Sep 13 00:48:21.464364 systemd[1]: Started cri-containerd-8ec0c290ffa805ecfe375fb11898e5fa4f725935cabe0a90d359b3a3257c728b.scope. Sep 13 00:48:21.487243 systemd[1]: Started cri-containerd-d70c186af55eeef80f5aa1b95320491295f4bf5e1e43779b8d8bc0b78f8a810f.scope. 
Sep 13 00:48:21.544999 env[1734]: time="2025-09-13T00:48:21.544957404Z" level=info msg="StartContainer for \"3de6c817b926f913d7fc054075572826dc24d0a41a4314b80ee057f0c8a6ba6f\" returns successfully" Sep 13 00:48:21.567597 env[1734]: time="2025-09-13T00:48:21.567509866Z" level=info msg="StartContainer for \"8ec0c290ffa805ecfe375fb11898e5fa4f725935cabe0a90d359b3a3257c728b\" returns successfully" Sep 13 00:48:21.585670 kubelet[2227]: E0913 00:48:21.585547 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-139?timeout=10s\": dial tcp 172.31.24.139:6443: connect: connection refused" interval="1.6s" Sep 13 00:48:21.595336 env[1734]: time="2025-09-13T00:48:21.595269934Z" level=info msg="StartContainer for \"d70c186af55eeef80f5aa1b95320491295f4bf5e1e43779b8d8bc0b78f8a810f\" returns successfully" Sep 13 00:48:21.791470 kubelet[2227]: I0913 00:48:21.791370 2227 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-139" Sep 13 00:48:21.792164 kubelet[2227]: E0913 00:48:21.792121 2227 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.24.139:6443/api/v1/nodes\": dial tcp 172.31.24.139:6443: connect: connection refused" node="ip-172-31-24-139" Sep 13 00:48:22.248662 kubelet[2227]: E0913 00:48:22.248624 2227 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.24.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.139:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:48:23.394240 kubelet[2227]: I0913 00:48:23.394216 2227 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-139" Sep 13 00:48:24.449488 kubelet[2227]: E0913 00:48:24.449445 
2227 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-139\" not found" node="ip-172-31-24-139" Sep 13 00:48:24.523516 kubelet[2227]: I0913 00:48:24.523482 2227 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-24-139" Sep 13 00:48:25.139836 kubelet[2227]: I0913 00:48:25.139794 2227 apiserver.go:52] "Watching apiserver" Sep 13 00:48:25.181896 kubelet[2227]: I0913 00:48:25.181852 2227 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:48:26.630201 systemd[1]: Reloading. Sep 13 00:48:26.798311 /usr/lib/systemd/system-generators/torcx-generator[2517]: time="2025-09-13T00:48:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:48:26.798356 /usr/lib/systemd/system-generators/torcx-generator[2517]: time="2025-09-13T00:48:26Z" level=info msg="torcx already run" Sep 13 00:48:26.968528 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:48:26.968844 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:48:26.997380 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:48:27.140541 kubelet[2227]: I0913 00:48:27.140484 2227 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:48:27.142950 systemd[1]: Stopping kubelet.service... 
Sep 13 00:48:27.156331 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:48:27.156639 systemd[1]: Stopped kubelet.service. Sep 13 00:48:27.156733 systemd[1]: kubelet.service: Consumed 1.505s CPU time. Sep 13 00:48:27.159394 systemd[1]: Starting kubelet.service... Sep 13 00:48:28.534606 systemd[1]: Started kubelet.service. Sep 13 00:48:28.634102 kubelet[2576]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:48:28.634480 kubelet[2576]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:48:28.634632 kubelet[2576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:48:28.634773 kubelet[2576]: I0913 00:48:28.634726 2576 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:48:28.643201 kubelet[2576]: I0913 00:48:28.643171 2576 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:48:28.643355 kubelet[2576]: I0913 00:48:28.643346 2576 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:48:28.644384 kubelet[2576]: I0913 00:48:28.644354 2576 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:48:28.649753 kubelet[2576]: I0913 00:48:28.649727 2576 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 13 00:48:28.661915 kubelet[2576]: I0913 00:48:28.661799 2576 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:48:28.670143 kubelet[2576]: E0913 00:48:28.670094 2576 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:48:28.670324 kubelet[2576]: I0913 00:48:28.670309 2576 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:48:28.673687 kubelet[2576]: I0913 00:48:28.673666 2576 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:48:28.673984 kubelet[2576]: I0913 00:48:28.673969 2576 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:48:28.674256 kubelet[2576]: I0913 00:48:28.674222 2576 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:48:28.674531 kubelet[2576]: I0913 00:48:28.674346 2576 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-24-139","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:48:28.674767 kubelet[2576]: I0913 00:48:28.674751 2576 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:48:28.674857 kubelet[2576]: I0913 00:48:28.674848 2576 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:48:28.674950 kubelet[2576]: I0913 00:48:28.674939 2576 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:48:28.675179 kubelet[2576]: I0913 00:48:28.675166 2576 kubelet.go:408] 
"Attempting to sync node with API server" Sep 13 00:48:28.676101 kubelet[2576]: I0913 00:48:28.676084 2576 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:48:28.676286 kubelet[2576]: I0913 00:48:28.676275 2576 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:48:28.676383 kubelet[2576]: I0913 00:48:28.676371 2576 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:48:28.679036 kubelet[2576]: I0913 00:48:28.679012 2576 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:48:28.679811 kubelet[2576]: I0913 00:48:28.679795 2576 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:48:28.680463 kubelet[2576]: I0913 00:48:28.680448 2576 server.go:1274] "Started kubelet" Sep 13 00:48:28.684946 sudo[2591]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:48:28.685321 sudo[2591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 00:48:28.686358 kubelet[2576]: I0913 00:48:28.686333 2576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:48:28.690342 kubelet[2576]: I0913 00:48:28.690301 2576 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:48:28.693831 kubelet[2576]: I0913 00:48:28.693805 2576 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:48:28.694108 kubelet[2576]: I0913 00:48:28.694074 2576 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:48:28.694775 kubelet[2576]: I0913 00:48:28.694757 2576 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:48:28.696452 kubelet[2576]: I0913 00:48:28.696420 2576 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:48:28.708371 kubelet[2576]: I0913 00:48:28.708333 2576 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:48:28.708668 kubelet[2576]: E0913 00:48:28.708643 2576 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-24-139\" not found" Sep 13 00:48:28.710655 kubelet[2576]: I0913 00:48:28.710246 2576 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:48:28.710778 kubelet[2576]: I0913 00:48:28.710667 2576 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:48:28.720993 kubelet[2576]: I0913 00:48:28.720958 2576 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:48:28.721324 kubelet[2576]: I0913 00:48:28.721302 2576 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:48:28.757980 kubelet[2576]: I0913 00:48:28.757936 2576 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:48:28.765534 kubelet[2576]: I0913 00:48:28.765502 2576 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:48:28.765779 kubelet[2576]: I0913 00:48:28.765764 2576 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:48:28.765911 kubelet[2576]: I0913 00:48:28.765899 2576 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:48:28.766044 kubelet[2576]: E0913 00:48:28.766025 2576 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:48:28.772249 kubelet[2576]: I0913 00:48:28.772154 2576 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:48:28.850381 kubelet[2576]: I0913 00:48:28.850041 2576 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:48:28.850381 kubelet[2576]: I0913 00:48:28.850061 2576 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:48:28.850381 kubelet[2576]: I0913 00:48:28.850083 2576 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:48:28.850381 kubelet[2576]: I0913 00:48:28.850269 2576 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:48:28.850381 kubelet[2576]: I0913 00:48:28.850287 2576 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:48:28.850381 kubelet[2576]: I0913 00:48:28.850311 2576 policy_none.go:49] "None policy: Start" Sep 13 00:48:28.854876 kubelet[2576]: I0913 00:48:28.854848 2576 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:48:28.855022 kubelet[2576]: I0913 00:48:28.854885 2576 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:48:28.855127 kubelet[2576]: I0913 00:48:28.855107 2576 state_mem.go:75] "Updated machine memory state" Sep 13 00:48:28.861004 kubelet[2576]: I0913 00:48:28.860973 2576 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:48:28.861246 kubelet[2576]: I0913 00:48:28.861228 2576 eviction_manager.go:189] 
"Eviction manager: starting control loop" Sep 13 00:48:28.861343 kubelet[2576]: I0913 00:48:28.861248 2576 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:48:28.862354 kubelet[2576]: I0913 00:48:28.862334 2576 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:48:28.998253 kubelet[2576]: I0913 00:48:28.998217 2576 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-139" Sep 13 00:48:29.012500 kubelet[2576]: I0913 00:48:29.012452 2576 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-24-139" Sep 13 00:48:29.012917 kubelet[2576]: I0913 00:48:29.012905 2576 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-24-139" Sep 13 00:48:29.013634 kubelet[2576]: I0913 00:48:29.013538 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06ab5222d0d59c30c5999fbca20bf3f9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-139\" (UID: \"06ab5222d0d59c30c5999fbca20bf3f9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-139" Sep 13 00:48:29.014514 kubelet[2576]: I0913 00:48:29.014496 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ebf6e95d303258332af53d911e7e6e7-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-139\" (UID: \"2ebf6e95d303258332af53d911e7e6e7\") " pod="kube-system/kube-apiserver-ip-172-31-24-139" Sep 13 00:48:29.014906 kubelet[2576]: I0913 00:48:29.014872 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06ab5222d0d59c30c5999fbca20bf3f9-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-139\" (UID: \"06ab5222d0d59c30c5999fbca20bf3f9\") " 
pod="kube-system/kube-controller-manager-ip-172-31-24-139" Sep 13 00:48:29.017303 kubelet[2576]: I0913 00:48:29.017242 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/06ab5222d0d59c30c5999fbca20bf3f9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-139\" (UID: \"06ab5222d0d59c30c5999fbca20bf3f9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-139" Sep 13 00:48:29.017520 kubelet[2576]: I0913 00:48:29.017486 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06ab5222d0d59c30c5999fbca20bf3f9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-139\" (UID: \"06ab5222d0d59c30c5999fbca20bf3f9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-139" Sep 13 00:48:29.017738 kubelet[2576]: I0913 00:48:29.017725 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ebf6e95d303258332af53d911e7e6e7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-139\" (UID: \"2ebf6e95d303258332af53d911e7e6e7\") " pod="kube-system/kube-apiserver-ip-172-31-24-139" Sep 13 00:48:29.017894 kubelet[2576]: I0913 00:48:29.017875 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06ab5222d0d59c30c5999fbca20bf3f9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-139\" (UID: \"06ab5222d0d59c30c5999fbca20bf3f9\") " pod="kube-system/kube-controller-manager-ip-172-31-24-139" Sep 13 00:48:29.018069 kubelet[2576]: I0913 00:48:29.018035 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2f0c2ef4a15fc307cafed0b7a511924-kubeconfig\") 
pod \"kube-scheduler-ip-172-31-24-139\" (UID: \"b2f0c2ef4a15fc307cafed0b7a511924\") " pod="kube-system/kube-scheduler-ip-172-31-24-139" Sep 13 00:48:29.018186 kubelet[2576]: I0913 00:48:29.018176 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ebf6e95d303258332af53d911e7e6e7-ca-certs\") pod \"kube-apiserver-ip-172-31-24-139\" (UID: \"2ebf6e95d303258332af53d911e7e6e7\") " pod="kube-system/kube-apiserver-ip-172-31-24-139" Sep 13 00:48:29.558029 sudo[2591]: pam_unix(sudo:session): session closed for user root Sep 13 00:48:29.704042 kubelet[2576]: I0913 00:48:29.704008 2576 apiserver.go:52] "Watching apiserver" Sep 13 00:48:29.710799 kubelet[2576]: I0913 00:48:29.710755 2576 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:48:29.829717 kubelet[2576]: E0913 00:48:29.829595 2576 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-139\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-139" Sep 13 00:48:29.865739 kubelet[2576]: I0913 00:48:29.865655 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-139" podStartSLOduration=1.865606243 podStartE2EDuration="1.865606243s" podCreationTimestamp="2025-09-13 00:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:48:29.847601091 +0000 UTC m=+1.288566382" watchObservedRunningTime="2025-09-13 00:48:29.865606243 +0000 UTC m=+1.306571533" Sep 13 00:48:29.879884 kubelet[2576]: I0913 00:48:29.879795 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-139" podStartSLOduration=1.879773214 podStartE2EDuration="1.879773214s" podCreationTimestamp="2025-09-13 00:48:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:48:29.866504783 +0000 UTC m=+1.307470076" watchObservedRunningTime="2025-09-13 00:48:29.879773214 +0000 UTC m=+1.320738510" Sep 13 00:48:29.897067 kubelet[2576]: I0913 00:48:29.897002 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-139" podStartSLOduration=1.896964525 podStartE2EDuration="1.896964525s" podCreationTimestamp="2025-09-13 00:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:48:29.880887481 +0000 UTC m=+1.321852777" watchObservedRunningTime="2025-09-13 00:48:29.896964525 +0000 UTC m=+1.337929818" Sep 13 00:48:31.306774 kubelet[2576]: I0913 00:48:31.306743 2576 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:48:31.307146 env[1734]: time="2025-09-13T00:48:31.307066464Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:48:31.307402 kubelet[2576]: I0913 00:48:31.307286 2576 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:48:31.534495 sudo[1969]: pam_unix(sudo:session): session closed for user root Sep 13 00:48:31.557591 sshd[1966]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:31.560618 systemd[1]: sshd@4-172.31.24.139:22-147.75.109.163:58794.service: Deactivated successfully. Sep 13 00:48:31.561340 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:48:31.561466 systemd[1]: session-5.scope: Consumed 4.389s CPU time. Sep 13 00:48:31.562120 systemd-logind[1729]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:48:31.563424 systemd-logind[1729]: Removed session 5. 
Sep 13 00:48:32.357702 systemd[1]: Created slice kubepods-burstable-pod61e0f3d2_1e48_43c1_b4f4_810ffb649699.slice. Sep 13 00:48:32.370807 systemd[1]: Created slice kubepods-besteffort-pod0523f0f1_e7df_4da7_83c4_ed8e6d397f8a.slice. Sep 13 00:48:32.441433 kubelet[2576]: I0913 00:48:32.441394 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0523f0f1-e7df-4da7-83c4-ed8e6d397f8a-xtables-lock\") pod \"kube-proxy-ng2hw\" (UID: \"0523f0f1-e7df-4da7-83c4-ed8e6d397f8a\") " pod="kube-system/kube-proxy-ng2hw" Sep 13 00:48:32.441942 kubelet[2576]: I0913 00:48:32.441462 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cilium-run\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.441942 kubelet[2576]: I0913 00:48:32.441489 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cilium-cgroup\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.441942 kubelet[2576]: I0913 00:48:32.441519 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/61e0f3d2-1e48-43c1-b4f4-810ffb649699-hubble-tls\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.441942 kubelet[2576]: I0913 00:48:32.441620 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0523f0f1-e7df-4da7-83c4-ed8e6d397f8a-kube-proxy\") pod 
\"kube-proxy-ng2hw\" (UID: \"0523f0f1-e7df-4da7-83c4-ed8e6d397f8a\") " pod="kube-system/kube-proxy-ng2hw" Sep 13 00:48:32.441942 kubelet[2576]: I0913 00:48:32.441668 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sq29\" (UniqueName: \"kubernetes.io/projected/0523f0f1-e7df-4da7-83c4-ed8e6d397f8a-kube-api-access-8sq29\") pod \"kube-proxy-ng2hw\" (UID: \"0523f0f1-e7df-4da7-83c4-ed8e6d397f8a\") " pod="kube-system/kube-proxy-ng2hw" Sep 13 00:48:32.441942 kubelet[2576]: I0913 00:48:32.441701 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-hostproc\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.442254 kubelet[2576]: I0913 00:48:32.441747 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/61e0f3d2-1e48-43c1-b4f4-810ffb649699-clustermesh-secrets\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.442254 kubelet[2576]: I0913 00:48:32.441773 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-xtables-lock\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.442254 kubelet[2576]: I0913 00:48:32.441797 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-host-proc-sys-kernel\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " 
pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.442254 kubelet[2576]: I0913 00:48:32.441841 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-bpf-maps\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.442254 kubelet[2576]: I0913 00:48:32.441871 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-lib-modules\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.442254 kubelet[2576]: I0913 00:48:32.441914 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0523f0f1-e7df-4da7-83c4-ed8e6d397f8a-lib-modules\") pod \"kube-proxy-ng2hw\" (UID: \"0523f0f1-e7df-4da7-83c4-ed8e6d397f8a\") " pod="kube-system/kube-proxy-ng2hw" Sep 13 00:48:32.442520 kubelet[2576]: I0913 00:48:32.441939 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cni-path\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.442520 kubelet[2576]: I0913 00:48:32.441964 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cilium-config-path\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.442520 kubelet[2576]: I0913 00:48:32.442009 2576 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-host-proc-sys-net\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.442520 kubelet[2576]: I0913 00:48:32.442033 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxkgd\" (UniqueName: \"kubernetes.io/projected/61e0f3d2-1e48-43c1-b4f4-810ffb649699-kube-api-access-xxkgd\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.442520 kubelet[2576]: I0913 00:48:32.442083 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-etc-cni-netd\") pod \"cilium-6sjzm\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") " pod="kube-system/cilium-6sjzm" Sep 13 00:48:32.482760 systemd[1]: Created slice kubepods-besteffort-pode381b6cc_d909_4933_a07d_7a7e2261f5f0.slice. 
Sep 13 00:48:32.543259 kubelet[2576]: I0913 00:48:32.543210 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e381b6cc-d909-4933-a07d-7a7e2261f5f0-cilium-config-path\") pod \"cilium-operator-5d85765b45-mxrjx\" (UID: \"e381b6cc-d909-4933-a07d-7a7e2261f5f0\") " pod="kube-system/cilium-operator-5d85765b45-mxrjx" Sep 13 00:48:32.543259 kubelet[2576]: I0913 00:48:32.543256 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sdw7\" (UniqueName: \"kubernetes.io/projected/e381b6cc-d909-4933-a07d-7a7e2261f5f0-kube-api-access-6sdw7\") pod \"cilium-operator-5d85765b45-mxrjx\" (UID: \"e381b6cc-d909-4933-a07d-7a7e2261f5f0\") " pod="kube-system/cilium-operator-5d85765b45-mxrjx" Sep 13 00:48:32.543752 kubelet[2576]: I0913 00:48:32.543729 2576 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:48:32.675994 env[1734]: time="2025-09-13T00:48:32.675950978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6sjzm,Uid:61e0f3d2-1e48-43c1-b4f4-810ffb649699,Namespace:kube-system,Attempt:0,}" Sep 13 00:48:32.678691 env[1734]: time="2025-09-13T00:48:32.678653328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ng2hw,Uid:0523f0f1-e7df-4da7-83c4-ed8e6d397f8a,Namespace:kube-system,Attempt:0,}" Sep 13 00:48:32.710677 env[1734]: time="2025-09-13T00:48:32.706858521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:32.710677 env[1734]: time="2025-09-13T00:48:32.706911571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:32.710677 env[1734]: time="2025-09-13T00:48:32.706923164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:32.710677 env[1734]: time="2025-09-13T00:48:32.707136211Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1 pid=2657 runtime=io.containerd.runc.v2 Sep 13 00:48:32.720814 env[1734]: time="2025-09-13T00:48:32.720728842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:32.721131 env[1734]: time="2025-09-13T00:48:32.721076996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:32.721292 env[1734]: time="2025-09-13T00:48:32.721243734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:32.722009 env[1734]: time="2025-09-13T00:48:32.721944451Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af902f9bee10998317512d20de98f3e80c95560bef1aea68b13e2c6e38ffc463 pid=2677 runtime=io.containerd.runc.v2 Sep 13 00:48:32.748738 systemd[1]: Started cri-containerd-4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1.scope. Sep 13 00:48:32.780069 systemd[1]: Started cri-containerd-af902f9bee10998317512d20de98f3e80c95560bef1aea68b13e2c6e38ffc463.scope. 
Sep 13 00:48:32.789901 env[1734]: time="2025-09-13T00:48:32.789853061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mxrjx,Uid:e381b6cc-d909-4933-a07d-7a7e2261f5f0,Namespace:kube-system,Attempt:0,}" Sep 13 00:48:32.819081 env[1734]: time="2025-09-13T00:48:32.817681072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6sjzm,Uid:61e0f3d2-1e48-43c1-b4f4-810ffb649699,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\"" Sep 13 00:48:32.823943 env[1734]: time="2025-09-13T00:48:32.823900942Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:48:32.838790 env[1734]: time="2025-09-13T00:48:32.838732436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ng2hw,Uid:0523f0f1-e7df-4da7-83c4-ed8e6d397f8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"af902f9bee10998317512d20de98f3e80c95560bef1aea68b13e2c6e38ffc463\"" Sep 13 00:48:32.843313 env[1734]: time="2025-09-13T00:48:32.843278942Z" level=info msg="CreateContainer within sandbox \"af902f9bee10998317512d20de98f3e80c95560bef1aea68b13e2c6e38ffc463\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:48:32.848324 env[1734]: time="2025-09-13T00:48:32.848227546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:32.848503 env[1734]: time="2025-09-13T00:48:32.848337306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:32.848503 env[1734]: time="2025-09-13T00:48:32.848372590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:32.848638 env[1734]: time="2025-09-13T00:48:32.848596545Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83 pid=2743 runtime=io.containerd.runc.v2 Sep 13 00:48:32.869517 systemd[1]: Started cri-containerd-94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83.scope. Sep 13 00:48:32.885266 env[1734]: time="2025-09-13T00:48:32.885207854Z" level=info msg="CreateContainer within sandbox \"af902f9bee10998317512d20de98f3e80c95560bef1aea68b13e2c6e38ffc463\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"223a0b8bea4d18941e799a899735481e4b0b263bbefb42a2e75472af03aab977\"" Sep 13 00:48:32.886004 env[1734]: time="2025-09-13T00:48:32.885956152Z" level=info msg="StartContainer for \"223a0b8bea4d18941e799a899735481e4b0b263bbefb42a2e75472af03aab977\"" Sep 13 00:48:32.897302 update_engine[1730]: I0913 00:48:32.897032 1730 update_attempter.cc:509] Updating boot flags... Sep 13 00:48:32.954300 systemd[1]: Started cri-containerd-223a0b8bea4d18941e799a899735481e4b0b263bbefb42a2e75472af03aab977.scope. 
Sep 13 00:48:32.966106 env[1734]: time="2025-09-13T00:48:32.966063540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mxrjx,Uid:e381b6cc-d909-4933-a07d-7a7e2261f5f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\"" Sep 13 00:48:33.148906 env[1734]: time="2025-09-13T00:48:33.148826119Z" level=info msg="StartContainer for \"223a0b8bea4d18941e799a899735481e4b0b263bbefb42a2e75472af03aab977\" returns successfully" Sep 13 00:48:33.853793 kubelet[2576]: I0913 00:48:33.853737 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ng2hw" podStartSLOduration=1.853718996 podStartE2EDuration="1.853718996s" podCreationTimestamp="2025-09-13 00:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:48:33.837479763 +0000 UTC m=+5.278445058" watchObservedRunningTime="2025-09-13 00:48:33.853718996 +0000 UTC m=+5.294684291" Sep 13 00:48:38.260115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134443571.mount: Deactivated successfully. 
Sep 13 00:48:41.351877 env[1734]: time="2025-09-13T00:48:41.351812877Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:41.356154 env[1734]: time="2025-09-13T00:48:41.356088867Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:41.359102 env[1734]: time="2025-09-13T00:48:41.359059462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:41.359698 env[1734]: time="2025-09-13T00:48:41.359665356Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:48:41.363330 env[1734]: time="2025-09-13T00:48:41.363298535Z" level=info msg="CreateContainer within sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:48:41.363586 env[1734]: time="2025-09-13T00:48:41.363518423Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:48:41.394263 env[1734]: time="2025-09-13T00:48:41.394222646Z" level=info msg="CreateContainer within sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f2570aab0a3520e587f9bca5f3065a23c590a47507b51352a66f34b00b85226c\"" Sep 13 
00:48:41.396118 env[1734]: time="2025-09-13T00:48:41.396074005Z" level=info msg="StartContainer for \"f2570aab0a3520e587f9bca5f3065a23c590a47507b51352a66f34b00b85226c\"" Sep 13 00:48:41.432642 systemd[1]: Started cri-containerd-f2570aab0a3520e587f9bca5f3065a23c590a47507b51352a66f34b00b85226c.scope. Sep 13 00:48:41.471097 env[1734]: time="2025-09-13T00:48:41.471032223Z" level=info msg="StartContainer for \"f2570aab0a3520e587f9bca5f3065a23c590a47507b51352a66f34b00b85226c\" returns successfully" Sep 13 00:48:41.489742 systemd[1]: cri-containerd-f2570aab0a3520e587f9bca5f3065a23c590a47507b51352a66f34b00b85226c.scope: Deactivated successfully. Sep 13 00:48:41.655549 env[1734]: time="2025-09-13T00:48:41.655465720Z" level=info msg="shim disconnected" id=f2570aab0a3520e587f9bca5f3065a23c590a47507b51352a66f34b00b85226c Sep 13 00:48:41.655830 env[1734]: time="2025-09-13T00:48:41.655570205Z" level=warning msg="cleaning up after shim disconnected" id=f2570aab0a3520e587f9bca5f3065a23c590a47507b51352a66f34b00b85226c namespace=k8s.io Sep 13 00:48:41.655830 env[1734]: time="2025-09-13T00:48:41.655590066Z" level=info msg="cleaning up dead shim" Sep 13 00:48:41.670683 env[1734]: time="2025-09-13T00:48:41.670631616Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3256 runtime=io.containerd.runc.v2\n" Sep 13 00:48:41.842028 env[1734]: time="2025-09-13T00:48:41.841993247Z" level=info msg="CreateContainer within sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:48:41.857243 env[1734]: time="2025-09-13T00:48:41.857163895Z" level=info msg="CreateContainer within sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5397a11c98c196ce0635a7bf3d8a4d1f3e682f4473bf7265cecd48d3ebd7ea46\"" Sep 13 
00:48:41.858376 env[1734]: time="2025-09-13T00:48:41.858316115Z" level=info msg="StartContainer for \"5397a11c98c196ce0635a7bf3d8a4d1f3e682f4473bf7265cecd48d3ebd7ea46\"" Sep 13 00:48:41.889914 systemd[1]: Started cri-containerd-5397a11c98c196ce0635a7bf3d8a4d1f3e682f4473bf7265cecd48d3ebd7ea46.scope. Sep 13 00:48:41.936172 env[1734]: time="2025-09-13T00:48:41.935682615Z" level=info msg="StartContainer for \"5397a11c98c196ce0635a7bf3d8a4d1f3e682f4473bf7265cecd48d3ebd7ea46\" returns successfully" Sep 13 00:48:41.947415 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:48:41.948121 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:48:41.948618 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:48:41.952975 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:48:41.955593 systemd[1]: cri-containerd-5397a11c98c196ce0635a7bf3d8a4d1f3e682f4473bf7265cecd48d3ebd7ea46.scope: Deactivated successfully. Sep 13 00:48:41.968700 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:48:41.992928 env[1734]: time="2025-09-13T00:48:41.992872652Z" level=info msg="shim disconnected" id=5397a11c98c196ce0635a7bf3d8a4d1f3e682f4473bf7265cecd48d3ebd7ea46 Sep 13 00:48:41.992928 env[1734]: time="2025-09-13T00:48:41.992927630Z" level=warning msg="cleaning up after shim disconnected" id=5397a11c98c196ce0635a7bf3d8a4d1f3e682f4473bf7265cecd48d3ebd7ea46 namespace=k8s.io Sep 13 00:48:41.993252 env[1734]: time="2025-09-13T00:48:41.992939208Z" level=info msg="cleaning up dead shim" Sep 13 00:48:42.004044 env[1734]: time="2025-09-13T00:48:42.003961352Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3319 runtime=io.containerd.runc.v2\n" Sep 13 00:48:42.385698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2570aab0a3520e587f9bca5f3065a23c590a47507b51352a66f34b00b85226c-rootfs.mount: Deactivated successfully. 
Sep 13 00:48:42.497934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3125414062.mount: Deactivated successfully. Sep 13 00:48:42.864728 env[1734]: time="2025-09-13T00:48:42.864295303Z" level=info msg="CreateContainer within sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:48:42.895961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2700039563.mount: Deactivated successfully. Sep 13 00:48:42.903446 env[1734]: time="2025-09-13T00:48:42.903388030Z" level=info msg="CreateContainer within sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c3338064be473c96d9e2efa5ecc770024253416b5ffab6c95c2881f6329a4765\"" Sep 13 00:48:42.906098 env[1734]: time="2025-09-13T00:48:42.906056838Z" level=info msg="StartContainer for \"c3338064be473c96d9e2efa5ecc770024253416b5ffab6c95c2881f6329a4765\"" Sep 13 00:48:42.936794 systemd[1]: Started cri-containerd-c3338064be473c96d9e2efa5ecc770024253416b5ffab6c95c2881f6329a4765.scope. Sep 13 00:48:42.998937 env[1734]: time="2025-09-13T00:48:42.998883244Z" level=info msg="StartContainer for \"c3338064be473c96d9e2efa5ecc770024253416b5ffab6c95c2881f6329a4765\" returns successfully" Sep 13 00:48:43.018185 systemd[1]: cri-containerd-c3338064be473c96d9e2efa5ecc770024253416b5ffab6c95c2881f6329a4765.scope: Deactivated successfully. 
Sep 13 00:48:43.134933 env[1734]: time="2025-09-13T00:48:43.134867660Z" level=info msg="shim disconnected" id=c3338064be473c96d9e2efa5ecc770024253416b5ffab6c95c2881f6329a4765 Sep 13 00:48:43.135240 env[1734]: time="2025-09-13T00:48:43.135216428Z" level=warning msg="cleaning up after shim disconnected" id=c3338064be473c96d9e2efa5ecc770024253416b5ffab6c95c2881f6329a4765 namespace=k8s.io Sep 13 00:48:43.135355 env[1734]: time="2025-09-13T00:48:43.135313513Z" level=info msg="cleaning up dead shim" Sep 13 00:48:43.149989 env[1734]: time="2025-09-13T00:48:43.149944347Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3379 runtime=io.containerd.runc.v2\n" Sep 13 00:48:43.393919 env[1734]: time="2025-09-13T00:48:43.390732337Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:43.395764 env[1734]: time="2025-09-13T00:48:43.395720328Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:43.398912 env[1734]: time="2025-09-13T00:48:43.398873502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:43.399456 env[1734]: time="2025-09-13T00:48:43.399423533Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 00:48:43.402415 env[1734]: 
time="2025-09-13T00:48:43.402289888Z" level=info msg="CreateContainer within sandbox \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:48:43.429702 env[1734]: time="2025-09-13T00:48:43.429632087Z" level=info msg="CreateContainer within sandbox \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12\"" Sep 13 00:48:43.431232 env[1734]: time="2025-09-13T00:48:43.430478997Z" level=info msg="StartContainer for \"da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12\"" Sep 13 00:48:43.447202 systemd[1]: Started cri-containerd-da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12.scope. Sep 13 00:48:43.507855 env[1734]: time="2025-09-13T00:48:43.507791301Z" level=info msg="StartContainer for \"da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12\" returns successfully" Sep 13 00:48:43.869921 env[1734]: time="2025-09-13T00:48:43.869787461Z" level=info msg="CreateContainer within sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:48:43.902463 env[1734]: time="2025-09-13T00:48:43.902410279Z" level=info msg="CreateContainer within sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6a809036dadea88f87ac964d579783361f60dfedf9a43d3ab6add9b9033f4f56\"" Sep 13 00:48:43.902926 env[1734]: time="2025-09-13T00:48:43.902888866Z" level=info msg="StartContainer for \"6a809036dadea88f87ac964d579783361f60dfedf9a43d3ab6add9b9033f4f56\"" Sep 13 00:48:43.930105 systemd[1]: Started cri-containerd-6a809036dadea88f87ac964d579783361f60dfedf9a43d3ab6add9b9033f4f56.scope. 
Sep 13 00:48:43.995122 env[1734]: time="2025-09-13T00:48:43.995062974Z" level=info msg="StartContainer for \"6a809036dadea88f87ac964d579783361f60dfedf9a43d3ab6add9b9033f4f56\" returns successfully" Sep 13 00:48:44.005245 systemd[1]: cri-containerd-6a809036dadea88f87ac964d579783361f60dfedf9a43d3ab6add9b9033f4f56.scope: Deactivated successfully. Sep 13 00:48:44.067903 env[1734]: time="2025-09-13T00:48:44.067847099Z" level=info msg="shim disconnected" id=6a809036dadea88f87ac964d579783361f60dfedf9a43d3ab6add9b9033f4f56 Sep 13 00:48:44.068326 env[1734]: time="2025-09-13T00:48:44.068301107Z" level=warning msg="cleaning up after shim disconnected" id=6a809036dadea88f87ac964d579783361f60dfedf9a43d3ab6add9b9033f4f56 namespace=k8s.io Sep 13 00:48:44.068531 env[1734]: time="2025-09-13T00:48:44.068486575Z" level=info msg="cleaning up dead shim" Sep 13 00:48:44.083818 env[1734]: time="2025-09-13T00:48:44.083766117Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3476 runtime=io.containerd.runc.v2\n" Sep 13 00:48:44.897291 env[1734]: time="2025-09-13T00:48:44.897215885Z" level=info msg="CreateContainer within sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:48:44.921342 kubelet[2576]: I0913 00:48:44.921269 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-mxrjx" podStartSLOduration=2.495850086 podStartE2EDuration="12.921244294s" podCreationTimestamp="2025-09-13 00:48:32 +0000 UTC" firstStartedPulling="2025-09-13 00:48:32.974791086 +0000 UTC m=+4.415756359" lastFinishedPulling="2025-09-13 00:48:43.400185288 +0000 UTC m=+14.841150567" observedRunningTime="2025-09-13 00:48:44.011807697 +0000 UTC m=+15.452772993" watchObservedRunningTime="2025-09-13 00:48:44.921244294 +0000 UTC m=+16.362209651" Sep 13 00:48:44.932208 env[1734]: 
time="2025-09-13T00:48:44.932130308Z" level=info msg="CreateContainer within sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7\"" Sep 13 00:48:44.934743 env[1734]: time="2025-09-13T00:48:44.933331442Z" level=info msg="StartContainer for \"7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7\"" Sep 13 00:48:44.969191 systemd[1]: Started cri-containerd-7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7.scope. Sep 13 00:48:45.008531 env[1734]: time="2025-09-13T00:48:45.008419515Z" level=info msg="StartContainer for \"7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7\" returns successfully" Sep 13 00:48:45.251763 kubelet[2576]: I0913 00:48:45.251218 2576 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:48:45.298417 systemd[1]: Created slice kubepods-burstable-poda92ee3dd_b22d_4155_a033_72db3d8e264c.slice. Sep 13 00:48:45.307269 systemd[1]: Created slice kubepods-burstable-pode63bfb4e_4d7e_491e_9c01_85a4c8b5d454.slice. 
Sep 13 00:48:45.426375 kubelet[2576]: I0913 00:48:45.426292 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhzql\" (UniqueName: \"kubernetes.io/projected/e63bfb4e-4d7e-491e-9c01-85a4c8b5d454-kube-api-access-xhzql\") pod \"coredns-7c65d6cfc9-8b56p\" (UID: \"e63bfb4e-4d7e-491e-9c01-85a4c8b5d454\") " pod="kube-system/coredns-7c65d6cfc9-8b56p" Sep 13 00:48:45.426375 kubelet[2576]: I0913 00:48:45.426378 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nptmr\" (UniqueName: \"kubernetes.io/projected/a92ee3dd-b22d-4155-a033-72db3d8e264c-kube-api-access-nptmr\") pod \"coredns-7c65d6cfc9-hgzmk\" (UID: \"a92ee3dd-b22d-4155-a033-72db3d8e264c\") " pod="kube-system/coredns-7c65d6cfc9-hgzmk" Sep 13 00:48:45.426655 kubelet[2576]: I0913 00:48:45.426408 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a92ee3dd-b22d-4155-a033-72db3d8e264c-config-volume\") pod \"coredns-7c65d6cfc9-hgzmk\" (UID: \"a92ee3dd-b22d-4155-a033-72db3d8e264c\") " pod="kube-system/coredns-7c65d6cfc9-hgzmk" Sep 13 00:48:45.426655 kubelet[2576]: I0913 00:48:45.426434 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e63bfb4e-4d7e-491e-9c01-85a4c8b5d454-config-volume\") pod \"coredns-7c65d6cfc9-8b56p\" (UID: \"e63bfb4e-4d7e-491e-9c01-85a4c8b5d454\") " pod="kube-system/coredns-7c65d6cfc9-8b56p" Sep 13 00:48:45.603070 env[1734]: time="2025-09-13T00:48:45.602934252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hgzmk,Uid:a92ee3dd-b22d-4155-a033-72db3d8e264c,Namespace:kube-system,Attempt:0,}" Sep 13 00:48:45.612375 env[1734]: time="2025-09-13T00:48:45.612328392Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-8b56p,Uid:e63bfb4e-4d7e-491e-9c01-85a4c8b5d454,Namespace:kube-system,Attempt:0,}" Sep 13 00:48:47.772622 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 00:48:47.773396 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:48:47.775160 systemd-networkd[1469]: cilium_host: Link UP Sep 13 00:48:47.775500 (udev-worker)[3601]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:48:47.776193 systemd-networkd[1469]: cilium_net: Link UP Sep 13 00:48:47.778021 systemd-networkd[1469]: cilium_net: Gained carrier Sep 13 00:48:47.778457 systemd-networkd[1469]: cilium_host: Gained carrier Sep 13 00:48:47.779488 (udev-worker)[3636]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:48:47.788138 systemd-networkd[1469]: cilium_net: Gained IPv6LL Sep 13 00:48:47.913433 systemd-networkd[1469]: cilium_vxlan: Link UP Sep 13 00:48:47.913442 systemd-networkd[1469]: cilium_vxlan: Gained carrier Sep 13 00:48:47.990753 systemd-networkd[1469]: cilium_host: Gained IPv6LL Sep 13 00:48:48.445848 kernel: NET: Registered PF_ALG protocol family Sep 13 00:48:49.322284 (udev-worker)[3646]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:48:49.346947 systemd-networkd[1469]: lxc_health: Link UP Sep 13 00:48:49.356091 systemd-networkd[1469]: lxc_health: Gained carrier Sep 13 00:48:49.356612 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:48:49.702627 systemd-networkd[1469]: lxc494d6854550c: Link UP Sep 13 00:48:49.712995 kernel: eth0: renamed from tmp019c6 Sep 13 00:48:49.721135 systemd-networkd[1469]: lxc494d6854550c: Gained carrier Sep 13 00:48:49.721741 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc494d6854550c: link becomes ready Sep 13 00:48:49.724208 systemd-networkd[1469]: lxcd0dae4feab02: Link UP Sep 13 00:48:49.739982 kernel: eth0: renamed from tmp12d09 Sep 13 00:48:49.748335 (udev-worker)[3648]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:48:49.758620 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd0dae4feab02: link becomes ready Sep 13 00:48:49.758944 systemd-networkd[1469]: lxcd0dae4feab02: Gained carrier Sep 13 00:48:49.759166 systemd-networkd[1469]: cilium_vxlan: Gained IPv6LL Sep 13 00:48:50.711533 kubelet[2576]: I0913 00:48:50.711458 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6sjzm" podStartSLOduration=10.171705696 podStartE2EDuration="18.711431249s" podCreationTimestamp="2025-09-13 00:48:32 +0000 UTC" firstStartedPulling="2025-09-13 00:48:32.822264866 +0000 UTC m=+4.263230145" lastFinishedPulling="2025-09-13 00:48:41.361990412 +0000 UTC m=+12.802955698" observedRunningTime="2025-09-13 00:48:45.947401921 +0000 UTC m=+17.388367218" watchObservedRunningTime="2025-09-13 00:48:50.711431249 +0000 UTC m=+22.152396542" Sep 13 00:48:50.718750 systemd-networkd[1469]: lxc_health: Gained IPv6LL Sep 13 00:48:50.783782 systemd-networkd[1469]: lxcd0dae4feab02: Gained IPv6LL Sep 13 00:48:51.104603 systemd-networkd[1469]: lxc494d6854550c: Gained IPv6LL Sep 13 00:48:54.244910 env[1734]: time="2025-09-13T00:48:54.244817105Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:54.245257 env[1734]: time="2025-09-13T00:48:54.244916702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:54.245257 env[1734]: time="2025-09-13T00:48:54.244940860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:54.245257 env[1734]: time="2025-09-13T00:48:54.245146166Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/12d09c3885836ae192d0aea42019951039530d13c15618e09f0864edad2d3ec8 pid=4014 runtime=io.containerd.runc.v2 Sep 13 00:48:54.282956 systemd[1]: run-containerd-runc-k8s.io-12d09c3885836ae192d0aea42019951039530d13c15618e09f0864edad2d3ec8-runc.Kb9BvC.mount: Deactivated successfully. Sep 13 00:48:54.286003 systemd[1]: Started cri-containerd-12d09c3885836ae192d0aea42019951039530d13c15618e09f0864edad2d3ec8.scope. Sep 13 00:48:54.298068 env[1734]: time="2025-09-13T00:48:54.297989194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:54.298705 env[1734]: time="2025-09-13T00:48:54.298037520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:54.298705 env[1734]: time="2025-09-13T00:48:54.298048478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:54.298705 env[1734]: time="2025-09-13T00:48:54.298255754Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/019c690f55d25727b3882d8228bbad3579fd6419bb022ca843b89628f21e152e pid=4039 runtime=io.containerd.runc.v2 Sep 13 00:48:54.321618 systemd[1]: Started cri-containerd-019c690f55d25727b3882d8228bbad3579fd6419bb022ca843b89628f21e152e.scope. Sep 13 00:48:54.382769 env[1734]: time="2025-09-13T00:48:54.382723239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8b56p,Uid:e63bfb4e-4d7e-491e-9c01-85a4c8b5d454,Namespace:kube-system,Attempt:0,} returns sandbox id \"12d09c3885836ae192d0aea42019951039530d13c15618e09f0864edad2d3ec8\"" Sep 13 00:48:54.396166 env[1734]: time="2025-09-13T00:48:54.395901205Z" level=info msg="CreateContainer within sandbox \"12d09c3885836ae192d0aea42019951039530d13c15618e09f0864edad2d3ec8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:48:54.400620 env[1734]: time="2025-09-13T00:48:54.400545525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hgzmk,Uid:a92ee3dd-b22d-4155-a033-72db3d8e264c,Namespace:kube-system,Attempt:0,} returns sandbox id \"019c690f55d25727b3882d8228bbad3579fd6419bb022ca843b89628f21e152e\"" Sep 13 00:48:54.404804 env[1734]: time="2025-09-13T00:48:54.404759590Z" level=info msg="CreateContainer within sandbox \"019c690f55d25727b3882d8228bbad3579fd6419bb022ca843b89628f21e152e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:48:54.438669 env[1734]: time="2025-09-13T00:48:54.438608197Z" level=info msg="CreateContainer within sandbox \"019c690f55d25727b3882d8228bbad3579fd6419bb022ca843b89628f21e152e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27c504de2fd8dcfee16a0af6b874cf32150cb4995ba95345182bf6e127729a9e\"" Sep 13 00:48:54.439706 env[1734]: 
time="2025-09-13T00:48:54.439663113Z" level=info msg="StartContainer for \"27c504de2fd8dcfee16a0af6b874cf32150cb4995ba95345182bf6e127729a9e\"" Sep 13 00:48:54.445504 env[1734]: time="2025-09-13T00:48:54.445442247Z" level=info msg="CreateContainer within sandbox \"12d09c3885836ae192d0aea42019951039530d13c15618e09f0864edad2d3ec8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"abf21b65f94809844b81e109f4af6e5131ae701eb2aa03ad1a5aa4031cc48136\"" Sep 13 00:48:54.446472 env[1734]: time="2025-09-13T00:48:54.446430370Z" level=info msg="StartContainer for \"abf21b65f94809844b81e109f4af6e5131ae701eb2aa03ad1a5aa4031cc48136\"" Sep 13 00:48:54.475866 systemd[1]: Started cri-containerd-27c504de2fd8dcfee16a0af6b874cf32150cb4995ba95345182bf6e127729a9e.scope. Sep 13 00:48:54.493889 systemd[1]: Started cri-containerd-abf21b65f94809844b81e109f4af6e5131ae701eb2aa03ad1a5aa4031cc48136.scope. Sep 13 00:48:54.574675 env[1734]: time="2025-09-13T00:48:54.574574004Z" level=info msg="StartContainer for \"abf21b65f94809844b81e109f4af6e5131ae701eb2aa03ad1a5aa4031cc48136\" returns successfully" Sep 13 00:48:54.579241 env[1734]: time="2025-09-13T00:48:54.579196759Z" level=info msg="StartContainer for \"27c504de2fd8dcfee16a0af6b874cf32150cb4995ba95345182bf6e127729a9e\" returns successfully" Sep 13 00:48:54.956779 kubelet[2576]: I0913 00:48:54.956695 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8b56p" podStartSLOduration=22.956667439 podStartE2EDuration="22.956667439s" podCreationTimestamp="2025-09-13 00:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:48:54.937363907 +0000 UTC m=+26.378329203" watchObservedRunningTime="2025-09-13 00:48:54.956667439 +0000 UTC m=+26.397632726" Sep 13 00:48:55.616700 kubelet[2576]: I0913 00:48:55.616645 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-7c65d6cfc9-hgzmk" podStartSLOduration=23.616626375 podStartE2EDuration="23.616626375s" podCreationTimestamp="2025-09-13 00:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:48:54.958710213 +0000 UTC m=+26.399675515" watchObservedRunningTime="2025-09-13 00:48:55.616626375 +0000 UTC m=+27.057591670" Sep 13 00:49:01.027367 systemd[1]: Started sshd@5-172.31.24.139:22-147.75.109.163:54392.service. Sep 13 00:49:01.226773 sshd[4177]: Accepted publickey for core from 147.75.109.163 port 54392 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:01.231714 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:01.247648 systemd-logind[1729]: New session 6 of user core. Sep 13 00:49:01.249910 systemd[1]: Started session-6.scope. Sep 13 00:49:02.639702 sshd[4177]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:02.675344 systemd-logind[1729]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:49:02.685225 systemd[1]: sshd@5-172.31.24.139:22-147.75.109.163:54392.service: Deactivated successfully. Sep 13 00:49:02.692915 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:49:02.695647 systemd-logind[1729]: Removed session 6. Sep 13 00:49:07.648137 systemd[1]: Started sshd@6-172.31.24.139:22-147.75.109.163:54404.service. Sep 13 00:49:07.809422 sshd[4193]: Accepted publickey for core from 147.75.109.163 port 54404 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:07.810887 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:07.817061 systemd[1]: Started session-7.scope. Sep 13 00:49:07.817864 systemd-logind[1729]: New session 7 of user core. 
Sep 13 00:49:08.022937 sshd[4193]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:08.025943 systemd[1]: sshd@6-172.31.24.139:22-147.75.109.163:54404.service: Deactivated successfully. Sep 13 00:49:08.026703 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:49:08.027384 systemd-logind[1729]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:49:08.028399 systemd-logind[1729]: Removed session 7. Sep 13 00:49:13.047908 systemd[1]: Started sshd@7-172.31.24.139:22-147.75.109.163:54194.service. Sep 13 00:49:13.211932 sshd[4205]: Accepted publickey for core from 147.75.109.163 port 54194 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:13.213692 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:13.219473 systemd[1]: Started session-8.scope. Sep 13 00:49:13.220260 systemd-logind[1729]: New session 8 of user core. Sep 13 00:49:13.428131 sshd[4205]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:13.431537 systemd-logind[1729]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:49:13.431790 systemd[1]: sshd@7-172.31.24.139:22-147.75.109.163:54194.service: Deactivated successfully. Sep 13 00:49:13.432875 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:49:13.434154 systemd-logind[1729]: Removed session 8. Sep 13 00:49:18.453276 systemd[1]: Started sshd@8-172.31.24.139:22-147.75.109.163:54200.service. Sep 13 00:49:18.626011 sshd[4217]: Accepted publickey for core from 147.75.109.163 port 54200 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:18.627484 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:18.633192 systemd-logind[1729]: New session 9 of user core. Sep 13 00:49:18.633698 systemd[1]: Started session-9.scope. 
Sep 13 00:49:18.830493 sshd[4217]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:18.834804 systemd[1]: sshd@8-172.31.24.139:22-147.75.109.163:54200.service: Deactivated successfully. Sep 13 00:49:18.835631 systemd-logind[1729]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:49:18.835834 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:49:18.837361 systemd-logind[1729]: Removed session 9. Sep 13 00:49:18.857164 systemd[1]: Started sshd@9-172.31.24.139:22-147.75.109.163:54202.service. Sep 13 00:49:19.019147 sshd[4230]: Accepted publickey for core from 147.75.109.163 port 54202 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:19.020725 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:19.026240 systemd[1]: Started session-10.scope. Sep 13 00:49:19.026601 systemd-logind[1729]: New session 10 of user core. Sep 13 00:49:19.298896 sshd[4230]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:19.310931 systemd[1]: sshd@9-172.31.24.139:22-147.75.109.163:54202.service: Deactivated successfully. Sep 13 00:49:19.312637 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:49:19.312637 systemd-logind[1729]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:49:19.314159 systemd-logind[1729]: Removed session 10. Sep 13 00:49:19.325898 systemd[1]: Started sshd@10-172.31.24.139:22-147.75.109.163:54212.service. Sep 13 00:49:19.500403 sshd[4240]: Accepted publickey for core from 147.75.109.163 port 54212 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:19.502444 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:19.509535 systemd-logind[1729]: New session 11 of user core. Sep 13 00:49:19.510281 systemd[1]: Started session-11.scope. 
Sep 13 00:49:19.722585 sshd[4240]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:19.725904 systemd[1]: sshd@10-172.31.24.139:22-147.75.109.163:54212.service: Deactivated successfully. Sep 13 00:49:19.726925 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:49:19.727998 systemd-logind[1729]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:49:19.729252 systemd-logind[1729]: Removed session 11. Sep 13 00:49:24.749308 systemd[1]: Started sshd@11-172.31.24.139:22-147.75.109.163:35556.service. Sep 13 00:49:24.906046 sshd[4254]: Accepted publickey for core from 147.75.109.163 port 35556 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:24.907682 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:24.913710 systemd[1]: Started session-12.scope. Sep 13 00:49:24.914632 systemd-logind[1729]: New session 12 of user core. Sep 13 00:49:25.104671 sshd[4254]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:25.107884 systemd[1]: sshd@11-172.31.24.139:22-147.75.109.163:35556.service: Deactivated successfully. Sep 13 00:49:25.108687 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:49:25.109385 systemd-logind[1729]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:49:25.110246 systemd-logind[1729]: Removed session 12. Sep 13 00:49:30.132083 systemd[1]: Started sshd@12-172.31.24.139:22-147.75.109.163:50262.service. Sep 13 00:49:30.288663 sshd[4268]: Accepted publickey for core from 147.75.109.163 port 50262 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:30.290409 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:30.295213 systemd-logind[1729]: New session 13 of user core. Sep 13 00:49:30.295724 systemd[1]: Started session-13.scope. 
Sep 13 00:49:30.523333 sshd[4268]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:30.526752 systemd[1]: sshd@12-172.31.24.139:22-147.75.109.163:50262.service: Deactivated successfully. Sep 13 00:49:30.527470 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:49:30.528351 systemd-logind[1729]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:49:30.529176 systemd-logind[1729]: Removed session 13. Sep 13 00:49:30.550803 systemd[1]: Started sshd@13-172.31.24.139:22-147.75.109.163:50276.service. Sep 13 00:49:30.708754 sshd[4279]: Accepted publickey for core from 147.75.109.163 port 50276 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:30.712316 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:30.718898 systemd[1]: Started session-14.scope. Sep 13 00:49:30.719888 systemd-logind[1729]: New session 14 of user core. Sep 13 00:49:31.467031 sshd[4279]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:31.471253 systemd[1]: sshd@13-172.31.24.139:22-147.75.109.163:50276.service: Deactivated successfully. Sep 13 00:49:31.472491 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:49:31.473294 systemd-logind[1729]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:49:31.474418 systemd-logind[1729]: Removed session 14. Sep 13 00:49:31.492204 systemd[1]: Started sshd@14-172.31.24.139:22-147.75.109.163:50286.service. Sep 13 00:49:31.666047 sshd[4289]: Accepted publickey for core from 147.75.109.163 port 50286 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:31.667858 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:31.673587 systemd[1]: Started session-15.scope. Sep 13 00:49:31.674212 systemd-logind[1729]: New session 15 of user core. 
Sep 13 00:49:33.351261 sshd[4289]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:33.362893 systemd[1]: sshd@14-172.31.24.139:22-147.75.109.163:50286.service: Deactivated successfully. Sep 13 00:49:33.363893 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:49:33.363903 systemd-logind[1729]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:49:33.365437 systemd-logind[1729]: Removed session 15. Sep 13 00:49:33.376091 systemd[1]: Started sshd@15-172.31.24.139:22-147.75.109.163:50300.service. Sep 13 00:49:33.529949 sshd[4307]: Accepted publickey for core from 147.75.109.163 port 50300 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:33.531779 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:33.536875 systemd[1]: Started session-16.scope. Sep 13 00:49:33.537191 systemd-logind[1729]: New session 16 of user core. Sep 13 00:49:33.958978 sshd[4307]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:33.961660 systemd[1]: sshd@15-172.31.24.139:22-147.75.109.163:50300.service: Deactivated successfully. Sep 13 00:49:33.963003 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:49:33.963503 systemd-logind[1729]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:49:33.964409 systemd-logind[1729]: Removed session 16. Sep 13 00:49:33.982876 systemd[1]: Started sshd@16-172.31.24.139:22-147.75.109.163:50308.service. Sep 13 00:49:34.133237 sshd[4316]: Accepted publickey for core from 147.75.109.163 port 50308 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:34.135039 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:34.141887 systemd[1]: Started session-17.scope. Sep 13 00:49:34.142415 systemd-logind[1729]: New session 17 of user core. 
Sep 13 00:49:34.335874 sshd[4316]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:34.339325 systemd[1]: sshd@16-172.31.24.139:22-147.75.109.163:50308.service: Deactivated successfully. Sep 13 00:49:34.340184 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:49:34.340783 systemd-logind[1729]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:49:34.341448 systemd-logind[1729]: Removed session 17. Sep 13 00:49:39.362510 systemd[1]: Started sshd@17-172.31.24.139:22-147.75.109.163:50324.service. Sep 13 00:49:39.517441 sshd[4329]: Accepted publickey for core from 147.75.109.163 port 50324 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:39.518851 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:39.526337 systemd[1]: Started session-18.scope. Sep 13 00:49:39.526893 systemd-logind[1729]: New session 18 of user core. Sep 13 00:49:39.735500 sshd[4329]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:39.738494 systemd[1]: sshd@17-172.31.24.139:22-147.75.109.163:50324.service: Deactivated successfully. Sep 13 00:49:39.739219 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:49:39.740743 systemd-logind[1729]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:49:39.742217 systemd-logind[1729]: Removed session 18. Sep 13 00:49:44.762507 systemd[1]: Started sshd@18-172.31.24.139:22-147.75.109.163:52792.service. Sep 13 00:49:44.925726 sshd[4344]: Accepted publickey for core from 147.75.109.163 port 52792 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:44.931923 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:44.947186 systemd[1]: Started session-19.scope. Sep 13 00:49:44.947780 systemd-logind[1729]: New session 19 of user core. 
Sep 13 00:49:45.201247 sshd[4344]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:45.205121 systemd[1]: sshd@18-172.31.24.139:22-147.75.109.163:52792.service: Deactivated successfully. Sep 13 00:49:45.206157 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:49:45.207661 systemd-logind[1729]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:49:45.211315 systemd-logind[1729]: Removed session 19. Sep 13 00:49:50.227289 systemd[1]: Started sshd@19-172.31.24.139:22-147.75.109.163:41258.service. Sep 13 00:49:50.382658 sshd[4356]: Accepted publickey for core from 147.75.109.163 port 41258 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:50.384042 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:50.390732 systemd[1]: Started session-20.scope. Sep 13 00:49:50.392666 systemd-logind[1729]: New session 20 of user core. Sep 13 00:49:50.581863 sshd[4356]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:50.585029 systemd[1]: sshd@19-172.31.24.139:22-147.75.109.163:41258.service: Deactivated successfully. Sep 13 00:49:50.585769 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:49:50.586370 systemd-logind[1729]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:49:50.587438 systemd-logind[1729]: Removed session 20. Sep 13 00:49:55.608040 systemd[1]: Started sshd@20-172.31.24.139:22-147.75.109.163:41274.service. Sep 13 00:49:55.766126 sshd[4368]: Accepted publickey for core from 147.75.109.163 port 41274 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:55.767610 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:55.772644 systemd[1]: Started session-21.scope. Sep 13 00:49:55.773129 systemd-logind[1729]: New session 21 of user core. 
Sep 13 00:49:55.966266 sshd[4368]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:55.969213 systemd[1]: sshd@20-172.31.24.139:22-147.75.109.163:41274.service: Deactivated successfully. Sep 13 00:49:55.969991 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:49:55.970755 systemd-logind[1729]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:49:55.971625 systemd-logind[1729]: Removed session 21. Sep 13 00:49:55.990898 systemd[1]: Started sshd@21-172.31.24.139:22-147.75.109.163:41284.service. Sep 13 00:49:56.143429 sshd[4380]: Accepted publickey for core from 147.75.109.163 port 41284 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:56.145316 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:56.150377 systemd[1]: Started session-22.scope. Sep 13 00:49:56.150892 systemd-logind[1729]: New session 22 of user core. Sep 13 00:49:57.897891 env[1734]: time="2025-09-13T00:49:57.897832533Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:49:57.909797 env[1734]: time="2025-09-13T00:49:57.909747190Z" level=info msg="StopContainer for \"7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7\" with timeout 2 (s)" Sep 13 00:49:57.910105 env[1734]: time="2025-09-13T00:49:57.909744622Z" level=info msg="StopContainer for \"da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12\" with timeout 30 (s)" Sep 13 00:49:57.910538 env[1734]: time="2025-09-13T00:49:57.910502500Z" level=info msg="Stop container \"7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7\" with signal terminated" Sep 13 00:49:57.910762 env[1734]: time="2025-09-13T00:49:57.910726847Z" level=info msg="Stop container 
\"da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12\" with signal terminated" Sep 13 00:49:57.922858 systemd-networkd[1469]: lxc_health: Link DOWN Sep 13 00:49:57.922873 systemd-networkd[1469]: lxc_health: Lost carrier Sep 13 00:49:57.929151 systemd[1]: cri-containerd-da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12.scope: Deactivated successfully. Sep 13 00:49:57.959623 systemd[1]: cri-containerd-7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7.scope: Deactivated successfully. Sep 13 00:49:57.959992 systemd[1]: cri-containerd-7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7.scope: Consumed 8.257s CPU time. Sep 13 00:49:57.975962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12-rootfs.mount: Deactivated successfully. Sep 13 00:49:58.003968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7-rootfs.mount: Deactivated successfully. 
Sep 13 00:49:58.006521 env[1734]: time="2025-09-13T00:49:58.006457381Z" level=info msg="shim disconnected" id=da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12
Sep 13 00:49:58.007163 env[1734]: time="2025-09-13T00:49:58.006525414Z" level=warning msg="cleaning up after shim disconnected" id=da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12 namespace=k8s.io
Sep 13 00:49:58.007163 env[1734]: time="2025-09-13T00:49:58.006539526Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:58.018691 env[1734]: time="2025-09-13T00:49:58.018629646Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4451 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:58.019754 env[1734]: time="2025-09-13T00:49:58.019691835Z" level=info msg="shim disconnected" id=7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7
Sep 13 00:49:58.019903 env[1734]: time="2025-09-13T00:49:58.019762276Z" level=warning msg="cleaning up after shim disconnected" id=7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7 namespace=k8s.io
Sep 13 00:49:58.019903 env[1734]: time="2025-09-13T00:49:58.019775536Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:58.022322 env[1734]: time="2025-09-13T00:49:58.022259563Z" level=info msg="StopContainer for \"da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12\" returns successfully"
Sep 13 00:49:58.024391 env[1734]: time="2025-09-13T00:49:58.024353453Z" level=info msg="StopPodSandbox for \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\""
Sep 13 00:49:58.024505 env[1734]: time="2025-09-13T00:49:58.024441536Z" level=info msg="Container to stop \"da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:58.027184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83-shm.mount: Deactivated successfully.
Sep 13 00:49:58.038626 env[1734]: time="2025-09-13T00:49:58.038507171Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4466 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:58.041205 systemd[1]: cri-containerd-94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83.scope: Deactivated successfully.
Sep 13 00:49:58.043383 env[1734]: time="2025-09-13T00:49:58.043337745Z" level=info msg="StopContainer for \"7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7\" returns successfully"
Sep 13 00:49:58.044437 env[1734]: time="2025-09-13T00:49:58.044401233Z" level=info msg="StopPodSandbox for \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\""
Sep 13 00:49:58.044665 env[1734]: time="2025-09-13T00:49:58.044627924Z" level=info msg="Container to stop \"5397a11c98c196ce0635a7bf3d8a4d1f3e682f4473bf7265cecd48d3ebd7ea46\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:58.044868 env[1734]: time="2025-09-13T00:49:58.044851665Z" level=info msg="Container to stop \"f2570aab0a3520e587f9bca5f3065a23c590a47507b51352a66f34b00b85226c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:58.044960 env[1734]: time="2025-09-13T00:49:58.044945613Z" level=info msg="Container to stop \"c3338064be473c96d9e2efa5ecc770024253416b5ffab6c95c2881f6329a4765\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:58.045082 env[1734]: time="2025-09-13T00:49:58.045062510Z" level=info msg="Container to stop \"6a809036dadea88f87ac964d579783361f60dfedf9a43d3ab6add9b9033f4f56\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:58.045183 env[1734]: time="2025-09-13T00:49:58.045159017Z" level=info msg="Container to stop \"7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:58.048530 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1-shm.mount: Deactivated successfully.
Sep 13 00:49:58.060037 systemd[1]: cri-containerd-4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1.scope: Deactivated successfully.
Sep 13 00:49:58.114758 env[1734]: time="2025-09-13T00:49:58.114699498Z" level=info msg="shim disconnected" id=94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83
Sep 13 00:49:58.114758 env[1734]: time="2025-09-13T00:49:58.114747272Z" level=warning msg="cleaning up after shim disconnected" id=94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83 namespace=k8s.io
Sep 13 00:49:58.114758 env[1734]: time="2025-09-13T00:49:58.114757242Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:58.115629 env[1734]: time="2025-09-13T00:49:58.115487808Z" level=info msg="shim disconnected" id=4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1
Sep 13 00:49:58.115629 env[1734]: time="2025-09-13T00:49:58.115542041Z" level=warning msg="cleaning up after shim disconnected" id=4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1 namespace=k8s.io
Sep 13 00:49:58.115629 env[1734]: time="2025-09-13T00:49:58.115565350Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:58.125780 env[1734]: time="2025-09-13T00:49:58.125729207Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4521 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:58.126068 env[1734]: time="2025-09-13T00:49:58.126033594Z" level=info msg="TearDown network for sandbox \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\" successfully"
Sep 13 00:49:58.126068 env[1734]: time="2025-09-13T00:49:58.126061878Z" level=info msg="StopPodSandbox for \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\" returns successfully"
Sep 13 00:49:58.126743 env[1734]: time="2025-09-13T00:49:58.126624626Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4522 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:58.127061 env[1734]: time="2025-09-13T00:49:58.126857384Z" level=info msg="TearDown network for sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" successfully"
Sep 13 00:49:58.127061 env[1734]: time="2025-09-13T00:49:58.126876933Z" level=info msg="StopPodSandbox for \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" returns successfully"
Sep 13 00:49:58.227625 kubelet[2576]: I0913 00:49:58.226510 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cilium-run\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.227625 kubelet[2576]: I0913 00:49:58.226607 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-bpf-maps\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.227625 kubelet[2576]: I0913 00:49:58.226634 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-lib-modules\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.227625 kubelet[2576]: I0913 00:49:58.226663 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxkgd\" (UniqueName: \"kubernetes.io/projected/61e0f3d2-1e48-43c1-b4f4-810ffb649699-kube-api-access-xxkgd\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.227625 kubelet[2576]: I0913 00:49:58.226687 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cilium-cgroup\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.227625 kubelet[2576]: I0913 00:49:58.226723 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/61e0f3d2-1e48-43c1-b4f4-810ffb649699-hubble-tls\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.228409 kubelet[2576]: I0913 00:49:58.226742 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-host-proc-sys-net\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.228409 kubelet[2576]: I0913 00:49:58.226765 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-etc-cni-netd\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.228409 kubelet[2576]: I0913 00:49:58.226788 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-hostproc\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.228409 kubelet[2576]: I0913 00:49:58.226811 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cni-path\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.228409 kubelet[2576]: I0913 00:49:58.226839 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/61e0f3d2-1e48-43c1-b4f4-810ffb649699-clustermesh-secrets\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.228409 kubelet[2576]: I0913 00:49:58.226865 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sdw7\" (UniqueName: \"kubernetes.io/projected/e381b6cc-d909-4933-a07d-7a7e2261f5f0-kube-api-access-6sdw7\") pod \"e381b6cc-d909-4933-a07d-7a7e2261f5f0\" (UID: \"e381b6cc-d909-4933-a07d-7a7e2261f5f0\") "
Sep 13 00:49:58.228611 kubelet[2576]: I0913 00:49:58.226896 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-host-proc-sys-kernel\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.228611 kubelet[2576]: I0913 00:49:58.226924 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e381b6cc-d909-4933-a07d-7a7e2261f5f0-cilium-config-path\") pod \"e381b6cc-d909-4933-a07d-7a7e2261f5f0\" (UID: \"e381b6cc-d909-4933-a07d-7a7e2261f5f0\") "
Sep 13 00:49:58.228611 kubelet[2576]: I0913 00:49:58.226947 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-xtables-lock\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.228611 kubelet[2576]: I0913 00:49:58.226973 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cilium-config-path\") pod \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\" (UID: \"61e0f3d2-1e48-43c1-b4f4-810ffb649699\") "
Sep 13 00:49:58.240077 kubelet[2576]: I0913 00:49:58.238635 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-hostproc" (OuterVolumeSpecName: "hostproc") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:49:58.240527 kubelet[2576]: I0913 00:49:58.233673 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:49:58.240636 kubelet[2576]: I0913 00:49:58.240129 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:49:58.240691 kubelet[2576]: I0913 00:49:58.240257 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:49:58.240735 kubelet[2576]: I0913 00:49:58.240271 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:49:58.240803 kubelet[2576]: I0913 00:49:58.240789 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cni-path" (OuterVolumeSpecName: "cni-path") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:49:58.242298 kubelet[2576]: I0913 00:49:58.242155 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:49:58.246973 kubelet[2576]: I0913 00:49:58.246920 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e0f3d2-1e48-43c1-b4f4-810ffb649699-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:49:58.247883 kubelet[2576]: I0913 00:49:58.247834 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e0f3d2-1e48-43c1-b4f4-810ffb649699-kube-api-access-xxkgd" (OuterVolumeSpecName: "kube-api-access-xxkgd") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "kube-api-access-xxkgd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:49:58.247981 kubelet[2576]: I0913 00:49:58.247908 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:49:58.249859 kubelet[2576]: I0913 00:49:58.249819 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e381b6cc-d909-4933-a07d-7a7e2261f5f0-kube-api-access-6sdw7" (OuterVolumeSpecName: "kube-api-access-6sdw7") pod "e381b6cc-d909-4933-a07d-7a7e2261f5f0" (UID: "e381b6cc-d909-4933-a07d-7a7e2261f5f0"). InnerVolumeSpecName "kube-api-access-6sdw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:49:58.249951 kubelet[2576]: I0913 00:49:58.249869 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:49:58.251793 kubelet[2576]: I0913 00:49:58.251746 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e381b6cc-d909-4933-a07d-7a7e2261f5f0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e381b6cc-d909-4933-a07d-7a7e2261f5f0" (UID: "e381b6cc-d909-4933-a07d-7a7e2261f5f0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:49:58.251793 kubelet[2576]: I0913 00:49:58.251792 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:49:58.251920 kubelet[2576]: I0913 00:49:58.251813 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:49:58.251920 kubelet[2576]: I0913 00:49:58.251883 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e0f3d2-1e48-43c1-b4f4-810ffb649699-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "61e0f3d2-1e48-43c1-b4f4-810ffb649699" (UID: "61e0f3d2-1e48-43c1-b4f4-810ffb649699"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:49:58.327536 kubelet[2576]: I0913 00:49:58.327483 2576 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-etc-cni-netd\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.327536 kubelet[2576]: I0913 00:49:58.327526 2576 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-hostproc\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.327536 kubelet[2576]: I0913 00:49:58.327535 2576 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cni-path\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.327536 kubelet[2576]: I0913 00:49:58.327544 2576 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/61e0f3d2-1e48-43c1-b4f4-810ffb649699-clustermesh-secrets\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.327820 kubelet[2576]: I0913 00:49:58.327577 2576 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sdw7\" (UniqueName: \"kubernetes.io/projected/e381b6cc-d909-4933-a07d-7a7e2261f5f0-kube-api-access-6sdw7\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.327820 kubelet[2576]: I0913 00:49:58.327586 2576 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-host-proc-sys-kernel\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.327820 kubelet[2576]: I0913 00:49:58.327594 2576 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e381b6cc-d909-4933-a07d-7a7e2261f5f0-cilium-config-path\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.327820 kubelet[2576]: I0913 00:49:58.327602 2576 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-xtables-lock\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.327820 kubelet[2576]: I0913 00:49:58.327610 2576 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cilium-config-path\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.327820 kubelet[2576]: I0913 00:49:58.327618 2576 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cilium-run\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.327820 kubelet[2576]: I0913 00:49:58.327625 2576 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-bpf-maps\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.327820 kubelet[2576]: I0913 00:49:58.327632 2576 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-lib-modules\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.328026 kubelet[2576]: I0913 00:49:58.327639 2576 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxkgd\" (UniqueName: \"kubernetes.io/projected/61e0f3d2-1e48-43c1-b4f4-810ffb649699-kube-api-access-xxkgd\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.328026 kubelet[2576]: I0913 00:49:58.327649 2576 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-cilium-cgroup\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.328026 kubelet[2576]: I0913 00:49:58.327660 2576 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/61e0f3d2-1e48-43c1-b4f4-810ffb649699-hubble-tls\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.328026 kubelet[2576]: I0913 00:49:58.327668 2576 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/61e0f3d2-1e48-43c1-b4f4-810ffb649699-host-proc-sys-net\") on node \"ip-172-31-24-139\" DevicePath \"\""
Sep 13 00:49:58.774577 systemd[1]: Removed slice kubepods-besteffort-pode381b6cc_d909_4933_a07d_7a7e2261f5f0.slice.
Sep 13 00:49:58.776001 systemd[1]: Removed slice kubepods-burstable-pod61e0f3d2_1e48_43c1_b4f4_810ffb649699.slice.
Sep 13 00:49:58.776136 systemd[1]: kubepods-burstable-pod61e0f3d2_1e48_43c1_b4f4_810ffb649699.slice: Consumed 8.401s CPU time.
Sep 13 00:49:58.857263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83-rootfs.mount: Deactivated successfully.
Sep 13 00:49:58.857371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1-rootfs.mount: Deactivated successfully.
Sep 13 00:49:58.857441 systemd[1]: var-lib-kubelet-pods-e381b6cc\x2dd909\x2d4933\x2da07d\x2d7a7e2261f5f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6sdw7.mount: Deactivated successfully.
Sep 13 00:49:58.857502 systemd[1]: var-lib-kubelet-pods-61e0f3d2\x2d1e48\x2d43c1\x2db4f4\x2d810ffb649699-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxxkgd.mount: Deactivated successfully.
Sep 13 00:49:58.857575 systemd[1]: var-lib-kubelet-pods-61e0f3d2\x2d1e48\x2d43c1\x2db4f4\x2d810ffb649699-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 00:49:58.857635 systemd[1]: var-lib-kubelet-pods-61e0f3d2\x2d1e48\x2d43c1\x2db4f4\x2d810ffb649699-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:49:58.888515 kubelet[2576]: E0913 00:49:58.888477 2576 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:49:59.114529 kubelet[2576]: I0913 00:49:59.112989 2576 scope.go:117] "RemoveContainer" containerID="da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12"
Sep 13 00:49:59.114851 env[1734]: time="2025-09-13T00:49:59.114709015Z" level=info msg="RemoveContainer for \"da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12\""
Sep 13 00:49:59.124986 env[1734]: time="2025-09-13T00:49:59.124932501Z" level=info msg="RemoveContainer for \"da21ba16545a1273ed4170a9679ce2b0c0c7189571ee76817f62b2f1b7abde12\" returns successfully"
Sep 13 00:49:59.125576 kubelet[2576]: I0913 00:49:59.125531 2576 scope.go:117] "RemoveContainer" containerID="7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7"
Sep 13 00:49:59.127913 env[1734]: time="2025-09-13T00:49:59.127859377Z" level=info msg="RemoveContainer for \"7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7\""
Sep 13 00:49:59.134466 env[1734]: time="2025-09-13T00:49:59.134419110Z" level=info msg="RemoveContainer for \"7ec172dc353ec84ce8123afa8ef3b667815b04b1ccdd47d4694a3ec19c5012b7\" returns successfully"
Sep 13 00:49:59.134809 kubelet[2576]: I0913 00:49:59.134780 2576 scope.go:117] "RemoveContainer" containerID="6a809036dadea88f87ac964d579783361f60dfedf9a43d3ab6add9b9033f4f56"
Sep 13 00:49:59.138061 env[1734]: time="2025-09-13T00:49:59.136523656Z" level=info msg="RemoveContainer for \"6a809036dadea88f87ac964d579783361f60dfedf9a43d3ab6add9b9033f4f56\""
Sep 13 00:49:59.144059 env[1734]: time="2025-09-13T00:49:59.144001009Z" level=info msg="RemoveContainer for \"6a809036dadea88f87ac964d579783361f60dfedf9a43d3ab6add9b9033f4f56\" returns successfully"
Sep 13 00:49:59.144669 kubelet[2576]: I0913 00:49:59.144622 2576 scope.go:117] "RemoveContainer" containerID="c3338064be473c96d9e2efa5ecc770024253416b5ffab6c95c2881f6329a4765"
Sep 13 00:49:59.147598 env[1734]: time="2025-09-13T00:49:59.147459873Z" level=info msg="RemoveContainer for \"c3338064be473c96d9e2efa5ecc770024253416b5ffab6c95c2881f6329a4765\""
Sep 13 00:49:59.164496 env[1734]: time="2025-09-13T00:49:59.164438610Z" level=info msg="RemoveContainer for \"c3338064be473c96d9e2efa5ecc770024253416b5ffab6c95c2881f6329a4765\" returns successfully"
Sep 13 00:49:59.164938 kubelet[2576]: I0913 00:49:59.164908 2576 scope.go:117] "RemoveContainer" containerID="5397a11c98c196ce0635a7bf3d8a4d1f3e682f4473bf7265cecd48d3ebd7ea46"
Sep 13 00:49:59.166757 env[1734]: time="2025-09-13T00:49:59.166694434Z" level=info msg="RemoveContainer for \"5397a11c98c196ce0635a7bf3d8a4d1f3e682f4473bf7265cecd48d3ebd7ea46\""
Sep 13 00:49:59.172531 env[1734]: time="2025-09-13T00:49:59.172475327Z" level=info msg="RemoveContainer for \"5397a11c98c196ce0635a7bf3d8a4d1f3e682f4473bf7265cecd48d3ebd7ea46\" returns successfully"
Sep 13 00:49:59.172938 kubelet[2576]: I0913 00:49:59.172904 2576 scope.go:117] "RemoveContainer" containerID="f2570aab0a3520e587f9bca5f3065a23c590a47507b51352a66f34b00b85226c"
Sep 13 00:49:59.175119 env[1734]: time="2025-09-13T00:49:59.175050352Z" level=info msg="RemoveContainer for \"f2570aab0a3520e587f9bca5f3065a23c590a47507b51352a66f34b00b85226c\""
Sep 13 00:49:59.180906 env[1734]: time="2025-09-13T00:49:59.180844397Z" level=info msg="RemoveContainer for \"f2570aab0a3520e587f9bca5f3065a23c590a47507b51352a66f34b00b85226c\" returns successfully"
Sep 13 00:49:59.815761 sshd[4380]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:59.818626 systemd[1]: sshd@21-172.31.24.139:22-147.75.109.163:41284.service: Deactivated successfully.
Sep 13 00:49:59.819322 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:49:59.819898 systemd-logind[1729]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:49:59.820887 systemd-logind[1729]: Removed session 22.
Sep 13 00:49:59.841225 systemd[1]: Started sshd@22-172.31.24.139:22-147.75.109.163:41300.service.
Sep 13 00:50:00.064585 sshd[4555]: Accepted publickey for core from 147.75.109.163 port 41300 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:50:00.084174 sshd[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:50:00.185706 systemd-logind[1729]: New session 23 of user core.
Sep 13 00:50:00.189400 systemd[1]: Started session-23.scope.
Sep 13 00:50:00.487106 kubelet[2576]: I0913 00:50:00.487043 2576 setters.go:600] "Node became not ready" node="ip-172-31-24-139" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:50:00Z","lastTransitionTime":"2025-09-13T00:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 00:50:00.772884 kubelet[2576]: I0913 00:50:00.772764 2576 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61e0f3d2-1e48-43c1-b4f4-810ffb649699" path="/var/lib/kubelet/pods/61e0f3d2-1e48-43c1-b4f4-810ffb649699/volumes"
Sep 13 00:50:00.773621 kubelet[2576]: I0913 00:50:00.773593 2576 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e381b6cc-d909-4933-a07d-7a7e2261f5f0" path="/var/lib/kubelet/pods/e381b6cc-d909-4933-a07d-7a7e2261f5f0/volumes"
Sep 13 00:50:01.196867 sshd[4555]: pam_unix(sshd:session): session closed for user core
Sep 13 00:50:01.204117 systemd[1]: sshd@22-172.31.24.139:22-147.75.109.163:41300.service: Deactivated successfully.
Sep 13 00:50:01.206012 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:50:01.227462 systemd-logind[1729]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:50:01.287360 systemd[1]: Started sshd@23-172.31.24.139:22-147.75.109.163:53458.service.
Sep 13 00:50:01.294008 systemd-logind[1729]: Removed session 23.
Sep 13 00:50:01.734104 sshd[4565]: Accepted publickey for core from 147.75.109.163 port 53458 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:50:01.750207 sshd[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:50:01.826878 systemd[1]: Started session-24.scope.
Sep 13 00:50:01.855216 systemd-logind[1729]: New session 24 of user core.
Sep 13 00:50:01.996770 kubelet[2576]: E0913 00:50:01.996638 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e381b6cc-d909-4933-a07d-7a7e2261f5f0" containerName="cilium-operator"
Sep 13 00:50:01.997346 kubelet[2576]: E0913 00:50:01.997323 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61e0f3d2-1e48-43c1-b4f4-810ffb649699" containerName="clean-cilium-state"
Sep 13 00:50:01.997467 kubelet[2576]: E0913 00:50:01.997442 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61e0f3d2-1e48-43c1-b4f4-810ffb649699" containerName="cilium-agent"
Sep 13 00:50:01.997550 kubelet[2576]: E0913 00:50:01.997534 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61e0f3d2-1e48-43c1-b4f4-810ffb649699" containerName="mount-cgroup"
Sep 13 00:50:01.997653 kubelet[2576]: E0913 00:50:01.997642 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61e0f3d2-1e48-43c1-b4f4-810ffb649699" containerName="apply-sysctl-overwrites"
Sep 13 00:50:01.997747 kubelet[2576]: E0913 00:50:01.997737 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61e0f3d2-1e48-43c1-b4f4-810ffb649699" containerName="mount-bpf-fs"
Sep 13 00:50:01.997887 kubelet[2576]: I0913 00:50:01.997870 2576 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e0f3d2-1e48-43c1-b4f4-810ffb649699" containerName="cilium-agent"
Sep 13 00:50:01.997968 kubelet[2576]: I0913 00:50:01.997957 2576 memory_manager.go:354] "RemoveStaleState removing state" podUID="e381b6cc-d909-4933-a07d-7a7e2261f5f0" containerName="cilium-operator"
Sep 13 00:50:02.167733 systemd[1]: Created slice kubepods-burstable-pod8cd4502f_c463_4fdd_88e0_fb6eef73bd04.slice.
Sep 13 00:50:02.265799 kubelet[2576]: I0913 00:50:02.265679 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-lib-modules\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.265799 kubelet[2576]: I0913 00:50:02.265737 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvhzq\" (UniqueName: \"kubernetes.io/projected/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-kube-api-access-tvhzq\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.265799 kubelet[2576]: I0913 00:50:02.265767 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-hostproc\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.265799 kubelet[2576]: I0913 00:50:02.265791 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-xtables-lock\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.266534 kubelet[2576]: I0913 00:50:02.265827 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-run\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.266534 kubelet[2576]: I0913 00:50:02.265848 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-bpf-maps\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.266534 kubelet[2576]: I0913 00:50:02.265867 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-cgroup\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.266534 kubelet[2576]: I0913 00:50:02.265892 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-clustermesh-secrets\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.266534 kubelet[2576]: I0913 00:50:02.265915 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-host-proc-sys-kernel\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.266534 kubelet[2576]: I0913 00:50:02.265935 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-hubble-tls\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.266868 kubelet[2576]: I0913 00:50:02.265955 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-etc-cni-netd\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.266868 kubelet[2576]: I0913 00:50:02.265975 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-host-proc-sys-net\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.266868 kubelet[2576]: I0913 00:50:02.265999 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-ipsec-secrets\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.266868 kubelet[2576]: I0913 00:50:02.266024 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cni-path\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.266868 kubelet[2576]: I0913 00:50:02.266061 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-config-path\") pod \"cilium-jmzdq\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " pod="kube-system/cilium-jmzdq"
Sep 13 00:50:02.759278 sshd[4565]: pam_unix(sshd:session): session closed for user core
Sep 13 00:50:02.791828 systemd[1]: sshd@23-172.31.24.139:22-147.75.109.163:53458.service: Deactivated successfully.
Sep 13 00:50:02.801860 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:50:02.809899 systemd-logind[1729]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:50:02.813633 systemd[1]: Started sshd@24-172.31.24.139:22-147.75.109.163:53460.service.
Sep 13 00:50:02.818105 systemd-logind[1729]: Removed session 24.
Sep 13 00:50:02.830014 env[1734]: time="2025-09-13T00:50:02.829963610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jmzdq,Uid:8cd4502f-c463-4fdd-88e0-fb6eef73bd04,Namespace:kube-system,Attempt:0,}"
Sep 13 00:50:02.888373 env[1734]: time="2025-09-13T00:50:02.888166208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:50:02.888690 env[1734]: time="2025-09-13T00:50:02.888625142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:50:02.888831 env[1734]: time="2025-09-13T00:50:02.888807493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:50:02.889525 env[1734]: time="2025-09-13T00:50:02.889383149Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034 pid=4590 runtime=io.containerd.runc.v2
Sep 13 00:50:02.966750 systemd[1]: Started cri-containerd-2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034.scope.
Sep 13 00:50:03.014633 sshd[4582]: Accepted publickey for core from 147.75.109.163 port 53460 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M
Sep 13 00:50:03.015959 sshd[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:50:03.024692 systemd[1]: Started session-25.scope.
Sep 13 00:50:03.031162 systemd-logind[1729]: New session 25 of user core.
Sep 13 00:50:03.032054 env[1734]: time="2025-09-13T00:50:03.031751362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jmzdq,Uid:8cd4502f-c463-4fdd-88e0-fb6eef73bd04,Namespace:kube-system,Attempt:0,} returns sandbox id \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\"" Sep 13 00:50:03.036596 env[1734]: time="2025-09-13T00:50:03.035805483Z" level=info msg="CreateContainer within sandbox \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:50:03.070071 env[1734]: time="2025-09-13T00:50:03.069998808Z" level=info msg="CreateContainer within sandbox \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15\"" Sep 13 00:50:03.080612 env[1734]: time="2025-09-13T00:50:03.080540894Z" level=info msg="StartContainer for \"884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15\"" Sep 13 00:50:03.131818 systemd[1]: Started cri-containerd-884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15.scope. Sep 13 00:50:03.155925 systemd[1]: cri-containerd-884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15.scope: Deactivated successfully. 
Sep 13 00:50:03.206646 env[1734]: time="2025-09-13T00:50:03.206502255Z" level=info msg="shim disconnected" id=884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15 Sep 13 00:50:03.206974 env[1734]: time="2025-09-13T00:50:03.206651790Z" level=warning msg="cleaning up after shim disconnected" id=884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15 namespace=k8s.io Sep 13 00:50:03.206974 env[1734]: time="2025-09-13T00:50:03.206700249Z" level=info msg="cleaning up dead shim" Sep 13 00:50:03.247730 env[1734]: time="2025-09-13T00:50:03.247671948Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4655 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T00:50:03Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 13 00:50:03.248409 env[1734]: time="2025-09-13T00:50:03.248247980Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Sep 13 00:50:03.249464 env[1734]: time="2025-09-13T00:50:03.249397528Z" level=error msg="Failed to pipe stdout of container \"884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15\"" error="reading from a closed fifo" Sep 13 00:50:03.249894 env[1734]: time="2025-09-13T00:50:03.249835125Z" level=error msg="Failed to pipe stderr of container \"884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15\"" error="reading from a closed fifo" Sep 13 00:50:03.253367 env[1734]: time="2025-09-13T00:50:03.253280141Z" level=error msg="StartContainer for \"884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Sep 13 00:50:03.254084 kubelet[2576]: E0913 00:50:03.253875 2576 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15" Sep 13 00:50:03.271859 kubelet[2576]: E0913 00:50:03.270711 2576 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 13 00:50:03.271859 kubelet[2576]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 13 00:50:03.271859 kubelet[2576]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 13 00:50:03.271859 kubelet[2576]: rm /hostbin/cilium-mount Sep 13 00:50:03.272381 kubelet[2576]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvhzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-jmzdq_kube-system(8cd4502f-c463-4fdd-88e0-fb6eef73bd04): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 13 00:50:03.272381 kubelet[2576]: > logger="UnhandledError" Sep 13 00:50:03.273947 kubelet[2576]: E0913 00:50:03.273880 2576 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jmzdq" podUID="8cd4502f-c463-4fdd-88e0-fb6eef73bd04" Sep 13 00:50:03.889423 kubelet[2576]: E0913 00:50:03.889375 2576 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:50:04.230780 env[1734]: time="2025-09-13T00:50:04.230627338Z" level=info msg="StopPodSandbox for \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\"" Sep 13 00:50:04.231359 env[1734]: time="2025-09-13T00:50:04.231319555Z" level=info msg="Container to stop \"884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:50:04.234403 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034-shm.mount: Deactivated successfully. Sep 13 00:50:04.245276 systemd[1]: cri-containerd-2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034.scope: Deactivated successfully. Sep 13 00:50:04.290295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034-rootfs.mount: Deactivated successfully. 
Sep 13 00:50:04.304336 env[1734]: time="2025-09-13T00:50:04.304287341Z" level=info msg="shim disconnected" id=2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034 Sep 13 00:50:04.304336 env[1734]: time="2025-09-13T00:50:04.304337823Z" level=warning msg="cleaning up after shim disconnected" id=2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034 namespace=k8s.io Sep 13 00:50:04.304336 env[1734]: time="2025-09-13T00:50:04.304347826Z" level=info msg="cleaning up dead shim" Sep 13 00:50:04.316374 env[1734]: time="2025-09-13T00:50:04.316321094Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4689 runtime=io.containerd.runc.v2\n" Sep 13 00:50:04.316834 env[1734]: time="2025-09-13T00:50:04.316786175Z" level=info msg="TearDown network for sandbox \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\" successfully" Sep 13 00:50:04.316947 env[1734]: time="2025-09-13T00:50:04.316829896Z" level=info msg="StopPodSandbox for \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\" returns successfully" Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.514724 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-cgroup\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.514815 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-hostproc\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.514841 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-run\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.514895 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvhzq\" (UniqueName: \"kubernetes.io/projected/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-kube-api-access-tvhzq\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.514924 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-host-proc-sys-net\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.514967 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-xtables-lock\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.514996 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-clustermesh-secrets\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.515038 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-host-proc-sys-kernel\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: 
I0913 00:50:04.515062 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cni-path\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.515084 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-bpf-maps\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.515125 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-etc-cni-netd\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.515153 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-config-path\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.515196 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-hubble-tls\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.515225 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-ipsec-secrets\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" 
(UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.515247 2576 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-lib-modules\") pod \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\" (UID: \"8cd4502f-c463-4fdd-88e0-fb6eef73bd04\") " Sep 13 00:50:04.515541 kubelet[2576]: I0913 00:50:04.515358 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:04.517805 kubelet[2576]: I0913 00:50:04.515396 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:04.517805 kubelet[2576]: I0913 00:50:04.515438 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-hostproc" (OuterVolumeSpecName: "hostproc") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:04.517805 kubelet[2576]: I0913 00:50:04.515459 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:04.518072 kubelet[2576]: I0913 00:50:04.518030 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cni-path" (OuterVolumeSpecName: "cni-path") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:04.518487 kubelet[2576]: I0913 00:50:04.518316 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:04.527547 systemd[1]: var-lib-kubelet-pods-8cd4502f\x2dc463\x2d4fdd\x2d88e0\x2dfb6eef73bd04-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtvhzq.mount: Deactivated successfully. Sep 13 00:50:04.527717 systemd[1]: var-lib-kubelet-pods-8cd4502f\x2dc463\x2d4fdd\x2d88e0\x2dfb6eef73bd04-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 13 00:50:04.532252 kubelet[2576]: I0913 00:50:04.518655 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:04.532452 kubelet[2576]: I0913 00:50:04.518668 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:04.532690 kubelet[2576]: I0913 00:50:04.518686 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:04.532806 kubelet[2576]: I0913 00:50:04.521940 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:50:04.532906 kubelet[2576]: I0913 00:50:04.526227 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:50:04.533005 kubelet[2576]: I0913 00:50:04.532180 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:04.534018 systemd[1]: var-lib-kubelet-pods-8cd4502f\x2dc463\x2d4fdd\x2d88e0\x2dfb6eef73bd04-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:50:04.535327 kubelet[2576]: I0913 00:50:04.535040 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-kube-api-access-tvhzq" (OuterVolumeSpecName: "kube-api-access-tvhzq") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "kube-api-access-tvhzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:50:04.538125 kubelet[2576]: I0913 00:50:04.538078 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:50:04.541693 systemd[1]: var-lib-kubelet-pods-8cd4502f\x2dc463\x2d4fdd\x2d88e0\x2dfb6eef73bd04-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 00:50:04.543373 kubelet[2576]: I0913 00:50:04.543320 2576 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8cd4502f-c463-4fdd-88e0-fb6eef73bd04" (UID: "8cd4502f-c463-4fdd-88e0-fb6eef73bd04"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:50:04.615619 kubelet[2576]: I0913 00:50:04.615571 2576 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-xtables-lock\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.615619 kubelet[2576]: I0913 00:50:04.615610 2576 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-clustermesh-secrets\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.615619 kubelet[2576]: I0913 00:50:04.615621 2576 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-host-proc-sys-kernel\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.615619 kubelet[2576]: I0913 00:50:04.615630 2576 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cni-path\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.616100 kubelet[2576]: I0913 00:50:04.615642 2576 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-bpf-maps\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.616100 kubelet[2576]: I0913 00:50:04.615650 2576 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-etc-cni-netd\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.616100 kubelet[2576]: I0913 00:50:04.615658 2576 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-config-path\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.616100 kubelet[2576]: I0913 00:50:04.615666 2576 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-hubble-tls\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.616100 kubelet[2576]: I0913 00:50:04.615674 2576 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-ipsec-secrets\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.616100 kubelet[2576]: I0913 00:50:04.615681 2576 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-lib-modules\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.616100 kubelet[2576]: I0913 00:50:04.615689 2576 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-cgroup\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.616100 kubelet[2576]: I0913 00:50:04.615696 2576 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-hostproc\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.616100 kubelet[2576]: I0913 00:50:04.615703 2576 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-cilium-run\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.616100 kubelet[2576]: I0913 00:50:04.615711 2576 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvhzq\" (UniqueName: \"kubernetes.io/projected/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-kube-api-access-tvhzq\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.616100 kubelet[2576]: I0913 00:50:04.615719 2576 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8cd4502f-c463-4fdd-88e0-fb6eef73bd04-host-proc-sys-net\") on node \"ip-172-31-24-139\" DevicePath \"\"" Sep 13 00:50:04.774741 systemd[1]: Removed slice kubepods-burstable-pod8cd4502f_c463_4fdd_88e0_fb6eef73bd04.slice. 
Sep 13 00:50:05.234631 kubelet[2576]: I0913 00:50:05.234608 2576 scope.go:117] "RemoveContainer" containerID="884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15" Sep 13 00:50:05.240011 env[1734]: time="2025-09-13T00:50:05.239978940Z" level=info msg="RemoveContainer for \"884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15\"" Sep 13 00:50:05.249283 env[1734]: time="2025-09-13T00:50:05.249235483Z" level=info msg="RemoveContainer for \"884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15\" returns successfully" Sep 13 00:50:05.288077 kubelet[2576]: E0913 00:50:05.288047 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8cd4502f-c463-4fdd-88e0-fb6eef73bd04" containerName="mount-cgroup" Sep 13 00:50:05.288335 kubelet[2576]: I0913 00:50:05.288306 2576 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd4502f-c463-4fdd-88e0-fb6eef73bd04" containerName="mount-cgroup" Sep 13 00:50:05.293415 systemd[1]: Created slice kubepods-burstable-pod5caeae74_27d2_40ef_b36c_8c9afde4d089.slice. 
Sep 13 00:50:05.419754 kubelet[2576]: I0913 00:50:05.419701 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5caeae74-27d2-40ef-b36c-8c9afde4d089-clustermesh-secrets\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.419754 kubelet[2576]: I0913 00:50:05.419744 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5caeae74-27d2-40ef-b36c-8c9afde4d089-cilium-run\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.419754 kubelet[2576]: I0913 00:50:05.419761 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5caeae74-27d2-40ef-b36c-8c9afde4d089-bpf-maps\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.420052 kubelet[2576]: I0913 00:50:05.419776 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5caeae74-27d2-40ef-b36c-8c9afde4d089-cilium-cgroup\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.420052 kubelet[2576]: I0913 00:50:05.419790 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5caeae74-27d2-40ef-b36c-8c9afde4d089-cni-path\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.420052 kubelet[2576]: I0913 00:50:05.419806 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5caeae74-27d2-40ef-b36c-8c9afde4d089-host-proc-sys-net\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.420052 kubelet[2576]: I0913 00:50:05.419824 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5caeae74-27d2-40ef-b36c-8c9afde4d089-etc-cni-netd\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.420052 kubelet[2576]: I0913 00:50:05.419844 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf52b\" (UniqueName: \"kubernetes.io/projected/5caeae74-27d2-40ef-b36c-8c9afde4d089-kube-api-access-nf52b\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.420052 kubelet[2576]: I0913 00:50:05.419861 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5caeae74-27d2-40ef-b36c-8c9afde4d089-hubble-tls\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.420052 kubelet[2576]: I0913 00:50:05.419875 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5caeae74-27d2-40ef-b36c-8c9afde4d089-hostproc\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.420052 kubelet[2576]: I0913 00:50:05.419889 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5caeae74-27d2-40ef-b36c-8c9afde4d089-xtables-lock\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.420052 kubelet[2576]: I0913 00:50:05.419903 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5caeae74-27d2-40ef-b36c-8c9afde4d089-host-proc-sys-kernel\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.420052 kubelet[2576]: I0913 00:50:05.419917 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5caeae74-27d2-40ef-b36c-8c9afde4d089-lib-modules\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.420052 kubelet[2576]: I0913 00:50:05.419940 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5caeae74-27d2-40ef-b36c-8c9afde4d089-cilium-config-path\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.420052 kubelet[2576]: I0913 00:50:05.419957 2576 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5caeae74-27d2-40ef-b36c-8c9afde4d089-cilium-ipsec-secrets\") pod \"cilium-6vmxj\" (UID: \"5caeae74-27d2-40ef-b36c-8c9afde4d089\") " pod="kube-system/cilium-6vmxj"
Sep 13 00:50:05.597369 env[1734]: time="2025-09-13T00:50:05.597260138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vmxj,Uid:5caeae74-27d2-40ef-b36c-8c9afde4d089,Namespace:kube-system,Attempt:0,}"
Sep 13 00:50:05.622083 env[1734]: time="2025-09-13T00:50:05.621998221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:50:05.622083 env[1734]: time="2025-09-13T00:50:05.622038672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:50:05.622083 env[1734]: time="2025-09-13T00:50:05.622050757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:50:05.622435 env[1734]: time="2025-09-13T00:50:05.622394593Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce pid=4718 runtime=io.containerd.runc.v2
Sep 13 00:50:05.644617 systemd[1]: Started cri-containerd-a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce.scope.
Sep 13 00:50:05.683441 env[1734]: time="2025-09-13T00:50:05.683388166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vmxj,Uid:5caeae74-27d2-40ef-b36c-8c9afde4d089,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce\""
Sep 13 00:50:05.688989 env[1734]: time="2025-09-13T00:50:05.688924997Z" level=info msg="CreateContainer within sandbox \"a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:50:05.710525 env[1734]: time="2025-09-13T00:50:05.710449859Z" level=info msg="CreateContainer within sandbox \"a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e8efb9f5d149c948f47a752c910fa64a26bf8c1cbf602a8f9926f0894cd6afd\""
Sep 13 00:50:05.711536 env[1734]: time="2025-09-13T00:50:05.711472547Z" level=info msg="StartContainer for \"6e8efb9f5d149c948f47a752c910fa64a26bf8c1cbf602a8f9926f0894cd6afd\""
Sep 13 00:50:05.762923 systemd[1]: Started cri-containerd-6e8efb9f5d149c948f47a752c910fa64a26bf8c1cbf602a8f9926f0894cd6afd.scope.
Sep 13 00:50:05.809932 env[1734]: time="2025-09-13T00:50:05.809817009Z" level=info msg="StartContainer for \"6e8efb9f5d149c948f47a752c910fa64a26bf8c1cbf602a8f9926f0894cd6afd\" returns successfully"
Sep 13 00:50:05.858227 systemd[1]: cri-containerd-6e8efb9f5d149c948f47a752c910fa64a26bf8c1cbf602a8f9926f0894cd6afd.scope: Deactivated successfully.
Sep 13 00:50:05.915723 env[1734]: time="2025-09-13T00:50:05.915667548Z" level=info msg="shim disconnected" id=6e8efb9f5d149c948f47a752c910fa64a26bf8c1cbf602a8f9926f0894cd6afd
Sep 13 00:50:05.916040 env[1734]: time="2025-09-13T00:50:05.916017065Z" level=warning msg="cleaning up after shim disconnected" id=6e8efb9f5d149c948f47a752c910fa64a26bf8c1cbf602a8f9926f0894cd6afd namespace=k8s.io
Sep 13 00:50:05.916164 env[1734]: time="2025-09-13T00:50:05.916149287Z" level=info msg="cleaning up dead shim"
Sep 13 00:50:05.935247 env[1734]: time="2025-09-13T00:50:05.935184951Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4803 runtime=io.containerd.runc.v2\n"
Sep 13 00:50:06.241249 env[1734]: time="2025-09-13T00:50:06.241214602Z" level=info msg="CreateContainer within sandbox \"a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:50:06.277702 env[1734]: time="2025-09-13T00:50:06.277633954Z" level=info msg="CreateContainer within sandbox \"a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"40990f94af94525c898f38bf5d0e9314e3ea86124ba9e19bc7ef58e2cc347b74\""
Sep 13 00:50:06.278595 env[1734]: time="2025-09-13T00:50:06.278492994Z" level=info msg="StartContainer for \"40990f94af94525c898f38bf5d0e9314e3ea86124ba9e19bc7ef58e2cc347b74\""
Sep 13 00:50:06.303146 systemd[1]: Started cri-containerd-40990f94af94525c898f38bf5d0e9314e3ea86124ba9e19bc7ef58e2cc347b74.scope.
Sep 13 00:50:06.320248 kubelet[2576]: W0913 00:50:06.319752 2576 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cd4502f_c463_4fdd_88e0_fb6eef73bd04.slice/cri-containerd-884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15.scope WatchSource:0}: container "884fefd21810c4dd54c69c352b66aaa3f0d589611bb4e3b28fc31cd71d915b15" in namespace "k8s.io": not found
Sep 13 00:50:06.347926 env[1734]: time="2025-09-13T00:50:06.347878095Z" level=info msg="StartContainer for \"40990f94af94525c898f38bf5d0e9314e3ea86124ba9e19bc7ef58e2cc347b74\" returns successfully"
Sep 13 00:50:06.360305 systemd[1]: cri-containerd-40990f94af94525c898f38bf5d0e9314e3ea86124ba9e19bc7ef58e2cc347b74.scope: Deactivated successfully.
Sep 13 00:50:06.406144 env[1734]: time="2025-09-13T00:50:06.406092829Z" level=info msg="shim disconnected" id=40990f94af94525c898f38bf5d0e9314e3ea86124ba9e19bc7ef58e2cc347b74
Sep 13 00:50:06.406144 env[1734]: time="2025-09-13T00:50:06.406139998Z" level=warning msg="cleaning up after shim disconnected" id=40990f94af94525c898f38bf5d0e9314e3ea86124ba9e19bc7ef58e2cc347b74 namespace=k8s.io
Sep 13 00:50:06.406144 env[1734]: time="2025-09-13T00:50:06.406149361Z" level=info msg="cleaning up dead shim"
Sep 13 00:50:06.414748 env[1734]: time="2025-09-13T00:50:06.414686144Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4867 runtime=io.containerd.runc.v2\n"
Sep 13 00:50:06.772046 kubelet[2576]: I0913 00:50:06.771978 2576 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd4502f-c463-4fdd-88e0-fb6eef73bd04" path="/var/lib/kubelet/pods/8cd4502f-c463-4fdd-88e0-fb6eef73bd04/volumes"
Sep 13 00:50:07.245723 env[1734]: time="2025-09-13T00:50:07.245458116Z" level=info msg="CreateContainer within sandbox \"a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:50:07.268300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount570534573.mount: Deactivated successfully.
Sep 13 00:50:07.283772 env[1734]: time="2025-09-13T00:50:07.283712750Z" level=info msg="CreateContainer within sandbox \"a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cc6abb51844a06afbb5f4dd111c442051f8fcb985c37931b0765f15f4425d8a8\""
Sep 13 00:50:07.284545 env[1734]: time="2025-09-13T00:50:07.284511979Z" level=info msg="StartContainer for \"cc6abb51844a06afbb5f4dd111c442051f8fcb985c37931b0765f15f4425d8a8\""
Sep 13 00:50:07.302399 systemd[1]: Started cri-containerd-cc6abb51844a06afbb5f4dd111c442051f8fcb985c37931b0765f15f4425d8a8.scope.
Sep 13 00:50:07.340387 env[1734]: time="2025-09-13T00:50:07.340341660Z" level=info msg="StartContainer for \"cc6abb51844a06afbb5f4dd111c442051f8fcb985c37931b0765f15f4425d8a8\" returns successfully"
Sep 13 00:50:07.348731 systemd[1]: cri-containerd-cc6abb51844a06afbb5f4dd111c442051f8fcb985c37931b0765f15f4425d8a8.scope: Deactivated successfully.
Sep 13 00:50:07.398914 env[1734]: time="2025-09-13T00:50:07.396941016Z" level=info msg="shim disconnected" id=cc6abb51844a06afbb5f4dd111c442051f8fcb985c37931b0765f15f4425d8a8
Sep 13 00:50:07.398914 env[1734]: time="2025-09-13T00:50:07.397032212Z" level=warning msg="cleaning up after shim disconnected" id=cc6abb51844a06afbb5f4dd111c442051f8fcb985c37931b0765f15f4425d8a8 namespace=k8s.io
Sep 13 00:50:07.398914 env[1734]: time="2025-09-13T00:50:07.397047119Z" level=info msg="cleaning up dead shim"
Sep 13 00:50:07.420782 env[1734]: time="2025-09-13T00:50:07.420724716Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4925 runtime=io.containerd.runc.v2\n"
Sep 13 00:50:07.537827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc6abb51844a06afbb5f4dd111c442051f8fcb985c37931b0765f15f4425d8a8-rootfs.mount: Deactivated successfully.
Sep 13 00:50:08.249433 env[1734]: time="2025-09-13T00:50:08.249389010Z" level=info msg="CreateContainer within sandbox \"a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:50:08.270783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482199151.mount: Deactivated successfully.
Sep 13 00:50:08.281843 env[1734]: time="2025-09-13T00:50:08.281759981Z" level=info msg="CreateContainer within sandbox \"a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1097f445b69c9a1013021a6424f8d522174d6911d5ce366285839d37dda4897e\""
Sep 13 00:50:08.282893 env[1734]: time="2025-09-13T00:50:08.282832889Z" level=info msg="StartContainer for \"1097f445b69c9a1013021a6424f8d522174d6911d5ce366285839d37dda4897e\""
Sep 13 00:50:08.312491 systemd[1]: Started cri-containerd-1097f445b69c9a1013021a6424f8d522174d6911d5ce366285839d37dda4897e.scope.
Sep 13 00:50:08.347482 systemd[1]: cri-containerd-1097f445b69c9a1013021a6424f8d522174d6911d5ce366285839d37dda4897e.scope: Deactivated successfully.
Sep 13 00:50:08.351392 env[1734]: time="2025-09-13T00:50:08.351344564Z" level=info msg="StartContainer for \"1097f445b69c9a1013021a6424f8d522174d6911d5ce366285839d37dda4897e\" returns successfully"
Sep 13 00:50:08.381772 env[1734]: time="2025-09-13T00:50:08.381717472Z" level=info msg="shim disconnected" id=1097f445b69c9a1013021a6424f8d522174d6911d5ce366285839d37dda4897e
Sep 13 00:50:08.381772 env[1734]: time="2025-09-13T00:50:08.381771464Z" level=warning msg="cleaning up after shim disconnected" id=1097f445b69c9a1013021a6424f8d522174d6911d5ce366285839d37dda4897e namespace=k8s.io
Sep 13 00:50:08.382000 env[1734]: time="2025-09-13T00:50:08.381781810Z" level=info msg="cleaning up dead shim"
Sep 13 00:50:08.393478 env[1734]: time="2025-09-13T00:50:08.393291399Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4980 runtime=io.containerd.runc.v2\n"
Sep 13 00:50:08.537776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1097f445b69c9a1013021a6424f8d522174d6911d5ce366285839d37dda4897e-rootfs.mount: Deactivated successfully.
Sep 13 00:50:08.890286 kubelet[2576]: E0913 00:50:08.890253 2576 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:50:09.256930 env[1734]: time="2025-09-13T00:50:09.256789188Z" level=info msg="CreateContainer within sandbox \"a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:50:09.297250 env[1734]: time="2025-09-13T00:50:09.297202022Z" level=info msg="CreateContainer within sandbox \"a5b51f48ea2b4126bac9f67438d00779559f60b23d142aa712f486123f20bdce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"92e2318f2cfb3cad15546e2aa56f4b5b6810249ba5686df4e5b4419cc6e0bcfd\""
Sep 13 00:50:09.297872 env[1734]: time="2025-09-13T00:50:09.297843116Z" level=info msg="StartContainer for \"92e2318f2cfb3cad15546e2aa56f4b5b6810249ba5686df4e5b4419cc6e0bcfd\""
Sep 13 00:50:09.330887 systemd[1]: Started cri-containerd-92e2318f2cfb3cad15546e2aa56f4b5b6810249ba5686df4e5b4419cc6e0bcfd.scope.
Sep 13 00:50:09.367525 env[1734]: time="2025-09-13T00:50:09.367478395Z" level=info msg="StartContainer for \"92e2318f2cfb3cad15546e2aa56f4b5b6810249ba5686df4e5b4419cc6e0bcfd\" returns successfully"
Sep 13 00:50:09.436695 kubelet[2576]: W0913 00:50:09.436641 2576 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5caeae74_27d2_40ef_b36c_8c9afde4d089.slice/cri-containerd-6e8efb9f5d149c948f47a752c910fa64a26bf8c1cbf602a8f9926f0894cd6afd.scope WatchSource:0}: task 6e8efb9f5d149c948f47a752c910fa64a26bf8c1cbf602a8f9926f0894cd6afd not found: not found
Sep 13 00:50:10.199600 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 00:50:11.603804 systemd[1]: run-containerd-runc-k8s.io-92e2318f2cfb3cad15546e2aa56f4b5b6810249ba5686df4e5b4419cc6e0bcfd-runc.iT5F9e.mount: Deactivated successfully.
Sep 13 00:50:12.547614 kubelet[2576]: W0913 00:50:12.547569 2576 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5caeae74_27d2_40ef_b36c_8c9afde4d089.slice/cri-containerd-40990f94af94525c898f38bf5d0e9314e3ea86124ba9e19bc7ef58e2cc347b74.scope WatchSource:0}: task 40990f94af94525c898f38bf5d0e9314e3ea86124ba9e19bc7ef58e2cc347b74 not found: not found
Sep 13 00:50:13.222034 (udev-worker)[5088]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:50:13.223366 (udev-worker)[5550]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:50:13.229092 systemd-networkd[1469]: lxc_health: Link UP
Sep 13 00:50:13.237892 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:50:13.237376 systemd-networkd[1469]: lxc_health: Gained carrier
Sep 13 00:50:13.631331 kubelet[2576]: I0913 00:50:13.631247 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6vmxj" podStartSLOduration=8.631226698 podStartE2EDuration="8.631226698s" podCreationTimestamp="2025-09-13 00:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:50:10.279976498 +0000 UTC m=+101.720941793" watchObservedRunningTime="2025-09-13 00:50:13.631226698 +0000 UTC m=+105.072191993"
Sep 13 00:50:13.811345 systemd[1]: run-containerd-runc-k8s.io-92e2318f2cfb3cad15546e2aa56f4b5b6810249ba5686df4e5b4419cc6e0bcfd-runc.2HomMg.mount: Deactivated successfully.
Sep 13 00:50:13.913756 kubelet[2576]: E0913 00:50:13.912115 2576 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:50082->127.0.0.1:40337: read tcp 127.0.0.1:50082->127.0.0.1:40337: read: connection reset by peer
Sep 13 00:50:14.686913 systemd-networkd[1469]: lxc_health: Gained IPv6LL
Sep 13 00:50:15.659497 kubelet[2576]: W0913 00:50:15.659437 2576 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5caeae74_27d2_40ef_b36c_8c9afde4d089.slice/cri-containerd-cc6abb51844a06afbb5f4dd111c442051f8fcb985c37931b0765f15f4425d8a8.scope WatchSource:0}: task cc6abb51844a06afbb5f4dd111c442051f8fcb985c37931b0765f15f4425d8a8 not found: not found
Sep 13 00:50:16.871543 systemd[1]: run-containerd-runc-k8s.io-92e2318f2cfb3cad15546e2aa56f4b5b6810249ba5686df4e5b4419cc6e0bcfd-runc.ixvJgj.mount: Deactivated successfully.
Sep 13 00:50:16.964761 kubelet[2576]: E0913 00:50:16.964593 2576 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50088->127.0.0.1:40337: write tcp 127.0.0.1:50088->127.0.0.1:40337: write: connection reset by peer
Sep 13 00:50:18.771719 kubelet[2576]: W0913 00:50:18.771672 2576 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5caeae74_27d2_40ef_b36c_8c9afde4d089.slice/cri-containerd-1097f445b69c9a1013021a6424f8d522174d6911d5ce366285839d37dda4897e.scope WatchSource:0}: task 1097f445b69c9a1013021a6424f8d522174d6911d5ce366285839d37dda4897e not found: not found
Sep 13 00:50:21.371281 systemd[1]: run-containerd-runc-k8s.io-92e2318f2cfb3cad15546e2aa56f4b5b6810249ba5686df4e5b4419cc6e0bcfd-runc.aa3smU.mount: Deactivated successfully.
Sep 13 00:50:23.601011 sshd[4582]: pam_unix(sshd:session): session closed for user core
Sep 13 00:50:23.604312 systemd[1]: sshd@24-172.31.24.139:22-147.75.109.163:53460.service: Deactivated successfully.
Sep 13 00:50:23.605161 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:50:23.606082 systemd-logind[1729]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:50:23.606945 systemd-logind[1729]: Removed session 25.
Sep 13 00:50:28.786707 env[1734]: time="2025-09-13T00:50:28.786661159Z" level=info msg="StopPodSandbox for \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\""
Sep 13 00:50:28.787422 env[1734]: time="2025-09-13T00:50:28.787360623Z" level=info msg="TearDown network for sandbox \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\" successfully"
Sep 13 00:50:28.787580 env[1734]: time="2025-09-13T00:50:28.787536615Z" level=info msg="StopPodSandbox for \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\" returns successfully"
Sep 13 00:50:28.788868 env[1734]: time="2025-09-13T00:50:28.788830959Z" level=info msg="RemovePodSandbox for \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\""
Sep 13 00:50:28.788998 env[1734]: time="2025-09-13T00:50:28.788897967Z" level=info msg="Forcibly stopping sandbox \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\""
Sep 13 00:50:28.789063 env[1734]: time="2025-09-13T00:50:28.789018388Z" level=info msg="TearDown network for sandbox \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\" successfully"
Sep 13 00:50:28.801033 env[1734]: time="2025-09-13T00:50:28.800944710Z" level=info msg="RemovePodSandbox \"94d5e618c7e2e064bd657ffc057bd4353fe95abbf2dcb297786f374564b50d83\" returns successfully"
Sep 13 00:50:28.801716 env[1734]: time="2025-09-13T00:50:28.801679208Z" level=info msg="StopPodSandbox for \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\""
Sep 13 00:50:28.801835 env[1734]: time="2025-09-13T00:50:28.801781778Z" level=info msg="TearDown network for sandbox \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\" successfully"
Sep 13 00:50:28.801835 env[1734]: time="2025-09-13T00:50:28.801827427Z" level=info msg="StopPodSandbox for \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\" returns successfully"
Sep 13 00:50:28.802266 env[1734]: time="2025-09-13T00:50:28.802223591Z" level=info msg="RemovePodSandbox for \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\""
Sep 13 00:50:28.802875 env[1734]: time="2025-09-13T00:50:28.802273446Z" level=info msg="Forcibly stopping sandbox \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\""
Sep 13 00:50:28.802875 env[1734]: time="2025-09-13T00:50:28.802360003Z" level=info msg="TearDown network for sandbox \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\" successfully"
Sep 13 00:50:28.807803 env[1734]: time="2025-09-13T00:50:28.807744216Z" level=info msg="RemovePodSandbox \"2850f6a0a45b84c3d3f5565fbaa3cdf268b7c31d8da29e678442eb5bd6bbc034\" returns successfully"
Sep 13 00:50:28.808408 env[1734]: time="2025-09-13T00:50:28.808374021Z" level=info msg="StopPodSandbox for \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\""
Sep 13 00:50:28.808509 env[1734]: time="2025-09-13T00:50:28.808476633Z" level=info msg="TearDown network for sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" successfully"
Sep 13 00:50:28.808573 env[1734]: time="2025-09-13T00:50:28.808508991Z" level=info msg="StopPodSandbox for \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" returns successfully"
Sep 13 00:50:28.808831 env[1734]: time="2025-09-13T00:50:28.808801969Z" level=info msg="RemovePodSandbox for \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\""
Sep 13 00:50:28.808913 env[1734]: time="2025-09-13T00:50:28.808833033Z" level=info msg="Forcibly stopping sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\""
Sep 13 00:50:28.808913 env[1734]: time="2025-09-13T00:50:28.808896132Z" level=info msg="TearDown network for sandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" successfully"
Sep 13 00:50:28.816455 env[1734]: time="2025-09-13T00:50:28.816408838Z" level=info msg="RemovePodSandbox \"4fdf7d9ec74bd707ac610da1cce304619188d5ce8c7e6f5bbf0400be76ab78c1\" returns successfully"
Sep 13 00:50:37.119275 systemd[1]: cri-containerd-3de6c817b926f913d7fc054075572826dc24d0a41a4314b80ee057f0c8a6ba6f.scope: Deactivated successfully.
Sep 13 00:50:37.119626 systemd[1]: cri-containerd-3de6c817b926f913d7fc054075572826dc24d0a41a4314b80ee057f0c8a6ba6f.scope: Consumed 3.892s CPU time.
Sep 13 00:50:37.149541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3de6c817b926f913d7fc054075572826dc24d0a41a4314b80ee057f0c8a6ba6f-rootfs.mount: Deactivated successfully.
Sep 13 00:50:37.181855 env[1734]: time="2025-09-13T00:50:37.181808535Z" level=info msg="shim disconnected" id=3de6c817b926f913d7fc054075572826dc24d0a41a4314b80ee057f0c8a6ba6f
Sep 13 00:50:37.181855 env[1734]: time="2025-09-13T00:50:37.181852946Z" level=warning msg="cleaning up after shim disconnected" id=3de6c817b926f913d7fc054075572826dc24d0a41a4314b80ee057f0c8a6ba6f namespace=k8s.io
Sep 13 00:50:37.181855 env[1734]: time="2025-09-13T00:50:37.181862639Z" level=info msg="cleaning up dead shim"
Sep 13 00:50:37.190417 env[1734]: time="2025-09-13T00:50:37.190367754Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5710 runtime=io.containerd.runc.v2\n"
Sep 13 00:50:37.318828 kubelet[2576]: I0913 00:50:37.318778 2576 scope.go:117] "RemoveContainer" containerID="3de6c817b926f913d7fc054075572826dc24d0a41a4314b80ee057f0c8a6ba6f"
Sep 13 00:50:37.323178 env[1734]: time="2025-09-13T00:50:37.323135186Z" level=info msg="CreateContainer within sandbox \"3033e2ed8bea0d0fd0186cd0e4e9342165c6ef5272e528049a808ea36c4fcd58\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 13 00:50:37.346506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2372667367.mount: Deactivated successfully.
Sep 13 00:50:37.349290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2793934397.mount: Deactivated successfully.
Sep 13 00:50:37.359061 env[1734]: time="2025-09-13T00:50:37.358975458Z" level=info msg="CreateContainer within sandbox \"3033e2ed8bea0d0fd0186cd0e4e9342165c6ef5272e528049a808ea36c4fcd58\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"233d74fb69a1d32a61710fca18b3b3b2c5e0c75380337a53898cdfec67fcb295\""
Sep 13 00:50:37.359496 env[1734]: time="2025-09-13T00:50:37.359452522Z" level=info msg="StartContainer for \"233d74fb69a1d32a61710fca18b3b3b2c5e0c75380337a53898cdfec67fcb295\""
Sep 13 00:50:37.380045 systemd[1]: Started cri-containerd-233d74fb69a1d32a61710fca18b3b3b2c5e0c75380337a53898cdfec67fcb295.scope.
Sep 13 00:50:37.432474 env[1734]: time="2025-09-13T00:50:37.432422447Z" level=info msg="StartContainer for \"233d74fb69a1d32a61710fca18b3b3b2c5e0c75380337a53898cdfec67fcb295\" returns successfully"
Sep 13 00:50:41.371751 kubelet[2576]: E0913 00:50:41.371699 2576 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-139?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 13 00:50:43.221521 systemd[1]: cri-containerd-d70c186af55eeef80f5aa1b95320491295f4bf5e1e43779b8d8bc0b78f8a810f.scope: Deactivated successfully.
Sep 13 00:50:43.221781 systemd[1]: cri-containerd-d70c186af55eeef80f5aa1b95320491295f4bf5e1e43779b8d8bc0b78f8a810f.scope: Consumed 1.385s CPU time.
Sep 13 00:50:43.266790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d70c186af55eeef80f5aa1b95320491295f4bf5e1e43779b8d8bc0b78f8a810f-rootfs.mount: Deactivated successfully.
Sep 13 00:50:43.278027 env[1734]: time="2025-09-13T00:50:43.277958435Z" level=info msg="shim disconnected" id=d70c186af55eeef80f5aa1b95320491295f4bf5e1e43779b8d8bc0b78f8a810f
Sep 13 00:50:43.278582 env[1734]: time="2025-09-13T00:50:43.278510248Z" level=warning msg="cleaning up after shim disconnected" id=d70c186af55eeef80f5aa1b95320491295f4bf5e1e43779b8d8bc0b78f8a810f namespace=k8s.io
Sep 13 00:50:43.278582 env[1734]: time="2025-09-13T00:50:43.278537362Z" level=info msg="cleaning up dead shim"
Sep 13 00:50:43.289236 env[1734]: time="2025-09-13T00:50:43.289167827Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5768 runtime=io.containerd.runc.v2\n"
Sep 13 00:50:43.334166 kubelet[2576]: I0913 00:50:43.334104 2576 scope.go:117] "RemoveContainer" containerID="d70c186af55eeef80f5aa1b95320491295f4bf5e1e43779b8d8bc0b78f8a810f"
Sep 13 00:50:43.336921 env[1734]: time="2025-09-13T00:50:43.336859331Z" level=info msg="CreateContainer within sandbox \"d1d512abd1770eb32561b7d732ae4f97a272e3b6a6b5c6e34b7ca670b9b6c493\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 13 00:50:43.362951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530777114.mount: Deactivated successfully.
Sep 13 00:50:43.373359 env[1734]: time="2025-09-13T00:50:43.373302178Z" level=info msg="CreateContainer within sandbox \"d1d512abd1770eb32561b7d732ae4f97a272e3b6a6b5c6e34b7ca670b9b6c493\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0eee7cf41d9557ab6a707d27e1989d40199760ca3a0e118b94ce06c7cbba1063\""
Sep 13 00:50:43.373955 env[1734]: time="2025-09-13T00:50:43.373920471Z" level=info msg="StartContainer for \"0eee7cf41d9557ab6a707d27e1989d40199760ca3a0e118b94ce06c7cbba1063\""
Sep 13 00:50:43.397900 systemd[1]: Started cri-containerd-0eee7cf41d9557ab6a707d27e1989d40199760ca3a0e118b94ce06c7cbba1063.scope.
Sep 13 00:50:43.456134 env[1734]: time="2025-09-13T00:50:43.456059230Z" level=info msg="StartContainer for \"0eee7cf41d9557ab6a707d27e1989d40199760ca3a0e118b94ce06c7cbba1063\" returns successfully"
Sep 13 00:50:51.372779 kubelet[2576]: E0913 00:50:51.372543 2576 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-139?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"