Sep 13 00:47:59.133475 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025 Sep 13 00:47:59.133497 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:47:59.133507 kernel: BIOS-provided physical RAM map: Sep 13 00:47:59.133513 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 13 00:47:59.133518 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 13 00:47:59.133524 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 13 00:47:59.133530 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 13 00:47:59.133536 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 13 00:47:59.133542 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 13 00:47:59.133549 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 13 00:47:59.133555 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 13 00:47:59.133560 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Sep 13 00:47:59.133566 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 13 00:47:59.133571 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 13 00:47:59.133578 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 13 00:47:59.133586 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 13 00:47:59.133592 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 13 
00:47:59.133598 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 13 00:47:59.133606 kernel: NX (Execute Disable) protection: active Sep 13 00:47:59.133612 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Sep 13 00:47:59.133618 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Sep 13 00:47:59.133624 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Sep 13 00:47:59.133630 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Sep 13 00:47:59.133636 kernel: extended physical RAM map: Sep 13 00:47:59.133642 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 13 00:47:59.133649 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 13 00:47:59.133655 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 13 00:47:59.133661 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 13 00:47:59.133667 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 13 00:47:59.133673 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 13 00:47:59.133679 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 13 00:47:59.133685 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Sep 13 00:47:59.133691 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Sep 13 00:47:59.133697 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable Sep 13 00:47:59.133702 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Sep 13 00:47:59.133708 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Sep 13 00:47:59.133716 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Sep 13 00:47:59.133721 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 13 00:47:59.133727 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 13 00:47:59.133733 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 13 00:47:59.133742 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 13 00:47:59.133749 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 13 00:47:59.133755 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 13 00:47:59.133763 kernel: efi: EFI v2.70 by EDK II Sep 13 00:47:59.133769 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Sep 13 00:47:59.133776 kernel: random: crng init done Sep 13 00:47:59.133782 kernel: SMBIOS 2.8 present. Sep 13 00:47:59.133789 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Sep 13 00:47:59.133795 kernel: Hypervisor detected: KVM Sep 13 00:47:59.133801 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 13 00:47:59.133808 kernel: kvm-clock: cpu 0, msr 6819f001, primary cpu clock Sep 13 00:47:59.133814 kernel: kvm-clock: using sched offset of 5300495822 cycles Sep 13 00:47:59.133825 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 13 00:47:59.133832 kernel: tsc: Detected 2794.748 MHz processor Sep 13 00:47:59.133839 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 00:47:59.133846 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 00:47:59.133852 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 13 00:47:59.133859 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 00:47:59.133866 kernel: Using GB pages for direct mapping Sep 13 00:47:59.133872 kernel: Secure boot disabled Sep 13 00:47:59.133879 kernel: ACPI: Early table checksum verification disabled Sep 13 00:47:59.133888 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 13 00:47:59.133896 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 13 00:47:59.133904 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:47:59.133919 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:47:59.133928 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 13 00:47:59.133934 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:47:59.133941 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:47:59.133951 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:47:59.133957 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:47:59.133965 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 13 00:47:59.133972 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 13 00:47:59.133979 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 13 00:47:59.133985 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 13 00:47:59.133992 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 13 00:47:59.133998 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 13 00:47:59.134005 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 13 00:47:59.134011 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 13 00:47:59.134018 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 13 00:47:59.134026 kernel: No NUMA configuration found Sep 13 00:47:59.134032 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 13 00:47:59.134039 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 13 
00:47:59.134045 kernel: Zone ranges: Sep 13 00:47:59.134052 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:47:59.134058 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 13 00:47:59.134065 kernel: Normal empty Sep 13 00:47:59.134071 kernel: Movable zone start for each node Sep 13 00:47:59.134078 kernel: Early memory node ranges Sep 13 00:47:59.134086 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 13 00:47:59.134092 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 13 00:47:59.134099 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 13 00:47:59.134105 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 13 00:47:59.134112 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 13 00:47:59.134118 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 13 00:47:59.134125 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 13 00:47:59.134131 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:47:59.134138 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 13 00:47:59.134145 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 13 00:47:59.134153 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:47:59.134159 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 13 00:47:59.134166 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 13 00:47:59.134172 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 13 00:47:59.134179 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 13 00:47:59.134186 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 13 00:47:59.134192 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 13 00:47:59.134199 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 13 00:47:59.134205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 13 00:47:59.134213 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:47:59.134219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 13 00:47:59.134226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 13 00:47:59.134236 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:47:59.134244 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 13 00:47:59.134251 kernel: TSC deadline timer available Sep 13 00:47:59.134257 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 13 00:47:59.134264 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 13 00:47:59.134271 kernel: kvm-guest: setup PV sched yield Sep 13 00:47:59.134278 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 13 00:47:59.134285 kernel: Booting paravirtualized kernel on KVM Sep 13 00:47:59.134297 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:47:59.134305 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Sep 13 00:47:59.134312 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Sep 13 00:47:59.134319 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Sep 13 00:47:59.134326 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 13 00:47:59.134333 kernel: kvm-guest: setup async PF for cpu 0 Sep 13 00:47:59.134350 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Sep 13 00:47:59.134358 kernel: kvm-guest: PV spinlocks enabled Sep 13 00:47:59.134364 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 13 00:47:59.134371 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Sep 13 00:47:59.134380 kernel: Policy zone: DMA32 Sep 13 00:47:59.134388 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:47:59.134395 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:47:59.134402 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:47:59.134410 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:47:59.134417 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:47:59.134425 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 169308K reserved, 0K cma-reserved) Sep 13 00:47:59.134432 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 13 00:47:59.134438 kernel: ftrace: allocating 34614 entries in 136 pages Sep 13 00:47:59.134445 kernel: ftrace: allocated 136 pages with 2 groups Sep 13 00:47:59.134452 kernel: rcu: Hierarchical RCU implementation. Sep 13 00:47:59.134460 kernel: rcu: RCU event tracing is enabled. Sep 13 00:47:59.134467 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 13 00:47:59.134475 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:47:59.134482 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:47:59.134489 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 13 00:47:59.134496 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 13 00:47:59.134503 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 13 00:47:59.134510 kernel: Console: colour dummy device 80x25 Sep 13 00:47:59.134517 kernel: printk: console [ttyS0] enabled Sep 13 00:47:59.134524 kernel: ACPI: Core revision 20210730 Sep 13 00:47:59.134531 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 13 00:47:59.134539 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:47:59.134546 kernel: x2apic enabled Sep 13 00:47:59.134553 kernel: Switched APIC routing to physical x2apic. Sep 13 00:47:59.134560 kernel: kvm-guest: setup PV IPIs Sep 13 00:47:59.134567 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 13 00:47:59.134574 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 13 00:47:59.134581 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 13 00:47:59.134588 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 13 00:47:59.134597 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 13 00:47:59.134606 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 13 00:47:59.134613 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:47:59.134620 kernel: Spectre V2 : Mitigation: Retpolines Sep 13 00:47:59.134627 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 00:47:59.134634 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 13 00:47:59.134641 kernel: active return thunk: retbleed_return_thunk Sep 13 00:47:59.134648 kernel: RETBleed: Mitigation: untrained return thunk Sep 13 00:47:59.134657 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 13 00:47:59.134664 kernel: Speculative Store Bypass: Mitigation: Speculative Store 
Bypass disabled via prctl and seccomp Sep 13 00:47:59.134673 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:47:59.134680 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:47:59.134687 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:47:59.134694 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:47:59.134701 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 13 00:47:59.134708 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:47:59.134714 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:47:59.134721 kernel: LSM: Security Framework initializing Sep 13 00:47:59.134728 kernel: SELinux: Initializing. Sep 13 00:47:59.134736 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:47:59.134743 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:47:59.134750 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 13 00:47:59.134757 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 13 00:47:59.134764 kernel: ... version: 0 Sep 13 00:47:59.134771 kernel: ... bit width: 48 Sep 13 00:47:59.134778 kernel: ... generic registers: 6 Sep 13 00:47:59.134785 kernel: ... value mask: 0000ffffffffffff Sep 13 00:47:59.134792 kernel: ... max period: 00007fffffffffff Sep 13 00:47:59.134800 kernel: ... fixed-purpose events: 0 Sep 13 00:47:59.134807 kernel: ... event mask: 000000000000003f Sep 13 00:47:59.134814 kernel: signal: max sigframe size: 1776 Sep 13 00:47:59.134820 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:47:59.134827 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:47:59.134834 kernel: x86: Booting SMP configuration: Sep 13 00:47:59.134841 kernel: .... 
node #0, CPUs: #1 Sep 13 00:47:59.134848 kernel: kvm-clock: cpu 1, msr 6819f041, secondary cpu clock Sep 13 00:47:59.134855 kernel: kvm-guest: setup async PF for cpu 1 Sep 13 00:47:59.134863 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Sep 13 00:47:59.134869 kernel: #2 Sep 13 00:47:59.134877 kernel: kvm-clock: cpu 2, msr 6819f081, secondary cpu clock Sep 13 00:47:59.134883 kernel: kvm-guest: setup async PF for cpu 2 Sep 13 00:47:59.134890 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Sep 13 00:47:59.134897 kernel: #3 Sep 13 00:47:59.134904 kernel: kvm-clock: cpu 3, msr 6819f0c1, secondary cpu clock Sep 13 00:47:59.134918 kernel: kvm-guest: setup async PF for cpu 3 Sep 13 00:47:59.134925 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Sep 13 00:47:59.134935 kernel: smp: Brought up 1 node, 4 CPUs Sep 13 00:47:59.134942 kernel: smpboot: Max logical packages: 1 Sep 13 00:47:59.134949 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 13 00:47:59.134957 kernel: devtmpfs: initialized Sep 13 00:47:59.134964 kernel: x86/mm: Memory block size: 128MB Sep 13 00:47:59.134971 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 13 00:47:59.134978 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 13 00:47:59.134985 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 13 00:47:59.134992 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 13 00:47:59.135000 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 13 00:47:59.135007 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:47:59.135014 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 13 00:47:59.135021 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:47:59.135028 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family Sep 13 00:47:59.135035 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:47:59.135042 kernel: audit: type=2000 audit(1757724478.709:1): state=initialized audit_enabled=0 res=1 Sep 13 00:47:59.135048 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:47:59.135055 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:47:59.135063 kernel: cpuidle: using governor menu Sep 13 00:47:59.135070 kernel: ACPI: bus type PCI registered Sep 13 00:47:59.135077 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:47:59.135084 kernel: dca service started, version 1.12.1 Sep 13 00:47:59.135091 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 13 00:47:59.135098 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Sep 13 00:47:59.135105 kernel: PCI: Using configuration type 1 for base access Sep 13 00:47:59.135112 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 13 00:47:59.135121 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:47:59.135130 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:47:59.135136 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:47:59.135144 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:47:59.135150 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:47:59.135157 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 13 00:47:59.135164 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 13 00:47:59.135171 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 13 00:47:59.135178 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:47:59.135185 kernel: ACPI: Interpreter enabled Sep 13 00:47:59.135193 kernel: ACPI: PM: (supports S0 S3 S5) Sep 13 00:47:59.135200 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:47:59.135207 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:47:59.135214 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 13 00:47:59.135221 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:47:59.135438 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:47:59.135519 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 13 00:47:59.135592 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 13 00:47:59.135605 kernel: PCI host bridge to bus 0000:00 Sep 13 00:47:59.135697 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:47:59.135766 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 00:47:59.135833 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 00:47:59.135899 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 13 00:47:59.135974 kernel: pci_bus 0000:00: root bus resource [mem 
0xc0000000-0xfebfffff window] Sep 13 00:47:59.136039 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Sep 13 00:47:59.136110 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:47:59.136217 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 13 00:47:59.136311 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 13 00:47:59.136403 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 13 00:47:59.136479 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 13 00:47:59.136552 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 13 00:47:59.136630 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 13 00:47:59.136702 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:47:59.136799 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 13 00:47:59.136884 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 13 00:47:59.136976 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 13 00:47:59.137053 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 13 00:47:59.137160 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 13 00:47:59.137256 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 13 00:47:59.137387 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 13 00:47:59.137468 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 13 00:47:59.137557 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 13 00:47:59.137633 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 13 00:47:59.137723 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 13 00:47:59.137803 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 13 00:47:59.137889 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 13 00:47:59.137995 
kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 13 00:47:59.138077 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 13 00:47:59.138163 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 13 00:47:59.138243 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 13 00:47:59.138323 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 13 00:47:59.138446 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 13 00:47:59.138524 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 13 00:47:59.138534 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 13 00:47:59.138541 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 13 00:47:59.138548 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:47:59.138555 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 13 00:47:59.138562 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 13 00:47:59.138569 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 13 00:47:59.138576 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 13 00:47:59.138585 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 13 00:47:59.138592 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 13 00:47:59.138599 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 13 00:47:59.138606 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 13 00:47:59.138613 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 13 00:47:59.138620 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 13 00:47:59.138627 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 13 00:47:59.138634 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 13 00:47:59.138640 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 13 00:47:59.138649 kernel: iommu: Default domain type: Translated Sep 13 
00:47:59.138656 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:47:59.138728 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 13 00:47:59.138800 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:47:59.138871 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 13 00:47:59.138880 kernel: vgaarb: loaded Sep 13 00:47:59.138888 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 00:47:59.138895 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 00:47:59.138904 kernel: PTP clock support registered Sep 13 00:47:59.138919 kernel: Registered efivars operations Sep 13 00:47:59.138926 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:47:59.138933 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:47:59.138941 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 13 00:47:59.138947 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 13 00:47:59.138954 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Sep 13 00:47:59.138961 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Sep 13 00:47:59.138968 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 13 00:47:59.138976 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 13 00:47:59.138983 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 13 00:47:59.138990 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 13 00:47:59.138997 kernel: clocksource: Switched to clocksource kvm-clock Sep 13 00:47:59.139004 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:47:59.139011 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:47:59.139018 kernel: pnp: PnP ACPI init Sep 13 00:47:59.139125 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 13 00:47:59.139139 kernel: pnp: PnP ACPI: found 6 devices Sep 13 00:47:59.139147 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:47:59.139154 kernel: NET: Registered PF_INET protocol family Sep 13 00:47:59.139161 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:47:59.139168 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 13 00:47:59.139175 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:47:59.139182 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:47:59.139189 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 13 00:47:59.139196 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 13 00:47:59.139204 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:47:59.139211 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:47:59.139218 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:47:59.139225 kernel: NET: Registered PF_XDP protocol family Sep 13 00:47:59.139309 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 13 00:47:59.139408 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 13 00:47:59.139479 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:47:59.139552 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:47:59.139631 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:47:59.139704 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 13 00:47:59.140649 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 13 00:47:59.140726 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Sep 13 00:47:59.140736 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:47:59.140743 kernel: Initialise system trusted keyrings Sep 13 00:47:59.140750 
kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:47:59.140757 kernel: Key type asymmetric registered
Sep 13 00:47:59.140764 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:47:59.140774 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:47:59.140781 kernel: io scheduler mq-deadline registered
Sep 13 00:47:59.140798 kernel: io scheduler kyber registered
Sep 13 00:47:59.140807 kernel: io scheduler bfq registered
Sep 13 00:47:59.140814 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:47:59.140822 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:47:59.140829 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:47:59.140836 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:47:59.140843 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:47:59.140852 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:47:59.140859 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:47:59.140867 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:47:59.140874 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:47:59.140969 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:47:59.140981 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:47:59.141047 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:47:59.141123 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:47:58 UTC (1757724478)
Sep 13 00:47:59.141242 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:47:59.141253 kernel: efifb: probing for efifb
Sep 13 00:47:59.141261 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 13 00:47:59.141268 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 13 00:47:59.141276 kernel: efifb: scrolling: redraw
Sep 13 00:47:59.141283 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:47:59.141290 kernel: Console: switching to colour frame buffer device 160x50
Sep 13 00:47:59.141297 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:47:59.141305 kernel: pstore: Registered efi as persistent store backend
Sep 13 00:47:59.141314 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:47:59.141321 kernel: Segment Routing with IPv6
Sep 13 00:47:59.141330 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:47:59.141339 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:47:59.141387 kernel: Key type dns_resolver registered
Sep 13 00:47:59.141396 kernel: IPI shorthand broadcast: enabled
Sep 13 00:47:59.141404 kernel: sched_clock: Marking stable (506329352, 125955974)->(700763932, -68478606)
Sep 13 00:47:59.141411 kernel: registered taskstats version 1
Sep 13 00:47:59.141418 kernel: Loading compiled-in X.509 certificates
Sep 13 00:47:59.141426 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:47:59.141433 kernel: Key type .fscrypt registered
Sep 13 00:47:59.141440 kernel: Key type fscrypt-provisioning registered
Sep 13 00:47:59.141448 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:47:59.141455 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:47:59.141465 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:47:59.141472 kernel: ima: No architecture policies found
Sep 13 00:47:59.141479 kernel: clk: Disabling unused clocks
Sep 13 00:47:59.141486 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:47:59.141493 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:47:59.141501 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:47:59.141508 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:47:59.141515 kernel: Run /init as init process
Sep 13 00:47:59.141522 kernel: with arguments:
Sep 13 00:47:59.141531 kernel: /init
Sep 13 00:47:59.141538 kernel: with environment:
Sep 13 00:47:59.141545 kernel: HOME=/
Sep 13 00:47:59.141552 kernel: TERM=linux
Sep 13 00:47:59.141559 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:47:59.141569 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:47:59.141578 systemd[1]: Detected virtualization kvm.
Sep 13 00:47:59.141586 systemd[1]: Detected architecture x86-64.
Sep 13 00:47:59.141595 systemd[1]: Running in initrd.
Sep 13 00:47:59.141602 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:47:59.141610 systemd[1]: Hostname set to <localhost>.
Sep 13 00:47:59.141618 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:47:59.141626 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:47:59.141633 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:47:59.141641 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:47:59.141648 systemd[1]: Reached target paths.target.
Sep 13 00:47:59.141657 systemd[1]: Reached target slices.target.
Sep 13 00:47:59.141665 systemd[1]: Reached target swap.target.
Sep 13 00:47:59.141672 systemd[1]: Reached target timers.target.
Sep 13 00:47:59.141680 systemd[1]: Listening on iscsid.socket.
Sep 13 00:47:59.141688 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:47:59.141696 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:47:59.141703 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:47:59.141711 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:47:59.141720 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:47:59.141728 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:47:59.141736 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:47:59.141743 systemd[1]: Reached target sockets.target.
Sep 13 00:47:59.141751 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:47:59.141759 systemd[1]: Finished network-cleanup.service.
Sep 13 00:47:59.141766 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:47:59.141774 systemd[1]: Starting systemd-journald.service...
Sep 13 00:47:59.141782 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:47:59.141791 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:47:59.141799 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:47:59.141807 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:47:59.141814 kernel: audit: type=1130 audit(1757724479.131:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.141822 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:47:59.141830 kernel: audit: type=1130 audit(1757724479.136:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.141838 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:47:59.141848 systemd-journald[198]: Journal started
Sep 13 00:47:59.141889 systemd-journald[198]: Runtime Journal (/run/log/journal/3ea64464c47145c196976f985825688e) is 6.0M, max 48.4M, 42.4M free.
Sep 13 00:47:59.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.150555 systemd[1]: Started systemd-journald.service.
Sep 13 00:47:59.150616 kernel: audit: type=1130 audit(1757724479.143:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.150636 kernel: audit: type=1130 audit(1757724479.147:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.147109 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:47:59.148647 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:47:59.151527 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:47:59.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.155362 kernel: audit: type=1130 audit(1757724479.151:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.157319 systemd-modules-load[199]: Inserted module 'overlay'
Sep 13 00:47:59.167082 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:47:59.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.168288 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:47:59.172329 kernel: audit: type=1130 audit(1757724479.167:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.177523 systemd-resolved[200]: Positive Trust Anchors:
Sep 13 00:47:59.178470 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:47:59.179859 dracut-cmdline[215]: dracut-dracut-053
Sep 13 00:47:59.180705 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:47:59.183877 systemd-resolved[200]: Defaulting to hostname 'linux'.
Sep 13 00:47:59.187509 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:47:59.194675 kernel: audit: type=1130 audit(1757724479.187:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.185819 systemd[1]: Started systemd-resolved.service.
Sep 13 00:47:59.195387 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:47:59.198796 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:47:59.203081 systemd-modules-load[199]: Inserted module 'br_netfilter'
Sep 13 00:47:59.203985 kernel: Bridge firewalling registered
Sep 13 00:47:59.222370 kernel: SCSI subsystem initialized
Sep 13 00:47:59.246387 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:47:59.246422 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:47:59.248285 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 00:47:59.250364 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:47:59.251037 systemd-modules-load[199]: Inserted module 'dm_multipath'
Sep 13 00:47:59.251764 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:47:59.256179 kernel: audit: type=1130 audit(1757724479.251:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.253026 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:47:59.262447 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:47:59.266527 kernel: audit: type=1130 audit(1757724479.262:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.276366 kernel: iscsi: registered transport (tcp)
Sep 13 00:47:59.297373 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:47:59.297429 kernel: QLogic iSCSI HBA Driver
Sep 13 00:47:59.328145 systemd[1]: Finished dracut-cmdline.service.
Sep 13 00:47:59.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.330521 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 00:47:59.382379 kernel: raid6: avx2x4 gen() 29554 MB/s
Sep 13 00:47:59.399375 kernel: raid6: avx2x4 xor() 7133 MB/s
Sep 13 00:47:59.416365 kernel: raid6: avx2x2 gen() 31729 MB/s
Sep 13 00:47:59.433372 kernel: raid6: avx2x2 xor() 18634 MB/s
Sep 13 00:47:59.450369 kernel: raid6: avx2x1 gen() 24796 MB/s
Sep 13 00:47:59.467376 kernel: raid6: avx2x1 xor() 15102 MB/s
Sep 13 00:47:59.484373 kernel: raid6: sse2x4 gen() 13924 MB/s
Sep 13 00:47:59.501382 kernel: raid6: sse2x4 xor() 7081 MB/s
Sep 13 00:47:59.518381 kernel: raid6: sse2x2 gen() 16176 MB/s
Sep 13 00:47:59.535389 kernel: raid6: sse2x2 xor() 9768 MB/s
Sep 13 00:47:59.552385 kernel: raid6: sse2x1 gen() 11755 MB/s
Sep 13 00:47:59.569719 kernel: raid6: sse2x1 xor() 7715 MB/s
Sep 13 00:47:59.569767 kernel: raid6: using algorithm avx2x2 gen() 31729 MB/s
Sep 13 00:47:59.569777 kernel: raid6: .... xor() 18634 MB/s, rmw enabled
Sep 13 00:47:59.570405 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:47:59.582374 kernel: xor: automatically using best checksumming function avx
Sep 13 00:47:59.682406 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Sep 13 00:47:59.691356 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 00:47:59.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.693000 audit: BPF prog-id=7 op=LOAD
Sep 13 00:47:59.693000 audit: BPF prog-id=8 op=LOAD
Sep 13 00:47:59.693809 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:47:59.706276 systemd-udevd[399]: Using default interface naming scheme 'v252'.
Sep 13 00:47:59.710262 systemd[1]: Started systemd-udevd.service.
Sep 13 00:47:59.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.713520 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 00:47:59.725082 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Sep 13 00:47:59.751822 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 00:47:59.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.753609 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:47:59.796466 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:47:59.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:59.829783 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:47:59.847069 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:47:59.847088 kernel: GPT:9289727 != 19775487
Sep 13 00:47:59.847105 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:47:59.847117 kernel: GPT:9289727 != 19775487
Sep 13 00:47:59.847128 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:47:59.847140 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:47:59.847153 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:47:59.850378 kernel: libata version 3.00 loaded.
Sep 13 00:47:59.858652 kernel: ahci 0000:00:1f.2: version 3.0
Sep 13 00:48:00.037363 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 13 00:48:00.037390 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 13 00:48:00.037500 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 13 00:48:00.037581 kernel: scsi host0: ahci
Sep 13 00:48:00.037683 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458)
Sep 13 00:48:00.037693 kernel: scsi host1: ahci
Sep 13 00:48:00.037837 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:48:00.037847 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:48:00.037857 kernel: scsi host2: ahci
Sep 13 00:48:00.037994 kernel: scsi host3: ahci
Sep 13 00:48:00.038088 kernel: scsi host4: ahci
Sep 13 00:48:00.038183 kernel: scsi host5: ahci
Sep 13 00:48:00.038271 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31
Sep 13 00:48:00.038281 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31
Sep 13 00:48:00.038289 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31
Sep 13 00:48:00.038299 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31
Sep 13 00:48:00.038308 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31
Sep 13 00:48:00.038321 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31
Sep 13 00:47:59.873975 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 13 00:48:00.010546 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 13 00:48:00.025625 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 13 00:48:00.030833 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 13 00:48:00.047937 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:48:00.051616 systemd[1]: Starting disk-uuid.service...
Sep 13 00:48:00.058551 disk-uuid[528]: Primary Header is updated.
Sep 13 00:48:00.058551 disk-uuid[528]: Secondary Entries is updated.
Sep 13 00:48:00.058551 disk-uuid[528]: Secondary Header is updated.
Sep 13 00:48:00.062367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:48:00.347107 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 13 00:48:00.347190 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 13 00:48:00.347202 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 13 00:48:00.348942 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 00:48:00.349373 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 00:48:00.350379 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 00:48:00.351376 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 13 00:48:00.352944 kernel: ata3.00: applying bridge limits
Sep 13 00:48:00.352962 kernel: ata3.00: configured for UDMA/100
Sep 13 00:48:00.353386 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 00:48:00.386605 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 13 00:48:00.404114 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:48:00.404137 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:48:01.073059 disk-uuid[529]: The operation has completed successfully.
Sep 13 00:48:01.074264 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:48:01.095055 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:48:01.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.095143 systemd[1]: Finished disk-uuid.service.
Sep 13 00:48:01.106109 systemd[1]: Starting verity-setup.service...
Sep 13 00:48:01.119365 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 13 00:48:01.138165 systemd[1]: Found device dev-mapper-usr.device.
Sep 13 00:48:01.140772 systemd[1]: Mounting sysusr-usr.mount...
Sep 13 00:48:01.143927 systemd[1]: Finished verity-setup.service.
Sep 13 00:48:01.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.209129 systemd[1]: Mounted sysusr-usr.mount.
Sep 13 00:48:01.210645 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 13 00:48:01.210735 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 13 00:48:01.212963 systemd[1]: Starting ignition-setup.service...
Sep 13 00:48:01.215139 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 13 00:48:01.221874 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:48:01.221906 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:48:01.221919 kernel: BTRFS info (device vda6): has skinny extents
Sep 13 00:48:01.231159 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:48:01.239216 systemd[1]: Finished ignition-setup.service.
Sep 13 00:48:01.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.241791 systemd[1]: Starting ignition-fetch-offline.service...
Sep 13 00:48:01.281470 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 13 00:48:01.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.284000 audit: BPF prog-id=9 op=LOAD
Sep 13 00:48:01.285291 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:48:01.308445 systemd-networkd[716]: lo: Link UP
Sep 13 00:48:01.308454 systemd-networkd[716]: lo: Gained carrier
Sep 13 00:48:01.309059 systemd-networkd[716]: Enumeration completed
Sep 13 00:48:01.309148 systemd[1]: Started systemd-networkd.service.
Sep 13 00:48:01.310192 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:48:01.311890 systemd-networkd[716]: eth0: Link UP
Sep 13 00:48:01.311895 systemd-networkd[716]: eth0: Gained carrier
Sep 13 00:48:01.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.314052 systemd[1]: Reached target network.target.
Sep 13 00:48:01.319897 systemd[1]: Starting iscsiuio.service...
Sep 13 00:48:01.330243 ignition[647]: Ignition 2.14.0
Sep 13 00:48:01.330254 ignition[647]: Stage: fetch-offline
Sep 13 00:48:01.330327 ignition[647]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:48:01.330336 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:48:01.330502 ignition[647]: parsed url from cmdline: ""
Sep 13 00:48:01.330508 ignition[647]: no config URL provided
Sep 13 00:48:01.330513 ignition[647]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:48:01.330521 ignition[647]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:48:01.330540 ignition[647]: op(1): [started] loading QEMU firmware config module
Sep 13 00:48:01.330544 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 00:48:01.342799 ignition[647]: op(1): [finished] loading QEMU firmware config module
Sep 13 00:48:01.342829 ignition[647]: QEMU firmware config was not found. Ignoring...
Sep 13 00:48:01.345328 ignition[647]: parsing config with SHA512: 361e38ade77ae5191a1771409d907834b14f21356865428418f275b2fdcf6f8f077e3faa1facf3d23d58e516c948fcdb0414b3661096c61316a0c984b8fee3f2
Sep 13 00:48:01.349580 systemd[1]: Started iscsiuio.service.
Sep 13 00:48:01.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.352266 systemd[1]: Starting iscsid.service...
Sep 13 00:48:01.352478 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:48:01.355811 iscsid[727]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 00:48:01.355811 iscsid[727]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Sep 13 00:48:01.355811 iscsid[727]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 13 00:48:01.355811 iscsid[727]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 13 00:48:01.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.368275 iscsid[727]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 00:48:01.368275 iscsid[727]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 13 00:48:01.359936 ignition[647]: fetch-offline: fetch-offline passed
Sep 13 00:48:01.357434 systemd[1]: Started iscsid.service.
Sep 13 00:48:01.360086 ignition[647]: Ignition finished successfully
Sep 13 00:48:01.359051 unknown[647]: fetched base config from "system"
Sep 13 00:48:01.359066 unknown[647]: fetched user config from "qemu"
Sep 13 00:48:01.359237 systemd[1]: Starting dracut-initqueue.service...
Sep 13 00:48:01.364527 systemd[1]: Finished ignition-fetch-offline.service.
Sep 13 00:48:01.366453 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:48:01.373217 systemd[1]: Starting ignition-kargs.service...
Sep 13 00:48:01.381980 ignition[734]: Ignition 2.14.0
Sep 13 00:48:01.381987 ignition[734]: Stage: kargs
Sep 13 00:48:01.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.382098 systemd[1]: Finished dracut-initqueue.service.
Sep 13 00:48:01.382228 ignition[734]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:48:01.386057 systemd[1]: Reached target remote-fs-pre.target.
Sep 13 00:48:01.382256 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:48:01.388306 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:48:01.383665 ignition[734]: kargs: kargs passed
Sep 13 00:48:01.390285 systemd[1]: Reached target remote-fs.target.
Sep 13 00:48:01.383759 ignition[734]: Ignition finished successfully
Sep 13 00:48:01.397093 systemd[1]: Starting dracut-pre-mount.service...
Sep 13 00:48:01.399253 systemd[1]: Finished ignition-kargs.service.
Sep 13 00:48:01.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.402948 systemd[1]: Starting ignition-disks.service...
Sep 13 00:48:01.409236 systemd[1]: Finished dracut-pre-mount.service.
Sep 13 00:48:01.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.426761 ignition[743]: Ignition 2.14.0
Sep 13 00:48:01.426774 ignition[743]: Stage: disks
Sep 13 00:48:01.426878 ignition[743]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:48:01.426888 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:48:01.427633 ignition[743]: disks: disks passed
Sep 13 00:48:01.427669 ignition[743]: Ignition finished successfully
Sep 13 00:48:01.431433 systemd[1]: Finished ignition-disks.service.
Sep 13 00:48:01.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.433926 systemd[1]: Reached target initrd-root-device.target.
Sep 13 00:48:01.435599 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:48:01.437258 systemd[1]: Reached target local-fs.target.
Sep 13 00:48:01.438829 systemd[1]: Reached target sysinit.target.
Sep 13 00:48:01.440409 systemd[1]: Reached target basic.target.
Sep 13 00:48:01.442766 systemd[1]: Starting systemd-fsck-root.service...
Sep 13 00:48:01.451715 systemd-resolved[200]: Detected conflict on linux IN A 10.0.0.82
Sep 13 00:48:01.451731 systemd-resolved[200]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Sep 13 00:48:01.455782 systemd-fsck[755]: ROOT: clean, 629/553520 files, 56028/553472 blocks
Sep 13 00:48:01.461043 systemd[1]: Finished systemd-fsck-root.service.
Sep 13 00:48:01.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.463057 systemd[1]: Mounting sysroot.mount...
Sep 13 00:48:01.472187 systemd[1]: Mounted sysroot.mount.
Sep 13 00:48:01.473498 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 00:48:01.472665 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 00:48:01.474584 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 00:48:01.475959 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 13 00:48:01.475998 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:48:01.476024 systemd[1]: Reached target ignition-diskful.target.
Sep 13 00:48:01.477732 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 00:48:01.479953 systemd[1]: Starting initrd-setup-root.service...
Sep 13 00:48:01.484862 initrd-setup-root[765]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:48:01.488396 initrd-setup-root[773]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:48:01.492032 initrd-setup-root[781]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:48:01.495792 initrd-setup-root[789]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:48:01.521385 systemd[1]: Finished initrd-setup-root.service.
Sep 13 00:48:01.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:01.523847 systemd[1]: Starting ignition-mount.service...
Sep 13 00:48:01.525977 systemd[1]: Starting sysroot-boot.service...
Sep 13 00:48:01.529985 bash[806]: umount: /sysroot/usr/share/oem: not mounted.
Sep 13 00:48:01.545422 ignition[807]: INFO : Ignition 2.14.0 Sep 13 00:48:01.545422 ignition[807]: INFO : Stage: mount Sep 13 00:48:01.547175 ignition[807]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:48:01.547175 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:48:01.547175 ignition[807]: INFO : mount: mount passed Sep 13 00:48:01.547175 ignition[807]: INFO : Ignition finished successfully Sep 13 00:48:01.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:01.547496 systemd[1]: Finished ignition-mount.service. Sep 13 00:48:01.552927 systemd[1]: Finished sysroot-boot.service. Sep 13 00:48:01.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:02.151217 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:48:02.160236 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Sep 13 00:48:02.160273 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:48:02.160283 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:48:02.161067 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:48:02.165290 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:48:02.167709 systemd[1]: Starting ignition-files.service... 
Sep 13 00:48:02.181888 ignition[836]: INFO : Ignition 2.14.0 Sep 13 00:48:02.181888 ignition[836]: INFO : Stage: files Sep 13 00:48:02.183986 ignition[836]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:48:02.183986 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:48:02.183986 ignition[836]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:48:02.183986 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:48:02.183986 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:48:02.191153 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:48:02.191153 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:48:02.191153 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:48:02.191153 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:48:02.191153 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:48:02.191153 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:48:02.191153 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:48:02.191153 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:48:02.191153 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:48:02.191153 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:48:02.191153 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 13 00:48:02.186248 unknown[836]: wrote ssh authorized keys file for user: core Sep 13 00:48:02.507486 systemd-networkd[716]: eth0: Gained IPv6LL Sep 13 00:48:02.559048 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Sep 13 00:48:04.099625 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:48:04.099625 ignition[836]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Sep 13 00:48:04.104063 ignition[836]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:48:04.104063 ignition[836]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:48:04.104063 ignition[836]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Sep 13 00:48:04.104063 ignition[836]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:48:04.104063 ignition[836]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:48:04.142399 ignition[836]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:48:04.144467 ignition[836]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:48:04.146145 
ignition[836]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:48:04.148072 ignition[836]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:48:04.149792 ignition[836]: INFO : files: files passed Sep 13 00:48:04.149792 ignition[836]: INFO : Ignition finished successfully Sep 13 00:48:04.151236 systemd[1]: Finished ignition-files.service. Sep 13 00:48:04.156030 kernel: kauditd_printk_skb: 25 callbacks suppressed Sep 13 00:48:04.156057 kernel: audit: type=1130 audit(1757724484.151:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.153181 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:48:04.156241 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:48:04.157140 systemd[1]: Starting ignition-quench.service... Sep 13 00:48:04.167863 kernel: audit: type=1130 audit(1757724484.160:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.167906 kernel: audit: type=1131 audit(1757724484.160:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:48:04.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.159890 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:48:04.159965 systemd[1]: Finished ignition-quench.service. Sep 13 00:48:04.171977 initrd-setup-root-after-ignition[861]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 13 00:48:04.175014 initrd-setup-root-after-ignition[863]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:48:04.177011 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:48:04.182232 kernel: audit: type=1130 audit(1757724484.177:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.177661 systemd[1]: Reached target ignition-complete.target. Sep 13 00:48:04.183785 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:48:04.198861 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:48:04.198975 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:48:04.205924 kernel: audit: type=1130 audit(1757724484.199:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:48:04.205947 kernel: audit: type=1131 audit(1757724484.199:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.199853 systemd[1]: Reached target initrd-fs.target. Sep 13 00:48:04.207685 systemd[1]: Reached target initrd.target. Sep 13 00:48:04.208227 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:48:04.209067 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:48:04.220144 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:48:04.224395 kernel: audit: type=1130 audit(1757724484.220:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.224452 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:48:04.235035 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:48:04.235719 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:48:04.237080 systemd[1]: Stopped target timers.target. Sep 13 00:48:04.238681 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Sep 13 00:48:04.244102 kernel: audit: type=1131 audit(1757724484.239:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.238777 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:48:04.240089 systemd[1]: Stopped target initrd.target. Sep 13 00:48:04.244431 systemd[1]: Stopped target basic.target. Sep 13 00:48:04.245844 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:48:04.248557 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:48:04.250191 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:48:04.251933 systemd[1]: Stopped target remote-fs.target. Sep 13 00:48:04.253400 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:48:04.253909 systemd[1]: Stopped target sysinit.target. Sep 13 00:48:04.254222 systemd[1]: Stopped target local-fs.target. Sep 13 00:48:04.257093 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:48:04.258440 systemd[1]: Stopped target swap.target. Sep 13 00:48:04.259897 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:48:04.265260 kernel: audit: type=1131 audit(1757724484.260:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.259996 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:48:04.261280 systemd[1]: Stopped target cryptsetup.target. 
Sep 13 00:48:04.271411 kernel: audit: type=1131 audit(1757724484.266:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.265806 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:48:04.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.265934 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:48:04.267383 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:48:04.267500 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:48:04.271991 systemd[1]: Stopped target paths.target. Sep 13 00:48:04.273645 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:48:04.278539 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:48:04.281074 systemd[1]: Stopped target slices.target. Sep 13 00:48:04.283334 systemd[1]: Stopped target sockets.target. Sep 13 00:48:04.285671 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:48:04.286863 systemd[1]: Closed iscsid.socket. Sep 13 00:48:04.288800 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:48:04.290325 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:48:04.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:48:04.293084 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:48:04.294386 systemd[1]: Stopped ignition-files.service. Sep 13 00:48:04.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.297968 systemd[1]: Stopping ignition-mount.service... Sep 13 00:48:04.300103 systemd[1]: Stopping iscsiuio.service... Sep 13 00:48:04.303145 systemd[1]: Stopping sysroot-boot.service... Sep 13 00:48:04.304859 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:48:04.305067 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:48:04.308583 ignition[876]: INFO : Ignition 2.14.0 Sep 13 00:48:04.308583 ignition[876]: INFO : Stage: umount Sep 13 00:48:04.308583 ignition[876]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:48:04.308583 ignition[876]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:48:04.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.314305 ignition[876]: INFO : umount: umount passed Sep 13 00:48:04.314305 ignition[876]: INFO : Ignition finished successfully Sep 13 00:48:04.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:48:04.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.310222 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:48:04.310356 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:48:04.312390 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 00:48:04.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.312478 systemd[1]: Stopped iscsiuio.service. Sep 13 00:48:04.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.314616 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:48:04.314688 systemd[1]: Stopped ignition-mount.service. Sep 13 00:48:04.316297 systemd[1]: Stopped target network.target. Sep 13 00:48:04.317603 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:48:04.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.317639 systemd[1]: Closed iscsiuio.socket. 
Sep 13 00:48:04.319586 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:48:04.319625 systemd[1]: Stopped ignition-disks.service. Sep 13 00:48:04.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.321219 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:48:04.321259 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:48:04.322913 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:48:04.322951 systemd[1]: Stopped ignition-setup.service. Sep 13 00:48:04.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.337000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:48:04.323931 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:48:04.325401 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:48:04.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.328058 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:48:04.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.328537 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:48:04.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.328608 systemd[1]: Finished initrd-cleanup.service. 
Sep 13 00:48:04.332324 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:48:04.332386 systemd-networkd[716]: eth0: DHCPv6 lease lost Sep 13 00:48:04.348000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:48:04.332431 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:48:04.335917 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:48:04.335997 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:48:04.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.337674 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:48:04.337699 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:48:04.339812 systemd[1]: Stopping network-cleanup.service... Sep 13 00:48:04.340736 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:48:04.340791 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:48:04.342527 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:48:04.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.342562 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:48:04.344412 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:48:04.344447 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:48:04.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:48:04.346244 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:48:04.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.349047 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:48:04.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.351859 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:48:04.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.351934 systemd[1]: Stopped network-cleanup.service. Sep 13 00:48:04.357111 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:48:04.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.357230 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:48:04.358786 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:48:04.358820 systemd[1]: Closed systemd-udevd-control.socket. 
Sep 13 00:48:04.360554 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:48:04.360632 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:48:04.362115 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:48:04.362150 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:48:04.364014 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:48:04.364048 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:48:04.365621 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:48:04.365656 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:48:04.366669 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:48:04.368073 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:48:04.368119 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 00:48:04.369075 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:48:04.369113 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:48:04.371003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:48:04.371040 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:48:04.372648 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 13 00:48:04.373009 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:48:04.373076 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:48:04.399121 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:48:04.399232 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:48:04.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.400962 systemd[1]: Reached target initrd-switch-root.target. 
Sep 13 00:48:04.402409 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:48:04.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:04.402450 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:48:04.405051 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:48:04.421547 systemd[1]: Switching root. Sep 13 00:48:04.442314 iscsid[727]: iscsid shutting down. Sep 13 00:48:04.443053 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Sep 13 00:48:04.443092 systemd-journald[198]: Journal stopped Sep 13 00:48:08.588894 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:48:08.589101 kernel: SELinux: Class anon_inode not defined in policy. Sep 13 00:48:08.589114 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:48:08.589124 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:48:08.589142 kernel: SELinux: policy capability open_perms=1 Sep 13 00:48:08.589151 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:48:08.589161 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:48:08.589171 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:48:08.589181 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:48:08.589193 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:48:08.589206 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:48:08.589217 systemd[1]: Successfully loaded SELinux policy in 44.345ms. Sep 13 00:48:08.589251 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.143ms. 
Sep 13 00:48:08.589265 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:48:08.589276 systemd[1]: Detected virtualization kvm.
Sep 13 00:48:08.589286 systemd[1]: Detected architecture x86-64.
Sep 13 00:48:08.589296 systemd[1]: Detected first boot.
Sep 13 00:48:08.589306 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:48:08.589318 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 00:48:08.589329 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:48:08.589351 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:48:08.589372 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:48:08.589384 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:48:08.589396 systemd[1]: iscsid.service: Deactivated successfully.
Sep 13 00:48:08.589584 systemd[1]: Stopped iscsid.service.
Sep 13 00:48:08.589606 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:48:08.589617 systemd[1]: Stopped initrd-switch-root.service.
Sep 13 00:48:08.589628 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:48:08.589647 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 00:48:08.589658 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 00:48:08.589668 systemd[1]: Created slice system-getty.slice.
Sep 13 00:48:08.589679 systemd[1]: Created slice system-modprobe.slice.
Sep 13 00:48:08.589690 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:48:08.589703 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:48:08.589715 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:48:08.589856 systemd[1]: Created slice user.slice.
Sep 13 00:48:08.589867 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:48:08.589880 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:48:08.589890 systemd[1]: Set up automount boot.automount.
Sep 13 00:48:08.589902 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:48:08.589913 systemd[1]: Stopped target initrd-switch-root.target.
Sep 13 00:48:08.589924 systemd[1]: Stopped target initrd-fs.target.
Sep 13 00:48:08.589934 systemd[1]: Stopped target initrd-root-fs.target.
Sep 13 00:48:08.589944 systemd[1]: Reached target integritysetup.target.
Sep 13 00:48:08.589955 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:48:08.589965 systemd[1]: Reached target remote-fs.target.
Sep 13 00:48:08.589976 systemd[1]: Reached target slices.target.
Sep 13 00:48:08.589988 systemd[1]: Reached target swap.target.
Sep 13 00:48:08.589999 systemd[1]: Reached target torcx.target.
Sep 13 00:48:08.590012 systemd[1]: Reached target veritysetup.target.
Sep 13 00:48:08.590026 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:48:08.590037 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:48:08.590048 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:48:08.590058 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:48:08.590068 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:48:08.590080 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:48:08.590090 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:48:08.590102 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:48:08.590113 systemd[1]: Mounting media.mount...
Sep 13 00:48:08.590124 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:48:08.590134 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:48:08.590144 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:48:08.590154 systemd[1]: Mounting tmp.mount...
Sep 13 00:48:08.590165 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:48:08.590179 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:48:08.590191 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:48:08.590202 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:48:08.590212 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:48:08.590222 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:48:08.590232 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:48:08.590243 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:48:08.590253 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:48:08.590264 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:48:08.590274 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:48:08.590286 systemd[1]: Stopped systemd-fsck-root.service.
Sep 13 00:48:08.590296 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:48:08.590312 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:48:08.590323 kernel: fuse: init (API version 7.34)
Sep 13 00:48:08.590332 kernel: loop: module loaded
Sep 13 00:48:08.590368 systemd[1]: Stopped systemd-journald.service.
Sep 13 00:48:08.590379 systemd[1]: Starting systemd-journald.service...
Sep 13 00:48:08.590390 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:48:08.590401 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:48:08.590411 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:48:08.590424 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:48:08.590434 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:48:08.590445 systemd[1]: Stopped verity-setup.service.
Sep 13 00:48:08.590455 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:48:08.590469 systemd-journald[987]: Journal started
Sep 13 00:48:08.590509 systemd-journald[987]: Runtime Journal (/run/log/journal/3ea64464c47145c196976f985825688e) is 6.0M, max 48.4M, 42.4M free.
Sep 13 00:48:04.510000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:48:04.700000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:48:04.700000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:48:04.700000 audit: BPF prog-id=10 op=LOAD
Sep 13 00:48:04.700000 audit: BPF prog-id=10 op=UNLOAD
Sep 13 00:48:04.700000 audit: BPF prog-id=11 op=LOAD
Sep 13 00:48:04.700000 audit: BPF prog-id=11 op=UNLOAD
Sep 13 00:48:04.790000 audit[910]: AVC avc: denied { associate } for pid=910 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 00:48:04.790000 audit[910]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878e4 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=893 pid=910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:48:04.790000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:48:04.792000 audit[910]: AVC avc: denied { associate } for pid=910 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 13 00:48:04.792000 audit[910]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879c9 a2=1ed a3=0 items=2 ppid=893 pid=910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:48:04.792000 audit: CWD cwd="/"
Sep 13 00:48:04.792000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:04.792000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:04.792000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:48:08.265000 audit: BPF prog-id=12 op=LOAD
Sep 13 00:48:08.265000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 00:48:08.265000 audit: BPF prog-id=13 op=LOAD
Sep 13 00:48:08.265000 audit: BPF prog-id=14 op=LOAD
Sep 13 00:48:08.265000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 00:48:08.265000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 00:48:08.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.468000 audit: BPF prog-id=12 op=UNLOAD
Sep 13 00:48:08.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.569000 audit: BPF prog-id=15 op=LOAD
Sep 13 00:48:08.569000 audit: BPF prog-id=16 op=LOAD
Sep 13 00:48:08.569000 audit: BPF prog-id=17 op=LOAD
Sep 13 00:48:08.569000 audit: BPF prog-id=13 op=UNLOAD
Sep 13 00:48:08.569000 audit: BPF prog-id=14 op=UNLOAD
Sep 13 00:48:08.587000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:48:08.587000 audit[987]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdcdfe21d0 a2=4000 a3=7ffdcdfe226c items=0 ppid=1 pid=987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:48:08.587000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:48:08.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:04.788395 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:48:08.263567 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:48:04.788620 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 00:48:08.263580 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 13 00:48:04.788639 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 00:48:08.266826 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:48:04.788673 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 13 00:48:04.788683 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 13 00:48:04.788720 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 13 00:48:04.788732 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 13 00:48:04.788974 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 13 00:48:04.789010 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 00:48:04.789022 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 00:48:04.789697 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 13 00:48:04.789732 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 13 00:48:04.789759 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 13 00:48:04.789774 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 13 00:48:04.789792 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 13 00:48:04.789806 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 13 00:48:07.968786 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:07Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:48:07.969076 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:07Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:48:07.969198 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:07Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:48:07.969395 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:07Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:48:07.969451 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:07Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 13 00:48:07.969525 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-09-13T00:48:07Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 13 00:48:08.593563 systemd[1]: Started systemd-journald.service.
Sep 13 00:48:08.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.594327 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:48:08.595365 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:48:08.596259 systemd[1]: Mounted media.mount.
Sep 13 00:48:08.597144 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:48:08.598048 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:48:08.598983 systemd[1]: Mounted tmp.mount.
Sep 13 00:48:08.599985 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:48:08.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.601257 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:48:08.601524 systemd[1]: Finished modprobe@configfs.service.
Sep 13 00:48:08.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.602939 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 00:48:08.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.604150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:48:08.604322 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:48:08.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.605516 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:48:08.605643 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:48:08.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.606654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:48:08.606807 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:48:08.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.607999 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:48:08.608124 systemd[1]: Finished modprobe@fuse.service.
Sep 13 00:48:08.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.609177 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:48:08.609387 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:48:08.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.610488 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:48:08.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.611795 systemd[1]: Finished systemd-network-generator.service.
Sep 13 00:48:08.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.613048 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 00:48:08.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.614396 systemd[1]: Reached target network-pre.target.
Sep 13 00:48:08.616561 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 00:48:08.618291 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 00:48:08.619439 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:48:08.621051 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 00:48:08.623272 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 00:48:08.624398 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:48:08.625530 systemd[1]: Starting systemd-random-seed.service...
Sep 13 00:48:08.626592 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:48:08.627876 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:48:08.630003 systemd-journald[987]: Time spent on flushing to /var/log/journal/3ea64464c47145c196976f985825688e is 14.979ms for 1144 entries.
Sep 13 00:48:08.630003 systemd-journald[987]: System Journal (/var/log/journal/3ea64464c47145c196976f985825688e) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:48:09.309136 systemd-journald[987]: Received client request to flush runtime journal.
Sep 13 00:48:08.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:08.630258 systemd[1]: Starting systemd-sysusers.service...
Sep 13 00:48:08.634037 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 00:48:08.635053 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 00:48:09.310073 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:48:08.647591 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:48:08.649862 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 00:48:08.849787 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:48:08.905467 systemd[1]: Finished systemd-sysusers.service.
Sep 13 00:48:08.908226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:48:08.928768 systemd[1]: Finished systemd-random-seed.service.
Sep 13 00:48:08.930215 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:48:08.961712 systemd[1]: Reached target first-boot-complete.target.
Sep 13 00:48:09.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:09.310544 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 00:48:09.315367 kernel: kauditd_printk_skb: 94 callbacks suppressed
Sep 13 00:48:09.315422 kernel: audit: type=1130 audit(1757724489.311:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:09.632421 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 00:48:09.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:09.636000 audit: BPF prog-id=18 op=LOAD
Sep 13 00:48:09.637888 kernel: audit: type=1130 audit(1757724489.633:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:09.637950 kernel: audit: type=1334 audit(1757724489.636:133): prog-id=18 op=LOAD
Sep 13 00:48:09.637969 kernel: audit: type=1334 audit(1757724489.637:134): prog-id=19 op=LOAD
Sep 13 00:48:09.637000 audit: BPF prog-id=19 op=LOAD
Sep 13 00:48:09.638755 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:48:09.638902 kernel: audit: type=1334 audit(1757724489.637:135): prog-id=7 op=UNLOAD
Sep 13 00:48:09.638939 kernel: audit: type=1334 audit(1757724489.637:136): prog-id=8 op=UNLOAD
Sep 13 00:48:09.637000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 00:48:09.637000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 00:48:09.657529 systemd-udevd[1018]: Using default interface naming scheme 'v252'.
Sep 13 00:48:09.670445 systemd[1]: Started systemd-udevd.service.
Sep 13 00:48:09.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:09.676562 kernel: audit: type=1130 audit(1757724489.671:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:09.674000 audit: BPF prog-id=20 op=LOAD
Sep 13 00:48:09.677091 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:48:09.679373 kernel: audit: type=1334 audit(1757724489.674:138): prog-id=20 op=LOAD
Sep 13 00:48:09.681000 audit: BPF prog-id=21 op=LOAD
Sep 13 00:48:09.682000 audit: BPF prog-id=22 op=LOAD
Sep 13 00:48:09.683529 kernel: audit: type=1334 audit(1757724489.681:139): prog-id=21 op=LOAD
Sep 13 00:48:09.683582 kernel: audit: type=1334 audit(1757724489.682:140): prog-id=22 op=LOAD
Sep 13 00:48:09.683000 audit: BPF prog-id=23 op=LOAD
Sep 13 00:48:09.684565 systemd[1]: Starting systemd-userdbd.service...
Sep 13 00:48:09.712334 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Sep 13 00:48:09.714421 systemd[1]: Started systemd-userdbd.service.
Sep 13 00:48:09.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:48:09.737204 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:48:09.744370 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:48:09.754365 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:48:09.864437 systemd-networkd[1029]: lo: Link UP
Sep 13 00:48:09.865032 systemd-networkd[1029]: lo: Gained carrier
Sep 13 00:48:09.793000 audit[1021]: AVC avc: denied { confidentiality } for pid=1021 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 00:48:09.793000 audit[1021]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561c851da580 a1=338ec a2=7f5ed4b69bc5 a3=5 items=110 ppid=1018 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:48:09.793000 audit: CWD cwd="/"
Sep 13 00:48:09.793000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=1 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=2 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=3 name=(null) inode=15487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=4 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=5 name=(null) inode=15488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=6 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=7 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=8 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=9 name=(null) inode=15490 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=10 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=11 name=(null) inode=15491 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=12 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=13 name=(null) inode=15492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=14 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=15 name=(null) inode=15493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=16 name=(null) inode=15489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=17 name=(null) inode=15494 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=18 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=19 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=20 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=21 name=(null) inode=15496 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=22 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=23 name=(null) inode=15497 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=24 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=25 name=(null) inode=15498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=26 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:48:09.793000 audit: PATH item=27 name=(null) inode=15499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0
cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=28 name=(null) inode=15495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=29 name=(null) inode=15500 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=30 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=31 name=(null) inode=15501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=32 name=(null) inode=15501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=33 name=(null) inode=15502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=34 name=(null) inode=15501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=35 name=(null) inode=15503 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=36 name=(null) inode=15501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:48:09.793000 audit: PATH item=37 name=(null) inode=15504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=38 name=(null) inode=15501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=39 name=(null) inode=15505 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=40 name=(null) inode=15501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=41 name=(null) inode=15506 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=42 name=(null) inode=15486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=43 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=44 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=45 name=(null) inode=15508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=46 
name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=47 name=(null) inode=15509 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=48 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=49 name=(null) inode=15510 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=50 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=51 name=(null) inode=15511 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=52 name=(null) inode=15507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=53 name=(null) inode=15512 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=55 name=(null) inode=15513 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=56 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=57 name=(null) inode=15514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=58 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=59 name=(null) inode=15515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=60 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=61 name=(null) inode=15516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=62 name=(null) inode=15516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=63 name=(null) inode=15517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=64 name=(null) inode=15516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=65 name=(null) inode=15518 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=66 name=(null) inode=15516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=67 name=(null) inode=15519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=68 name=(null) inode=15516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=69 name=(null) inode=15520 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=70 name=(null) inode=15516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=71 name=(null) inode=15521 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=72 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=73 name=(null) inode=15522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=74 name=(null) inode=15522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=75 name=(null) inode=15523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=76 name=(null) inode=15522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=77 name=(null) inode=15524 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=78 name=(null) inode=15522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=79 name=(null) inode=15525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=80 name=(null) inode=15522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=81 name=(null) inode=15526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=82 name=(null) inode=15522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=83 name=(null) inode=15527 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=84 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=85 name=(null) inode=15528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=86 name=(null) inode=15528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=87 name=(null) inode=15529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=88 name=(null) inode=15528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=89 name=(null) inode=15530 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=90 name=(null) inode=15528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=91 name=(null) inode=15531 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:48:09.793000 audit: PATH item=92 name=(null) inode=15528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=93 name=(null) inode=15532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=94 name=(null) inode=15528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=95 name=(null) inode=15533 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=96 name=(null) inode=15513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=97 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=98 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=99 name=(null) inode=15535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=100 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=101 
name=(null) inode=15536 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=102 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=103 name=(null) inode=15537 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=104 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=105 name=(null) inode=15538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=106 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=107 name=(null) inode=15539 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PATH item=109 name=(null) inode=15540 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:48:09.793000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 
00:48:09.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:09.870289 systemd-networkd[1029]: Enumeration completed Sep 13 00:48:09.870441 systemd[1]: Started systemd-networkd.service. Sep 13 00:48:09.871766 systemd-networkd[1029]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:48:09.872985 systemd-networkd[1029]: eth0: Link UP Sep 13 00:48:09.872991 systemd-networkd[1029]: eth0: Gained carrier Sep 13 00:48:09.881436 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 13 00:48:09.885925 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:48:09.886060 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:48:09.886184 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:48:09.886275 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:48:09.889498 systemd-networkd[1029]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:48:09.890370 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:48:09.934632 kernel: kvm: Nested Virtualization enabled Sep 13 00:48:09.934730 kernel: SVM: kvm: Nested Paging enabled Sep 13 00:48:09.934745 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 13 00:48:09.935786 kernel: SVM: Virtual GIF supported Sep 13 00:48:09.952374 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:48:09.978712 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:48:09.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:09.980892 systemd[1]: Starting lvm2-activation-early.service... 
Sep 13 00:48:09.989141 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:48:10.016362 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:48:10.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.017360 systemd[1]: Reached target cryptsetup.target. Sep 13 00:48:10.019069 systemd[1]: Starting lvm2-activation.service... Sep 13 00:48:10.022904 lvm[1055]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:48:10.048002 systemd[1]: Finished lvm2-activation.service. Sep 13 00:48:10.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.062492 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:48:10.063333 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:48:10.063366 systemd[1]: Reached target local-fs.target. Sep 13 00:48:10.064159 systemd[1]: Reached target machines.target. Sep 13 00:48:10.065866 systemd[1]: Starting ldconfig.service... Sep 13 00:48:10.066829 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:48:10.066871 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:48:10.067836 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:48:10.069853 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:48:10.071916 systemd[1]: Starting systemd-machine-id-commit.service... 
Sep 13 00:48:10.074305 systemd[1]: Starting systemd-sysext.service... Sep 13 00:48:10.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.076965 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1057 (bootctl) Sep 13 00:48:10.078045 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:48:10.079826 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:48:10.087808 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:48:10.091616 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:48:10.091763 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:48:10.103446 kernel: loop0: detected capacity change from 0 to 224512 Sep 13 00:48:10.108886 systemd-fsck[1064]: fsck.fat 4.2 (2021-01-31) Sep 13 00:48:10.108886 systemd-fsck[1064]: /dev/vda1: 791 files, 120781/258078 clusters Sep 13 00:48:10.110660 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:48:10.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.112992 systemd[1]: Mounting boot.mount... Sep 13 00:48:10.132002 systemd[1]: Mounted boot.mount. Sep 13 00:48:10.354391 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:48:10.356168 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:48:10.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:48:10.367024 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:48:10.367769 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:48:10.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.370448 kernel: loop1: detected capacity change from 0 to 224512 Sep 13 00:48:10.376685 (sd-sysext)[1071]: Using extensions 'kubernetes'. Sep 13 00:48:10.377134 (sd-sysext)[1071]: Merged extensions into '/usr'. Sep 13 00:48:10.393033 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:48:10.394493 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:48:10.395467 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:48:10.397222 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:48:10.399395 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:48:10.401459 systemd[1]: Starting modprobe@loop.service... Sep 13 00:48:10.402421 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:48:10.402616 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:48:10.402737 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:48:10.405271 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:48:10.406460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:48:10.406611 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 13 00:48:10.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.408030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:48:10.408160 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:48:10.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.409390 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:48:10.409505 systemd[1]: Finished modprobe@loop.service. Sep 13 00:48:10.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.410863 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 13 00:48:10.410960 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:48:10.411928 systemd[1]: Finished systemd-sysext.service. Sep 13 00:48:10.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.413931 systemd[1]: Starting ensure-sysext.service... Sep 13 00:48:10.415687 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:48:10.421272 systemd[1]: Reloading. Sep 13 00:48:10.428862 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:48:10.430014 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:48:10.431532 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:48:10.485030 ldconfig[1056]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:48:10.521474 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-13T00:48:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:48:10.521862 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-13T00:48:10Z" level=info msg="torcx already run" Sep 13 00:48:10.576484 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:48:10.576500 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Sep 13 00:48:10.593210 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:48:10.642000 audit: BPF prog-id=24 op=LOAD Sep 13 00:48:10.642000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:48:10.643000 audit: BPF prog-id=25 op=LOAD Sep 13 00:48:10.643000 audit: BPF prog-id=26 op=LOAD Sep 13 00:48:10.643000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:48:10.643000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:48:10.644000 audit: BPF prog-id=27 op=LOAD Sep 13 00:48:10.644000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:48:10.645000 audit: BPF prog-id=28 op=LOAD Sep 13 00:48:10.645000 audit: BPF prog-id=29 op=LOAD Sep 13 00:48:10.645000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:48:10.645000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:48:10.646000 audit: BPF prog-id=30 op=LOAD Sep 13 00:48:10.646000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:48:10.646000 audit: BPF prog-id=31 op=LOAD Sep 13 00:48:10.646000 audit: BPF prog-id=32 op=LOAD Sep 13 00:48:10.646000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:48:10.646000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:48:10.650072 systemd[1]: Finished ldconfig.service. Sep 13 00:48:10.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.651221 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:48:10.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.656676 systemd[1]: Starting audit-rules.service... 
Sep 13 00:48:10.659271 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:48:10.661820 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:48:10.663000 audit: BPF prog-id=33 op=LOAD Sep 13 00:48:10.665000 audit: BPF prog-id=34 op=LOAD Sep 13 00:48:10.664502 systemd[1]: Starting systemd-resolved.service... Sep 13 00:48:10.666878 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:48:10.668876 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:48:10.670485 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:48:10.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:48:10.686167 augenrules[1160]: No rules Sep 13 00:48:10.685000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:48:10.685000 audit[1160]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd887b3380 a2=420 a3=0 items=0 ppid=1140 pid=1160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:48:10.685000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:48:10.690908 systemd[1]: Finished audit-rules.service. Sep 13 00:48:10.693410 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:48:10.696782 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:48:10.698894 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:48:10.701433 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:48:10.703988 systemd[1]: Starting modprobe@loop.service... 
Sep 13 00:48:10.704935 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:48:10.705403 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:48:10.706819 systemd[1]: Starting systemd-update-done.service... Sep 13 00:48:10.707931 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:48:10.708786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:48:10.708910 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:48:10.710358 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:48:10.710462 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:48:10.711873 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:48:10.711988 systemd[1]: Finished modprobe@loop.service. Sep 13 00:48:10.713257 systemd[1]: Finished systemd-update-done.service. Sep 13 00:48:10.717498 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:48:10.718978 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:48:10.721251 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:48:10.724475 systemd[1]: Starting modprobe@loop.service... Sep 13 00:48:10.725792 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:48:10.726250 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:48:10.726677 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 13 00:48:10.728837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:48:10.729110 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:48:10.730790 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:48:10.731116 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:48:10.733517 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:48:10.733808 systemd[1]: Finished modprobe@loop.service. Sep 13 00:48:10.736415 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:48:10.740665 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:48:10.741061 systemd-resolved[1146]: Positive Trust Anchors: Sep 13 00:48:10.741079 systemd-resolved[1146]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:48:10.741115 systemd-resolved[1146]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:48:10.741967 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:48:10.743856 systemd[1]: Starting modprobe@drm.service... Sep 13 00:48:10.745621 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:48:11.522670 systemd-timesyncd[1150]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:48:11.522682 systemd[1]: Starting modprobe@loop.service... Sep 13 00:48:11.522735 systemd-timesyncd[1150]: Initial clock synchronization to Sat 2025-09-13 00:48:11.522592 UTC. Sep 13 00:48:11.523501 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:48:11.523647 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:48:11.524882 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:48:11.525874 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:48:11.526881 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:48:11.528423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:48:11.528590 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:48:11.529919 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:48:11.530030 systemd[1]: Finished modprobe@drm.service. Sep 13 00:48:11.531241 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:48:11.531353 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:48:11.532700 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:48:11.532821 systemd[1]: Finished modprobe@loop.service. Sep 13 00:48:11.534067 systemd-resolved[1146]: Defaulting to hostname 'linux'. Sep 13 00:48:11.535282 systemd[1]: Finished ensure-sysext.service. Sep 13 00:48:11.537000 systemd[1]: Started systemd-resolved.service. Sep 13 00:48:11.537864 systemd[1]: Reached target network.target. Sep 13 00:48:11.538736 systemd[1]: Reached target nss-lookup.target. Sep 13 00:48:11.539564 systemd[1]: Reached target time-set.target. Sep 13 00:48:11.540456 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:48:11.540479 systemd[1]: Reached target sysinit.target. Sep 13 00:48:11.541390 systemd[1]: Started motdgen.path. Sep 13 00:48:11.542116 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Sep 13 00:48:11.543346 systemd[1]: Started logrotate.timer. Sep 13 00:48:11.544186 systemd[1]: Started mdadm.timer. Sep 13 00:48:11.544964 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:48:11.545967 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:48:11.545993 systemd[1]: Reached target paths.target. Sep 13 00:48:11.546803 systemd[1]: Reached target timers.target. Sep 13 00:48:11.547927 systemd[1]: Listening on dbus.socket. Sep 13 00:48:11.549726 systemd[1]: Starting docker.socket... Sep 13 00:48:11.552934 systemd[1]: Listening on sshd.socket. Sep 13 00:48:11.553762 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:48:11.553803 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:48:11.554152 systemd[1]: Listening on docker.socket. Sep 13 00:48:11.555029 systemd[1]: Reached target sockets.target. Sep 13 00:48:11.555803 systemd[1]: Reached target basic.target. Sep 13 00:48:11.556560 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:48:11.556582 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:48:11.557444 systemd[1]: Starting containerd.service... Sep 13 00:48:11.559261 systemd[1]: Starting dbus.service... Sep 13 00:48:11.560937 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:48:11.564034 systemd[1]: Starting extend-filesystems.service... Sep 13 00:48:11.565036 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:48:11.566229 systemd[1]: Starting motdgen.service... 
Sep 13 00:48:11.567291 jq[1182]: false Sep 13 00:48:11.568525 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:48:11.571356 systemd[1]: Starting sshd-keygen.service... Sep 13 00:48:11.575786 systemd[1]: Starting systemd-logind.service... Sep 13 00:48:11.577073 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:48:11.577253 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:48:11.578562 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:48:11.579646 systemd[1]: Starting update-engine.service... Sep 13 00:48:11.580913 dbus-daemon[1181]: [system] SELinux support is enabled Sep 13 00:48:11.582198 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:48:11.584387 systemd[1]: Started dbus.service. Sep 13 00:48:11.586610 jq[1198]: true Sep 13 00:48:11.589749 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:48:11.589919 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:48:11.590232 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:48:11.590392 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Sep 13 00:48:11.594819 extend-filesystems[1183]: Found loop1 Sep 13 00:48:11.594819 extend-filesystems[1183]: Found sr0 Sep 13 00:48:11.594819 extend-filesystems[1183]: Found vda Sep 13 00:48:11.594819 extend-filesystems[1183]: Found vda1 Sep 13 00:48:11.594819 extend-filesystems[1183]: Found vda2 Sep 13 00:48:11.594819 extend-filesystems[1183]: Found vda3 Sep 13 00:48:11.594819 extend-filesystems[1183]: Found usr Sep 13 00:48:11.594819 extend-filesystems[1183]: Found vda4 Sep 13 00:48:11.594819 extend-filesystems[1183]: Found vda6 Sep 13 00:48:11.594780 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:48:11.671547 jq[1202]: true Sep 13 00:48:11.671781 extend-filesystems[1183]: Found vda7 Sep 13 00:48:11.671781 extend-filesystems[1183]: Found vda9 Sep 13 00:48:11.671781 extend-filesystems[1183]: Checking size of /dev/vda9 Sep 13 00:48:11.594809 systemd[1]: Reached target system-config.target. Sep 13 00:48:11.597054 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:48:11.597075 systemd[1]: Reached target user-config.target. Sep 13 00:48:11.601728 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:48:11.601926 systemd[1]: Finished motdgen.service. Sep 13 00:48:11.682184 update_engine[1196]: I0913 00:48:11.680174 1196 main.cc:92] Flatcar Update Engine starting Sep 13 00:48:11.684386 systemd[1]: Started update-engine.service. Sep 13 00:48:11.684534 update_engine[1196]: I0913 00:48:11.684499 1196 update_check_scheduler.cc:74] Next update check in 5m5s Sep 13 00:48:11.687123 extend-filesystems[1183]: Resized partition /dev/vda9 Sep 13 00:48:11.688784 systemd[1]: Started locksmithd.service. 
Sep 13 00:48:11.689515 extend-filesystems[1219]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:48:11.695526 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:48:11.703878 systemd-logind[1195]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:48:11.703913 systemd-logind[1195]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:48:11.704593 systemd-logind[1195]: New seat seat0. Sep 13 00:48:11.713916 systemd[1]: Started systemd-logind.service. Sep 13 00:48:11.731861 env[1203]: time="2025-09-13T00:48:11.731764432Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:48:11.733517 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:48:11.751109 env[1203]: time="2025-09-13T00:48:11.751075998Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:48:11.759139 extend-filesystems[1219]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:48:11.759139 extend-filesystems[1219]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:48:11.759139 extend-filesystems[1219]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:48:11.764268 extend-filesystems[1183]: Resized filesystem in /dev/vda9 Sep 13 00:48:11.761276 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:48:11.765546 env[1203]: time="2025-09-13T00:48:11.760059696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:48:11.765546 env[1203]: time="2025-09-13T00:48:11.761888125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:48:11.765546 env[1203]: time="2025-09-13T00:48:11.762142802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:48:11.765546 env[1203]: time="2025-09-13T00:48:11.763030467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:48:11.765546 env[1203]: time="2025-09-13T00:48:11.763049483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:48:11.765546 env[1203]: time="2025-09-13T00:48:11.763063289Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:48:11.765546 env[1203]: time="2025-09-13T00:48:11.763071985Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:48:11.765546 env[1203]: time="2025-09-13T00:48:11.763453290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:48:11.765951 bash[1230]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:48:11.764282 systemd[1]: Finished extend-filesystems.service. Sep 13 00:48:11.767093 env[1203]: time="2025-09-13T00:48:11.767065374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:48:11.767355 env[1203]: time="2025-09-13T00:48:11.767320222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:48:11.767515 env[1203]: time="2025-09-13T00:48:11.767457680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:48:11.767771 env[1203]: time="2025-09-13T00:48:11.767717787Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:48:11.767861 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:48:11.768081 env[1203]: time="2025-09-13T00:48:11.768032427Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:48:11.775230 env[1203]: time="2025-09-13T00:48:11.775164332Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:48:11.775334 env[1203]: time="2025-09-13T00:48:11.775235426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:48:11.775334 env[1203]: time="2025-09-13T00:48:11.775255904Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:48:11.775334 env[1203]: time="2025-09-13T00:48:11.775317800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:48:11.775397 env[1203]: time="2025-09-13T00:48:11.775345212Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:48:11.775397 env[1203]: time="2025-09-13T00:48:11.775362464Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:48:11.775397 env[1203]: time="2025-09-13T00:48:11.775379025Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Sep 13 00:48:11.775460 env[1203]: time="2025-09-13T00:48:11.775396137Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:48:11.775460 env[1203]: time="2025-09-13T00:48:11.775435912Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:48:11.775460 env[1203]: time="2025-09-13T00:48:11.775453815Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:48:11.775547 env[1203]: time="2025-09-13T00:48:11.775470767Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:48:11.775547 env[1203]: time="2025-09-13T00:48:11.775502186Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:48:11.775718 env[1203]: time="2025-09-13T00:48:11.775680380Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:48:11.775826 env[1203]: time="2025-09-13T00:48:11.775797159Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:48:11.776123 env[1203]: time="2025-09-13T00:48:11.776100428Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:48:11.776169 env[1203]: time="2025-09-13T00:48:11.776145803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:48:11.776169 env[1203]: time="2025-09-13T00:48:11.776164358Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:48:11.776274 env[1203]: time="2025-09-13T00:48:11.776245129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Sep 13 00:48:11.776274 env[1203]: time="2025-09-13T00:48:11.776269996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:48:11.776357 env[1203]: time="2025-09-13T00:48:11.776284223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:48:11.776357 env[1203]: time="2025-09-13T00:48:11.776298449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:48:11.776357 env[1203]: time="2025-09-13T00:48:11.776315972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:48:11.776357 env[1203]: time="2025-09-13T00:48:11.776330299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:48:11.776357 env[1203]: time="2025-09-13T00:48:11.776345037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:48:11.776454 env[1203]: time="2025-09-13T00:48:11.776357630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:48:11.776454 env[1203]: time="2025-09-13T00:48:11.776373971Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:48:11.776587 env[1203]: time="2025-09-13T00:48:11.776552105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:48:11.776634 env[1203]: time="2025-09-13T00:48:11.776600345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:48:11.776634 env[1203]: time="2025-09-13T00:48:11.776617317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 13 00:48:11.776685 env[1203]: time="2025-09-13T00:48:11.776632456Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:48:11.776685 env[1203]: time="2025-09-13T00:48:11.776651722Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:48:11.776685 env[1203]: time="2025-09-13T00:48:11.776666189Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:48:11.776756 env[1203]: time="2025-09-13T00:48:11.776710562Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:48:11.776779 env[1203]: time="2025-09-13T00:48:11.776765675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:48:11.777127 env[1203]: time="2025-09-13T00:48:11.777052954Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:48:11.777809 env[1203]: time="2025-09-13T00:48:11.777129147Z" level=info msg="Connect containerd service" Sep 13 00:48:11.777809 env[1203]: time="2025-09-13T00:48:11.777188669Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:48:11.778054 env[1203]: time="2025-09-13T00:48:11.778009528Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:48:11.778262 env[1203]: time="2025-09-13T00:48:11.778208541Z" level=info msg="Start subscribing containerd event" Sep 13 00:48:11.778323 env[1203]: time="2025-09-13T00:48:11.778306645Z" level=info msg="Start recovering state" Sep 13 00:48:11.778369 env[1203]: 
time="2025-09-13T00:48:11.778351459Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:48:11.778411 env[1203]: time="2025-09-13T00:48:11.778396494Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:48:11.778580 systemd[1]: Started containerd.service. Sep 13 00:48:11.780533 env[1203]: time="2025-09-13T00:48:11.780181201Z" level=info msg="containerd successfully booted in 0.057062s" Sep 13 00:48:11.780533 env[1203]: time="2025-09-13T00:48:11.780312507Z" level=info msg="Start event monitor" Sep 13 00:48:11.780533 env[1203]: time="2025-09-13T00:48:11.780369424Z" level=info msg="Start snapshots syncer" Sep 13 00:48:11.780533 env[1203]: time="2025-09-13T00:48:11.780402907Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:48:11.780533 env[1203]: time="2025-09-13T00:48:11.780423185Z" level=info msg="Start streaming server" Sep 13 00:48:11.801145 locksmithd[1220]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:48:11.867793 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:48:11.867857 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:48:12.037361 sshd_keygen[1200]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:48:12.103949 systemd-networkd[1029]: eth0: Gained IPv6LL Sep 13 00:48:12.106278 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:48:12.107747 systemd[1]: Reached target network-online.target. Sep 13 00:48:12.110690 systemd[1]: Starting kubelet.service... Sep 13 00:48:12.127415 systemd[1]: Finished sshd-keygen.service. Sep 13 00:48:12.130231 systemd[1]: Starting issuegen.service... Sep 13 00:48:12.135579 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:48:12.135776 systemd[1]: Finished issuegen.service. 
Sep 13 00:48:12.138587 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:48:12.144522 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:48:12.146890 systemd[1]: Started getty@tty1.service. Sep 13 00:48:12.148838 systemd[1]: Started serial-getty@ttyS0.service. Sep 13 00:48:12.149997 systemd[1]: Reached target getty.target. Sep 13 00:48:12.504048 systemd[1]: Created slice system-sshd.slice. Sep 13 00:48:12.507150 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:52836.service. Sep 13 00:48:12.566930 sshd[1258]: Accepted publickey for core from 10.0.0.1 port 52836 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:48:12.569176 sshd[1258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:12.578330 systemd[1]: Created slice user-500.slice. Sep 13 00:48:12.580330 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:48:12.585255 systemd-logind[1195]: New session 1 of user core. Sep 13 00:48:12.593114 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:48:12.670756 systemd[1]: Starting user@500.service... Sep 13 00:48:12.675464 (systemd)[1261]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:12.759799 systemd[1261]: Queued start job for default target default.target. Sep 13 00:48:12.760956 systemd[1261]: Reached target paths.target. Sep 13 00:48:12.760981 systemd[1261]: Reached target sockets.target. Sep 13 00:48:12.760993 systemd[1261]: Reached target timers.target. Sep 13 00:48:12.761003 systemd[1261]: Reached target basic.target. Sep 13 00:48:12.761050 systemd[1261]: Reached target default.target. Sep 13 00:48:12.761077 systemd[1261]: Startup finished in 76ms. Sep 13 00:48:12.761307 systemd[1]: Started user@500.service. Sep 13 00:48:12.763719 systemd[1]: Started session-1.scope. Sep 13 00:48:12.821708 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:52846.service. 
Sep 13 00:48:12.889805 sshd[1270]: Accepted publickey for core from 10.0.0.1 port 52846 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:48:12.891545 sshd[1270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:12.896733 systemd-logind[1195]: New session 2 of user core. Sep 13 00:48:12.897917 systemd[1]: Started session-2.scope. Sep 13 00:48:12.993878 sshd[1270]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:12.998604 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:52856.service. Sep 13 00:48:13.000617 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:52846.service: Deactivated successfully. Sep 13 00:48:13.001521 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:48:13.003337 systemd-logind[1195]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:48:13.004364 systemd-logind[1195]: Removed session 2. Sep 13 00:48:13.042326 sshd[1275]: Accepted publickey for core from 10.0.0.1 port 52856 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:48:13.044035 sshd[1275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:13.047677 systemd-logind[1195]: New session 3 of user core. Sep 13 00:48:13.048495 systemd[1]: Started session-3.scope. Sep 13 00:48:13.104030 sshd[1275]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:13.106703 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:52856.service: Deactivated successfully. Sep 13 00:48:13.107538 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:48:13.108072 systemd-logind[1195]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:48:13.108806 systemd-logind[1195]: Removed session 3. Sep 13 00:48:13.571228 systemd[1]: Started kubelet.service. Sep 13 00:48:13.572587 systemd[1]: Reached target multi-user.target. Sep 13 00:48:13.574844 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Sep 13 00:48:13.582706 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:48:13.582840 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:48:13.584039 systemd[1]: Startup finished in 970ms (kernel) + 5.481s (initrd) + 8.344s (userspace) = 14.796s. Sep 13 00:48:14.210264 kubelet[1284]: E0913 00:48:14.210182 1284 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:48:14.211915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:48:14.212034 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:48:14.212278 systemd[1]: kubelet.service: Consumed 1.989s CPU time. Sep 13 00:48:23.108584 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:55444.service. Sep 13 00:48:23.149879 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 55444 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:48:23.151823 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:23.156059 systemd-logind[1195]: New session 4 of user core. Sep 13 00:48:23.157100 systemd[1]: Started session-4.scope. Sep 13 00:48:23.211350 sshd[1295]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:23.214219 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:55444.service: Deactivated successfully. Sep 13 00:48:23.214763 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:48:23.215203 systemd-logind[1195]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:48:23.216153 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:55460.service. Sep 13 00:48:23.216762 systemd-logind[1195]: Removed session 4. 
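The kubelet crash above is a fail-fast on a missing config file: the unit exits with status 1 because `/var/lib/kubelet/config.yaml` does not exist yet (it is typically written later by `kubeadm` or a provisioning step). A minimal sketch of that check, using a temp path as a stand-in for the real one so the snippet is self-contained — this mirrors the error message in the log, not the actual kubelet source:

```shell
# Stand-in for /var/lib/kubelet/config.yaml; mktemp -d guarantees the
# file is absent, reproducing the state at first boot.
cfg="$(mktemp -d)/config.yaml"

# The kubelet stats the --config path before doing anything else and
# exits non-zero if it is missing, which is what systemd records as
# "status=1/FAILURE" above.
if [ ! -f "$cfg" ]; then
  echo "failed to load kubelet config file: no such file or directory"
fi
```

Once the config file is provisioned, the unit's restart (or `systemctl start kubelet`) succeeds without this error, which matches the later restart sequence in the log.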
Sep 13 00:48:23.255808 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 55460 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:48:23.257174 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:23.260598 systemd-logind[1195]: New session 5 of user core. Sep 13 00:48:23.261359 systemd[1]: Started session-5.scope. Sep 13 00:48:23.309999 sshd[1301]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:23.312513 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:55460.service: Deactivated successfully. Sep 13 00:48:23.312999 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:48:23.313478 systemd-logind[1195]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:48:23.314398 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:55472.service. Sep 13 00:48:23.315061 systemd-logind[1195]: Removed session 5. Sep 13 00:48:23.353586 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 55472 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:48:23.355268 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:23.358552 systemd-logind[1195]: New session 6 of user core. Sep 13 00:48:23.359302 systemd[1]: Started session-6.scope. Sep 13 00:48:23.414654 sshd[1308]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:23.417228 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:55472.service: Deactivated successfully. Sep 13 00:48:23.417741 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:48:23.418211 systemd-logind[1195]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:48:23.419196 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:55486.service. Sep 13 00:48:23.419977 systemd-logind[1195]: Removed session 6. 
Sep 13 00:48:23.458425 sshd[1314]: Accepted publickey for core from 10.0.0.1 port 55486 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:48:23.459630 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:23.462827 systemd-logind[1195]: New session 7 of user core. Sep 13 00:48:23.463552 systemd[1]: Started session-7.scope. Sep 13 00:48:23.517561 sudo[1317]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:48:23.517746 sudo[1317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:48:23.529599 systemd[1]: Starting coreos-metadata.service... Sep 13 00:48:23.536047 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:48:23.536180 systemd[1]: Finished coreos-metadata.service. Sep 13 00:48:24.195586 systemd[1]: Stopped kubelet.service. Sep 13 00:48:24.195737 systemd[1]: kubelet.service: Consumed 1.989s CPU time. Sep 13 00:48:24.197562 systemd[1]: Starting kubelet.service... Sep 13 00:48:24.219817 systemd[1]: Reloading. Sep 13 00:48:24.287874 /usr/lib/systemd/system-generators/torcx-generator[1377]: time="2025-09-13T00:48:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:48:24.287905 /usr/lib/systemd/system-generators/torcx-generator[1377]: time="2025-09-13T00:48:24Z" level=info msg="torcx already run" Sep 13 00:48:24.814921 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:48:24.814938 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Sep 13 00:48:24.831624 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:48:24.907709 systemd[1]: Started kubelet.service. Sep 13 00:48:24.909392 systemd[1]: Stopping kubelet.service... Sep 13 00:48:24.915399 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:48:24.915613 systemd[1]: Stopped kubelet.service. Sep 13 00:48:24.917156 systemd[1]: Starting kubelet.service... Sep 13 00:48:25.025856 systemd[1]: Started kubelet.service. Sep 13 00:48:25.081590 kubelet[1422]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:48:25.081590 kubelet[1422]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:48:25.081590 kubelet[1422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
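The deprecation warnings above say `--container-runtime-endpoint` and `--volume-plugin-dir` should move into the file passed via `--config`. A minimal sketch of the corresponding KubeletConfiguration fields — the containerd socket path is an assumption (the log confirms containerd is the runtime but not its socket location); the volume plugin dir is taken from the Flexvolume path logged further below:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Assumed default containerd socket; adjust to the actual endpoint
# previously passed as --container-runtime-endpoint.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Path from the Flexvolume probe message in this log.
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```

Note that `--pod-infra-container-image` has no KubeletConfiguration equivalent; per the warning it is slated for removal in 1.35, with the sandbox image coming from the CRI instead.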
Sep 13 00:48:25.082074 kubelet[1422]: I0913 00:48:25.081574 1422 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:48:25.392561 kubelet[1422]: I0913 00:48:25.392423 1422 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:48:25.392561 kubelet[1422]: I0913 00:48:25.392457 1422 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:48:25.392782 kubelet[1422]: I0913 00:48:25.392758 1422 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:48:25.418374 kubelet[1422]: I0913 00:48:25.418322 1422 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:48:25.426420 kubelet[1422]: E0913 00:48:25.426361 1422 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:48:25.426420 kubelet[1422]: I0913 00:48:25.426404 1422 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:48:25.431244 kubelet[1422]: I0913 00:48:25.431212 1422 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:48:25.432134 kubelet[1422]: I0913 00:48:25.432054 1422 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:48:25.432391 kubelet[1422]: I0913 00:48:25.432106 1422 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.82","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:48:25.432391 kubelet[1422]: I0913 00:48:25.432354 1422 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 13 00:48:25.432391 kubelet[1422]: I0913 00:48:25.432367 1422 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:48:25.432587 kubelet[1422]: I0913 00:48:25.432560 1422 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:48:25.439117 kubelet[1422]: I0913 00:48:25.439076 1422 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:48:25.439117 kubelet[1422]: I0913 00:48:25.439108 1422 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:48:25.439220 kubelet[1422]: I0913 00:48:25.439144 1422 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:48:25.439220 kubelet[1422]: I0913 00:48:25.439161 1422 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:48:25.439583 kubelet[1422]: E0913 00:48:25.439549 1422 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:25.439913 kubelet[1422]: E0913 00:48:25.439876 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:25.442634 kubelet[1422]: I0913 00:48:25.442594 1422 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:48:25.443143 kubelet[1422]: I0913 00:48:25.443117 1422 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:48:25.443883 kubelet[1422]: W0913 00:48:25.443853 1422 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 13 00:48:25.444579 kubelet[1422]: W0913 00:48:25.444531 1422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.82" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Sep 13 00:48:25.444639 kubelet[1422]: W0913 00:48:25.444588 1422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Sep 13 00:48:25.444639 kubelet[1422]: E0913 00:48:25.444597 1422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.82\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 13 00:48:25.444639 kubelet[1422]: E0913 00:48:25.444613 1422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 13 00:48:25.445700 kubelet[1422]: I0913 00:48:25.445677 1422 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:48:25.445748 kubelet[1422]: I0913 00:48:25.445726 1422 server.go:1287] "Started kubelet" Sep 13 00:48:25.445845 kubelet[1422]: I0913 00:48:25.445798 1422 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:48:25.445991 kubelet[1422]: I0913 00:48:25.445933 1422 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:48:25.446354 kubelet[1422]: I0913 00:48:25.446337 1422 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:48:25.446818 kubelet[1422]: I0913 
00:48:25.446784 1422 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:48:25.448595 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 13 00:48:25.448808 kubelet[1422]: I0913 00:48:25.448787 1422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:48:25.449829 kubelet[1422]: I0913 00:48:25.449803 1422 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:48:25.453758 kubelet[1422]: E0913 00:48:25.453724 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.82\" not found" Sep 13 00:48:25.453880 kubelet[1422]: I0913 00:48:25.453864 1422 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:48:25.454126 kubelet[1422]: I0913 00:48:25.454110 1422 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:48:25.454271 kubelet[1422]: I0913 00:48:25.454257 1422 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:48:25.455064 kubelet[1422]: E0913 00:48:25.455048 1422 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:48:25.455551 kubelet[1422]: I0913 00:48:25.455535 1422 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:48:25.455729 kubelet[1422]: I0913 00:48:25.455708 1422 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:48:25.456887 kubelet[1422]: I0913 00:48:25.456853 1422 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:48:25.466969 kubelet[1422]: E0913 00:48:25.466902 1422 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.82\" not found" node="10.0.0.82" Sep 13 00:48:25.471443 kubelet[1422]: I0913 00:48:25.471423 1422 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:48:25.471443 kubelet[1422]: I0913 00:48:25.471438 1422 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:48:25.471571 kubelet[1422]: I0913 00:48:25.471461 1422 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:48:25.554461 kubelet[1422]: E0913 00:48:25.554424 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.82\" not found" Sep 13 00:48:25.655067 kubelet[1422]: E0913 00:48:25.654902 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.82\" not found" Sep 13 00:48:25.755342 kubelet[1422]: E0913 00:48:25.755271 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.82\" not found" Sep 13 00:48:25.855762 kubelet[1422]: E0913 00:48:25.855720 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.82\" not found" Sep 13 00:48:25.956724 kubelet[1422]: E0913 00:48:25.956684 1422 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"10.0.0.82\" not found" Sep 13 00:48:26.057145 kubelet[1422]: E0913 00:48:26.057105 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.82\" not found" Sep 13 00:48:26.157550 kubelet[1422]: E0913 00:48:26.157516 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.82\" not found" Sep 13 00:48:26.258076 kubelet[1422]: E0913 00:48:26.257945 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.82\" not found" Sep 13 00:48:26.358438 kubelet[1422]: E0913 00:48:26.358400 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.82\" not found" Sep 13 00:48:26.394580 kubelet[1422]: I0913 00:48:26.394547 1422 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 13 00:48:26.394721 kubelet[1422]: W0913 00:48:26.394693 1422 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 13 00:48:26.406648 kubelet[1422]: I0913 00:48:26.406564 1422 policy_none.go:49] "None policy: Start" Sep 13 00:48:26.406648 kubelet[1422]: I0913 00:48:26.406608 1422 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:48:26.406648 kubelet[1422]: I0913 00:48:26.406630 1422 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:48:26.413979 systemd[1]: Created slice kubepods.slice. Sep 13 00:48:26.418414 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 00:48:26.421189 systemd[1]: Created slice kubepods-besteffort.slice. 
Sep 13 00:48:26.431233 kubelet[1422]: I0913 00:48:26.431197 1422 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:48:26.431711 kubelet[1422]: I0913 00:48:26.431692 1422 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:48:26.431793 kubelet[1422]: I0913 00:48:26.431720 1422 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:48:26.432224 kubelet[1422]: I0913 00:48:26.432140 1422 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:48:26.432849 kubelet[1422]: E0913 00:48:26.432828 1422 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:48:26.432908 kubelet[1422]: E0913 00:48:26.432872 1422 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.82\" not found" Sep 13 00:48:26.440930 kubelet[1422]: E0913 00:48:26.440895 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:26.493267 kubelet[1422]: I0913 00:48:26.493195 1422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:48:26.494354 kubelet[1422]: I0913 00:48:26.494318 1422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:48:26.494532 kubelet[1422]: I0913 00:48:26.494370 1422 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:48:26.494532 kubelet[1422]: I0913 00:48:26.494407 1422 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:48:26.494532 kubelet[1422]: I0913 00:48:26.494417 1422 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:48:26.494532 kubelet[1422]: E0913 00:48:26.494506 1422 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 13 00:48:26.533794 kubelet[1422]: I0913 00:48:26.533646 1422 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.82" Sep 13 00:48:26.538746 kubelet[1422]: I0913 00:48:26.538682 1422 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.82" Sep 13 00:48:26.538746 kubelet[1422]: E0913 00:48:26.538738 1422 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.82\": node \"10.0.0.82\" not found" Sep 13 00:48:26.549011 kubelet[1422]: E0913 00:48:26.548982 1422 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.82\" not found" Sep 13 00:48:26.650593 kubelet[1422]: I0913 00:48:26.650543 1422 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 13 00:48:26.650958 env[1203]: time="2025-09-13T00:48:26.650890619Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:48:26.651279 kubelet[1422]: I0913 00:48:26.651065 1422 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 13 00:48:26.932335 sudo[1317]: pam_unix(sudo:session): session closed for user root Sep 13 00:48:26.933946 sshd[1314]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:26.936536 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:55486.service: Deactivated successfully. Sep 13 00:48:26.937419 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:48:26.938069 systemd-logind[1195]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:48:26.938914 systemd-logind[1195]: Removed session 7. 
Sep 13 00:48:27.441202 kubelet[1422]: E0913 00:48:27.441142 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:27.441202 kubelet[1422]: I0913 00:48:27.441167 1422 apiserver.go:52] "Watching apiserver" Sep 13 00:48:27.449815 systemd[1]: Created slice kubepods-besteffort-pod89ba5705_c649_4204_ad6f_7b5533c9c190.slice. Sep 13 00:48:27.455354 kubelet[1422]: I0913 00:48:27.455319 1422 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:48:27.459201 systemd[1]: Created slice kubepods-burstable-pode2abb93a_1d17_4a46_8a68_4bd0b985cfc8.slice. Sep 13 00:48:27.464622 kubelet[1422]: I0913 00:48:27.464586 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-hubble-tls\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.464723 kubelet[1422]: I0913 00:48:27.464624 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89ba5705-c649-4204-ad6f-7b5533c9c190-xtables-lock\") pod \"kube-proxy-mv4bw\" (UID: \"89ba5705-c649-4204-ad6f-7b5533c9c190\") " pod="kube-system/kube-proxy-mv4bw" Sep 13 00:48:27.464723 kubelet[1422]: I0913 00:48:27.464645 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-bpf-maps\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.464723 kubelet[1422]: I0913 00:48:27.464672 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cilium-cgroup\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.464723 kubelet[1422]: I0913 00:48:27.464688 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-xtables-lock\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.464855 kubelet[1422]: I0913 00:48:27.464772 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmctb\" (UniqueName: \"kubernetes.io/projected/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-kube-api-access-dmctb\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.464855 kubelet[1422]: I0913 00:48:27.464821 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89ba5705-c649-4204-ad6f-7b5533c9c190-lib-modules\") pod \"kube-proxy-mv4bw\" (UID: \"89ba5705-c649-4204-ad6f-7b5533c9c190\") " pod="kube-system/kube-proxy-mv4bw" Sep 13 00:48:27.464917 kubelet[1422]: I0913 00:48:27.464855 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-hostproc\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.464917 kubelet[1422]: I0913 00:48:27.464901 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-lib-modules\") pod \"cilium-hf98k\" (UID: 
\"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.464987 kubelet[1422]: I0913 00:48:27.464952 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttr67\" (UniqueName: \"kubernetes.io/projected/89ba5705-c649-4204-ad6f-7b5533c9c190-kube-api-access-ttr67\") pod \"kube-proxy-mv4bw\" (UID: \"89ba5705-c649-4204-ad6f-7b5533c9c190\") " pod="kube-system/kube-proxy-mv4bw" Sep 13 00:48:27.465021 kubelet[1422]: I0913 00:48:27.464986 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-clustermesh-secrets\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.465050 kubelet[1422]: I0913 00:48:27.465022 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cilium-config-path\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.465050 kubelet[1422]: I0913 00:48:27.465043 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-host-proc-sys-net\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.465158 kubelet[1422]: I0913 00:48:27.465060 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-host-proc-sys-kernel\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " 
pod="kube-system/cilium-hf98k" Sep 13 00:48:27.465158 kubelet[1422]: I0913 00:48:27.465081 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/89ba5705-c649-4204-ad6f-7b5533c9c190-kube-proxy\") pod \"kube-proxy-mv4bw\" (UID: \"89ba5705-c649-4204-ad6f-7b5533c9c190\") " pod="kube-system/kube-proxy-mv4bw" Sep 13 00:48:27.465158 kubelet[1422]: I0913 00:48:27.465108 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cilium-run\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.465158 kubelet[1422]: I0913 00:48:27.465125 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cni-path\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.465290 kubelet[1422]: I0913 00:48:27.465163 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-etc-cni-netd\") pod \"cilium-hf98k\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") " pod="kube-system/cilium-hf98k" Sep 13 00:48:27.566360 kubelet[1422]: I0913 00:48:27.566261 1422 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:48:27.757711 kubelet[1422]: E0913 00:48:27.757678 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:27.758465 env[1203]: time="2025-09-13T00:48:27.758421425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mv4bw,Uid:89ba5705-c649-4204-ad6f-7b5533c9c190,Namespace:kube-system,Attempt:0,}" Sep 13 00:48:27.767421 kubelet[1422]: E0913 00:48:27.767399 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:27.767891 env[1203]: time="2025-09-13T00:48:27.767846760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hf98k,Uid:e2abb93a-1d17-4a46-8a68-4bd0b985cfc8,Namespace:kube-system,Attempt:0,}" Sep 13 00:48:28.407210 env[1203]: time="2025-09-13T00:48:28.407163678Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:28.410596 env[1203]: time="2025-09-13T00:48:28.410547073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:28.412515 env[1203]: time="2025-09-13T00:48:28.412462535Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:28.413713 env[1203]: time="2025-09-13T00:48:28.413683014Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:28.415091 env[1203]: time="2025-09-13T00:48:28.415037735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:28.416399 env[1203]: time="2025-09-13T00:48:28.416362830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:28.417701 env[1203]: time="2025-09-13T00:48:28.417672546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:28.419267 env[1203]: time="2025-09-13T00:48:28.419229536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:28.440293 env[1203]: time="2025-09-13T00:48:28.440196015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:28.440293 env[1203]: time="2025-09-13T00:48:28.440265856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:28.440293 env[1203]: time="2025-09-13T00:48:28.440281796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:28.440591 env[1203]: time="2025-09-13T00:48:28.440520834Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61 pid=1479 runtime=io.containerd.runc.v2 Sep 13 00:48:28.441417 kubelet[1422]: E0913 00:48:28.441360 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:28.444284 env[1203]: time="2025-09-13T00:48:28.444170469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:28.444284 env[1203]: time="2025-09-13T00:48:28.444246511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:28.444284 env[1203]: time="2025-09-13T00:48:28.444261409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:28.444577 env[1203]: time="2025-09-13T00:48:28.444532217Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be6d4c6df3abb40bb5cc728ef2f7ebc966cbb731bcee01da0a4e2563e98930a4 pid=1496 runtime=io.containerd.runc.v2 Sep 13 00:48:28.455008 systemd[1]: Started cri-containerd-ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61.scope. Sep 13 00:48:28.462325 systemd[1]: Started cri-containerd-be6d4c6df3abb40bb5cc728ef2f7ebc966cbb731bcee01da0a4e2563e98930a4.scope. 
Sep 13 00:48:28.480522 env[1203]: time="2025-09-13T00:48:28.480453543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hf98k,Uid:e2abb93a-1d17-4a46-8a68-4bd0b985cfc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\"" Sep 13 00:48:28.481789 kubelet[1422]: E0913 00:48:28.481749 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:28.484050 env[1203]: time="2025-09-13T00:48:28.484018308Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:48:28.484593 env[1203]: time="2025-09-13T00:48:28.484562208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mv4bw,Uid:89ba5705-c649-4204-ad6f-7b5533c9c190,Namespace:kube-system,Attempt:0,} returns sandbox id \"be6d4c6df3abb40bb5cc728ef2f7ebc966cbb731bcee01da0a4e2563e98930a4\"" Sep 13 00:48:28.485214 kubelet[1422]: E0913 00:48:28.485194 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:28.574501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660120055.mount: Deactivated successfully. 
Sep 13 00:48:29.442265 kubelet[1422]: E0913 00:48:29.442224 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:30.443353 kubelet[1422]: E0913 00:48:30.443316 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:31.444252 kubelet[1422]: E0913 00:48:31.444186 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:32.444999 kubelet[1422]: E0913 00:48:32.444941 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:33.445831 kubelet[1422]: E0913 00:48:33.445754 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:33.898831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116989484.mount: Deactivated successfully. 
Sep 13 00:48:34.446653 kubelet[1422]: E0913 00:48:34.446565 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:35.446694 kubelet[1422]: E0913 00:48:35.446645 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:36.447608 kubelet[1422]: E0913 00:48:36.447535 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:37.448244 kubelet[1422]: E0913 00:48:37.448187 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:38.431720 env[1203]: time="2025-09-13T00:48:38.431658364Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:38.434233 env[1203]: time="2025-09-13T00:48:38.434177388Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:38.435848 env[1203]: time="2025-09-13T00:48:38.435815931Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:38.436307 env[1203]: time="2025-09-13T00:48:38.436266987Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:48:38.437510 env[1203]: time="2025-09-13T00:48:38.437446529Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 00:48:38.439324 env[1203]: time="2025-09-13T00:48:38.439285938Z" level=info msg="CreateContainer within sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:48:38.450596 kubelet[1422]: E0913 00:48:38.450560 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:38.456159 env[1203]: time="2025-09-13T00:48:38.456106443Z" level=info msg="CreateContainer within sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea\"" Sep 13 00:48:38.456698 env[1203]: time="2025-09-13T00:48:38.456666282Z" level=info msg="StartContainer for \"fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea\"" Sep 13 00:48:38.478569 systemd[1]: Started cri-containerd-fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea.scope. Sep 13 00:48:38.610072 env[1203]: time="2025-09-13T00:48:38.610009125Z" level=info msg="StartContainer for \"fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea\" returns successfully" Sep 13 00:48:38.626621 systemd[1]: cri-containerd-fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea.scope: Deactivated successfully. Sep 13 00:48:39.450042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea-rootfs.mount: Deactivated successfully. 
Sep 13 00:48:39.451650 kubelet[1422]: E0913 00:48:39.451615 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:39.496450 env[1203]: time="2025-09-13T00:48:39.496396878Z" level=info msg="shim disconnected" id=fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea Sep 13 00:48:39.496988 env[1203]: time="2025-09-13T00:48:39.496923336Z" level=warning msg="cleaning up after shim disconnected" id=fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea namespace=k8s.io Sep 13 00:48:39.496988 env[1203]: time="2025-09-13T00:48:39.496952531Z" level=info msg="cleaning up dead shim" Sep 13 00:48:39.509049 env[1203]: time="2025-09-13T00:48:39.508982391Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1606 runtime=io.containerd.runc.v2\n" Sep 13 00:48:39.537918 kubelet[1422]: E0913 00:48:39.537868 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:39.539938 env[1203]: time="2025-09-13T00:48:39.539868113Z" level=info msg="CreateContainer within sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:48:39.554441 env[1203]: time="2025-09-13T00:48:39.554376341Z" level=info msg="CreateContainer within sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c\"" Sep 13 00:48:39.554990 env[1203]: time="2025-09-13T00:48:39.554948894Z" level=info msg="StartContainer for \"afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c\"" Sep 13 00:48:39.572408 systemd[1]: Started 
cri-containerd-afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c.scope. Sep 13 00:48:39.625202 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:48:39.625417 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:48:39.626587 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:48:39.628247 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:48:39.630786 systemd[1]: cri-containerd-afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c.scope: Deactivated successfully. Sep 13 00:48:39.635635 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:48:39.640712 env[1203]: time="2025-09-13T00:48:39.640651978Z" level=info msg="StartContainer for \"afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c\" returns successfully" Sep 13 00:48:39.665070 env[1203]: time="2025-09-13T00:48:39.665007984Z" level=info msg="shim disconnected" id=afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c Sep 13 00:48:39.665070 env[1203]: time="2025-09-13T00:48:39.665062958Z" level=warning msg="cleaning up after shim disconnected" id=afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c namespace=k8s.io Sep 13 00:48:39.665070 env[1203]: time="2025-09-13T00:48:39.665072185Z" level=info msg="cleaning up dead shim" Sep 13 00:48:39.674013 env[1203]: time="2025-09-13T00:48:39.673970022Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1668 runtime=io.containerd.runc.v2\n" Sep 13 00:48:40.451434 systemd[1]: run-containerd-runc-k8s.io-afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c-runc.OPml0W.mount: Deactivated successfully. Sep 13 00:48:40.451553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c-rootfs.mount: Deactivated successfully. 
Sep 13 00:48:40.452208 kubelet[1422]: E0913 00:48:40.452107 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:40.508381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3516144354.mount: Deactivated successfully. Sep 13 00:48:40.540567 kubelet[1422]: E0913 00:48:40.540532 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:40.542051 env[1203]: time="2025-09-13T00:48:40.542001019Z" level=info msg="CreateContainer within sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:48:40.559110 env[1203]: time="2025-09-13T00:48:40.559059781Z" level=info msg="CreateContainer within sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8\"" Sep 13 00:48:40.559523 env[1203]: time="2025-09-13T00:48:40.559500197Z" level=info msg="StartContainer for \"5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8\"" Sep 13 00:48:40.596495 systemd[1]: Started cri-containerd-5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8.scope. Sep 13 00:48:40.643272 env[1203]: time="2025-09-13T00:48:40.643139109Z" level=info msg="StartContainer for \"5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8\" returns successfully" Sep 13 00:48:40.644459 systemd[1]: cri-containerd-5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8.scope: Deactivated successfully. 
Sep 13 00:48:40.867392 env[1203]: time="2025-09-13T00:48:40.867249296Z" level=info msg="shim disconnected" id=5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8 Sep 13 00:48:40.867392 env[1203]: time="2025-09-13T00:48:40.867312805Z" level=warning msg="cleaning up after shim disconnected" id=5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8 namespace=k8s.io Sep 13 00:48:40.867392 env[1203]: time="2025-09-13T00:48:40.867325489Z" level=info msg="cleaning up dead shim" Sep 13 00:48:40.889619 env[1203]: time="2025-09-13T00:48:40.889559045Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1724 runtime=io.containerd.runc.v2\n" Sep 13 00:48:41.450000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8-rootfs.mount: Deactivated successfully. Sep 13 00:48:41.452577 kubelet[1422]: E0913 00:48:41.452534 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:41.478940 env[1203]: time="2025-09-13T00:48:41.478885021Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:41.482081 env[1203]: time="2025-09-13T00:48:41.482044847Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:41.483531 env[1203]: time="2025-09-13T00:48:41.483500798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:41.484975 env[1203]: time="2025-09-13T00:48:41.484930890Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:48:41.485371 env[1203]: time="2025-09-13T00:48:41.485333825Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 00:48:41.487204 env[1203]: time="2025-09-13T00:48:41.487168195Z" level=info msg="CreateContainer within sandbox \"be6d4c6df3abb40bb5cc728ef2f7ebc966cbb731bcee01da0a4e2563e98930a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:48:41.502237 env[1203]: time="2025-09-13T00:48:41.502183384Z" level=info msg="CreateContainer within sandbox \"be6d4c6df3abb40bb5cc728ef2f7ebc966cbb731bcee01da0a4e2563e98930a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"186d921e46124019c62417259979a2ce9175a2cc2bc12a5cc58ccdbef99f2752\"" Sep 13 00:48:41.502702 env[1203]: time="2025-09-13T00:48:41.502669095Z" level=info msg="StartContainer for \"186d921e46124019c62417259979a2ce9175a2cc2bc12a5cc58ccdbef99f2752\"" Sep 13 00:48:41.521769 systemd[1]: Started cri-containerd-186d921e46124019c62417259979a2ce9175a2cc2bc12a5cc58ccdbef99f2752.scope. 
Sep 13 00:48:41.545128 kubelet[1422]: E0913 00:48:41.545057 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:41.547962 env[1203]: time="2025-09-13T00:48:41.547821993Z" level=info msg="CreateContainer within sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:48:41.576981 env[1203]: time="2025-09-13T00:48:41.576932315Z" level=info msg="CreateContainer within sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61\"" Sep 13 00:48:41.577692 env[1203]: time="2025-09-13T00:48:41.577664388Z" level=info msg="StartContainer for \"dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61\"" Sep 13 00:48:41.607961 systemd[1]: Started cri-containerd-dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61.scope. Sep 13 00:48:41.635761 systemd[1]: cri-containerd-dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61.scope: Deactivated successfully. 
Sep 13 00:48:41.749474 env[1203]: time="2025-09-13T00:48:41.749358576Z" level=info msg="StartContainer for \"186d921e46124019c62417259979a2ce9175a2cc2bc12a5cc58ccdbef99f2752\" returns successfully" Sep 13 00:48:41.751941 env[1203]: time="2025-09-13T00:48:41.751890795Z" level=info msg="StartContainer for \"dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61\" returns successfully" Sep 13 00:48:41.966324 env[1203]: time="2025-09-13T00:48:41.966270754Z" level=info msg="shim disconnected" id=dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61 Sep 13 00:48:41.966324 env[1203]: time="2025-09-13T00:48:41.966312161Z" level=warning msg="cleaning up after shim disconnected" id=dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61 namespace=k8s.io Sep 13 00:48:41.966324 env[1203]: time="2025-09-13T00:48:41.966320587Z" level=info msg="cleaning up dead shim" Sep 13 00:48:41.974665 env[1203]: time="2025-09-13T00:48:41.974622666Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1887 runtime=io.containerd.runc.v2\n" Sep 13 00:48:42.453421 kubelet[1422]: E0913 00:48:42.453375 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:42.547384 kubelet[1422]: E0913 00:48:42.547358 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:42.549417 kubelet[1422]: E0913 00:48:42.549381 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:42.551282 env[1203]: time="2025-09-13T00:48:42.551233758Z" level=info msg="CreateContainer within sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" for container 
&ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:48:42.555143 kubelet[1422]: I0913 00:48:42.555085 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mv4bw" podStartSLOduration=3.554656537 podStartE2EDuration="16.5550582s" podCreationTimestamp="2025-09-13 00:48:26 +0000 UTC" firstStartedPulling="2025-09-13 00:48:28.485708538 +0000 UTC m=+3.456659692" lastFinishedPulling="2025-09-13 00:48:41.486110201 +0000 UTC m=+16.457061355" observedRunningTime="2025-09-13 00:48:42.554942734 +0000 UTC m=+17.525893878" watchObservedRunningTime="2025-09-13 00:48:42.5550582 +0000 UTC m=+17.526009354" Sep 13 00:48:42.567170 env[1203]: time="2025-09-13T00:48:42.567124850Z" level=info msg="CreateContainer within sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\"" Sep 13 00:48:42.567667 env[1203]: time="2025-09-13T00:48:42.567615830Z" level=info msg="StartContainer for \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\"" Sep 13 00:48:42.584222 systemd[1]: Started cri-containerd-5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a.scope. Sep 13 00:48:42.629728 env[1203]: time="2025-09-13T00:48:42.629678139Z" level=info msg="StartContainer for \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\" returns successfully" Sep 13 00:48:42.735273 kubelet[1422]: I0913 00:48:42.735141 1422 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:48:42.960519 kernel: Initializing XFRM netlink socket Sep 13 00:48:43.450110 systemd[1]: run-containerd-runc-k8s.io-5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a-runc.ic11tg.mount: Deactivated successfully. 
Sep 13 00:48:43.453722 kubelet[1422]: E0913 00:48:43.453674 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:43.553400 kubelet[1422]: E0913 00:48:43.553370 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:43.553664 kubelet[1422]: E0913 00:48:43.553638 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:43.565923 kubelet[1422]: I0913 00:48:43.565865 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hf98k" podStartSLOduration=7.6121045 podStartE2EDuration="17.565851649s" podCreationTimestamp="2025-09-13 00:48:26 +0000 UTC" firstStartedPulling="2025-09-13 00:48:28.483461003 +0000 UTC m=+3.454412157" lastFinishedPulling="2025-09-13 00:48:38.437208152 +0000 UTC m=+13.408159306" observedRunningTime="2025-09-13 00:48:43.565797096 +0000 UTC m=+18.536748281" watchObservedRunningTime="2025-09-13 00:48:43.565851649 +0000 UTC m=+18.536802803" Sep 13 00:48:44.454059 kubelet[1422]: E0913 00:48:44.453993 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:44.554438 kubelet[1422]: E0913 00:48:44.554401 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:44.990190 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 00:48:44.990303 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:48:44.992037 systemd-networkd[1029]: cilium_host: Link UP Sep 13 00:48:44.992165 systemd-networkd[1029]: cilium_net: Link UP Sep 
13 00:48:44.992297 systemd-networkd[1029]: cilium_net: Gained carrier Sep 13 00:48:44.992420 systemd-networkd[1029]: cilium_host: Gained carrier Sep 13 00:48:45.061615 systemd-networkd[1029]: cilium_vxlan: Link UP Sep 13 00:48:45.061625 systemd-networkd[1029]: cilium_vxlan: Gained carrier Sep 13 00:48:45.082602 systemd-networkd[1029]: cilium_net: Gained IPv6LL Sep 13 00:48:45.440095 kubelet[1422]: E0913 00:48:45.439987 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:45.454641 kubelet[1422]: E0913 00:48:45.454583 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:45.489535 kernel: NET: Registered PF_ALG protocol family Sep 13 00:48:45.555403 kubelet[1422]: E0913 00:48:45.555365 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:46.034621 systemd-networkd[1029]: cilium_host: Gained IPv6LL Sep 13 00:48:46.198049 systemd-networkd[1029]: lxc_health: Link UP Sep 13 00:48:46.218523 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:48:46.213797 systemd-networkd[1029]: lxc_health: Gained carrier Sep 13 00:48:46.455705 kubelet[1422]: E0913 00:48:46.455670 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:46.995727 systemd-networkd[1029]: cilium_vxlan: Gained IPv6LL Sep 13 00:48:47.456857 kubelet[1422]: E0913 00:48:47.456782 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:47.701723 systemd-networkd[1029]: lxc_health: Gained IPv6LL Sep 13 00:48:47.769686 kubelet[1422]: E0913 00:48:47.769548 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:48.005358 systemd[1]: Created slice kubepods-besteffort-podc2680fb9_8b4b_40bc_ad29_ff201d81f38f.slice. Sep 13 00:48:48.027343 kubelet[1422]: I0913 00:48:48.027183 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml8ws\" (UniqueName: \"kubernetes.io/projected/c2680fb9-8b4b-40bc-ad29-ff201d81f38f-kube-api-access-ml8ws\") pod \"nginx-deployment-7fcdb87857-4rsr6\" (UID: \"c2680fb9-8b4b-40bc-ad29-ff201d81f38f\") " pod="default/nginx-deployment-7fcdb87857-4rsr6" Sep 13 00:48:48.308833 env[1203]: time="2025-09-13T00:48:48.308701799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4rsr6,Uid:c2680fb9-8b4b-40bc-ad29-ff201d81f38f,Namespace:default,Attempt:0,}" Sep 13 00:48:48.381984 systemd-networkd[1029]: lxceb7bfef19b26: Link UP Sep 13 00:48:48.391644 kernel: eth0: renamed from tmpe864e Sep 13 00:48:48.399111 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:48:48.399162 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceb7bfef19b26: link becomes ready Sep 13 00:48:48.399828 systemd-networkd[1029]: lxceb7bfef19b26: Gained carrier Sep 13 00:48:48.457178 kubelet[1422]: E0913 00:48:48.457118 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:48.560699 kubelet[1422]: E0913 00:48:48.560563 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:49.457834 kubelet[1422]: E0913 00:48:49.457777 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:49.562211 kubelet[1422]: E0913 00:48:49.562180 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:48:50.066776 systemd-networkd[1029]: lxceb7bfef19b26: Gained IPv6LL Sep 13 00:48:50.458843 kubelet[1422]: E0913 00:48:50.458788 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:50.463250 env[1203]: time="2025-09-13T00:48:50.463173670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:50.463250 env[1203]: time="2025-09-13T00:48:50.463213395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:50.463250 env[1203]: time="2025-09-13T00:48:50.463223695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:50.463561 env[1203]: time="2025-09-13T00:48:50.463347953Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e864e07d3c59bb353d913b6e17ee05b9a8c68b4f0467590663c4106377a014f3 pid=2496 runtime=io.containerd.runc.v2 Sep 13 00:48:50.477197 systemd[1]: run-containerd-runc-k8s.io-e864e07d3c59bb353d913b6e17ee05b9a8c68b4f0467590663c4106377a014f3-runc.5E1K62.mount: Deactivated successfully. Sep 13 00:48:50.478656 systemd[1]: Started cri-containerd-e864e07d3c59bb353d913b6e17ee05b9a8c68b4f0467590663c4106377a014f3.scope. 
Sep 13 00:48:50.491571 systemd-resolved[1146]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:48:50.512252 env[1203]: time="2025-09-13T00:48:50.512201012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4rsr6,Uid:c2680fb9-8b4b-40bc-ad29-ff201d81f38f,Namespace:default,Attempt:0,} returns sandbox id \"e864e07d3c59bb353d913b6e17ee05b9a8c68b4f0467590663c4106377a014f3\"" Sep 13 00:48:50.513357 env[1203]: time="2025-09-13T00:48:50.513320633Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 13 00:48:51.459246 kubelet[1422]: E0913 00:48:51.459160 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:52.460043 kubelet[1422]: E0913 00:48:52.459966 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:48:53.302555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount845656516.mount: Deactivated successfully. 
Sep 13 00:48:53.460179 kubelet[1422]: E0913 00:48:53.460115 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:48:54.460902 kubelet[1422]: E0913 00:48:54.460839 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:48:55.137175 env[1203]: time="2025-09-13T00:48:55.137097361Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:55.139228 env[1203]: time="2025-09-13T00:48:55.138958951Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:55.142755 env[1203]: time="2025-09-13T00:48:55.142703292Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:55.144918 env[1203]: time="2025-09-13T00:48:55.144866656Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:55.145614 env[1203]: time="2025-09-13T00:48:55.145576224Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\""
Sep 13 00:48:55.147771 env[1203]: time="2025-09-13T00:48:55.147737985Z" level=info msg="CreateContainer within sandbox \"e864e07d3c59bb353d913b6e17ee05b9a8c68b4f0467590663c4106377a014f3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Sep 13 00:48:55.162047 env[1203]: time="2025-09-13T00:48:55.161994496Z" level=info msg="CreateContainer within sandbox \"e864e07d3c59bb353d913b6e17ee05b9a8c68b4f0467590663c4106377a014f3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c04ff9309bbebf69f11a18390cd8d156b28383cc1c4c0c3d2f64951a13c106d7\""
Sep 13 00:48:55.162720 env[1203]: time="2025-09-13T00:48:55.162689498Z" level=info msg="StartContainer for \"c04ff9309bbebf69f11a18390cd8d156b28383cc1c4c0c3d2f64951a13c106d7\""
Sep 13 00:48:55.188236 systemd[1]: Started cri-containerd-c04ff9309bbebf69f11a18390cd8d156b28383cc1c4c0c3d2f64951a13c106d7.scope.
Sep 13 00:48:55.338340 env[1203]: time="2025-09-13T00:48:55.338268445Z" level=info msg="StartContainer for \"c04ff9309bbebf69f11a18390cd8d156b28383cc1c4c0c3d2f64951a13c106d7\" returns successfully"
Sep 13 00:48:55.461636 kubelet[1422]: E0913 00:48:55.461590 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:48:55.582744 kubelet[1422]: I0913 00:48:55.582659 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-4rsr6" podStartSLOduration=3.948997835 podStartE2EDuration="8.582639206s" podCreationTimestamp="2025-09-13 00:48:47 +0000 UTC" firstStartedPulling="2025-09-13 00:48:50.512955163 +0000 UTC m=+25.483906318" lastFinishedPulling="2025-09-13 00:48:55.146596535 +0000 UTC m=+30.117547689" observedRunningTime="2025-09-13 00:48:55.582159514 +0000 UTC m=+30.553110668" watchObservedRunningTime="2025-09-13 00:48:55.582639206 +0000 UTC m=+30.553590360"
Sep 13 00:48:56.462184 kubelet[1422]: E0913 00:48:56.462094 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:48:56.789716 update_engine[1196]: I0913 00:48:56.789526 1196 update_attempter.cc:509] Updating boot flags...
Sep 13 00:48:57.463319 kubelet[1422]: E0913 00:48:57.463251 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:48:58.464352 kubelet[1422]: E0913 00:48:58.464285 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:48:59.366141 systemd[1]: Created slice kubepods-besteffort-pod573f52a4_a35b_41e9_8b3f_3988befbf98d.slice.
Sep 13 00:48:59.465459 kubelet[1422]: E0913 00:48:59.465387 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:48:59.483925 kubelet[1422]: I0913 00:48:59.483850 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/573f52a4-a35b-41e9-8b3f-3988befbf98d-data\") pod \"nfs-server-provisioner-0\" (UID: \"573f52a4-a35b-41e9-8b3f-3988befbf98d\") " pod="default/nfs-server-provisioner-0"
Sep 13 00:48:59.483925 kubelet[1422]: I0913 00:48:59.483918 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq47f\" (UniqueName: \"kubernetes.io/projected/573f52a4-a35b-41e9-8b3f-3988befbf98d-kube-api-access-rq47f\") pod \"nfs-server-provisioner-0\" (UID: \"573f52a4-a35b-41e9-8b3f-3988befbf98d\") " pod="default/nfs-server-provisioner-0"
Sep 13 00:48:59.669871 env[1203]: time="2025-09-13T00:48:59.669706265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:573f52a4-a35b-41e9-8b3f-3988befbf98d,Namespace:default,Attempt:0,}"
Sep 13 00:48:59.708653 systemd-networkd[1029]: lxc73723b930282: Link UP
Sep 13 00:48:59.714519 kernel: eth0: renamed from tmp5b30c
Sep 13 00:48:59.721781 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 13 00:48:59.721899 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc73723b930282: link becomes ready
Sep 13 00:48:59.722398 systemd-networkd[1029]: lxc73723b930282: Gained carrier
Sep 13 00:49:00.269748 env[1203]: time="2025-09-13T00:49:00.269636615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:49:00.269748 env[1203]: time="2025-09-13T00:49:00.269678635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:49:00.269748 env[1203]: time="2025-09-13T00:49:00.269688413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:49:00.269990 env[1203]: time="2025-09-13T00:49:00.269814082Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b30c62907bfab9258a3e9e6d81f74580162761018c042564afa26b4344200f4 pid=2635 runtime=io.containerd.runc.v2
Sep 13 00:49:00.290256 systemd[1]: Started cri-containerd-5b30c62907bfab9258a3e9e6d81f74580162761018c042564afa26b4344200f4.scope.
Sep 13 00:49:00.307708 systemd-resolved[1146]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:49:00.335992 env[1203]: time="2025-09-13T00:49:00.335943287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:573f52a4-a35b-41e9-8b3f-3988befbf98d,Namespace:default,Attempt:0,} returns sandbox id \"5b30c62907bfab9258a3e9e6d81f74580162761018c042564afa26b4344200f4\""
Sep 13 00:49:00.337626 env[1203]: time="2025-09-13T00:49:00.337582011Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Sep 13 00:49:00.465770 kubelet[1422]: E0913 00:49:00.465695 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:01.331717 systemd-networkd[1029]: lxc73723b930282: Gained IPv6LL
Sep 13 00:49:01.466873 kubelet[1422]: E0913 00:49:01.466790 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:02.467266 kubelet[1422]: E0913 00:49:02.467183 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:03.207100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366587102.mount: Deactivated successfully.
Sep 13 00:49:03.467853 kubelet[1422]: E0913 00:49:03.467699 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:04.468440 kubelet[1422]: E0913 00:49:04.468366 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:05.440195 kubelet[1422]: E0913 00:49:05.440127 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:05.468826 kubelet[1422]: E0913 00:49:05.468762 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:05.532621 env[1203]: time="2025-09-13T00:49:05.532550757Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:05.535647 env[1203]: time="2025-09-13T00:49:05.535609064Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:05.540504 env[1203]: time="2025-09-13T00:49:05.540452384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:05.542879 env[1203]: time="2025-09-13T00:49:05.542830838Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:05.543646 env[1203]: time="2025-09-13T00:49:05.543611352Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Sep 13 00:49:05.546690 env[1203]: time="2025-09-13T00:49:05.546652397Z" level=info msg="CreateContainer within sandbox \"5b30c62907bfab9258a3e9e6d81f74580162761018c042564afa26b4344200f4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Sep 13 00:49:05.562056 env[1203]: time="2025-09-13T00:49:05.561979231Z" level=info msg="CreateContainer within sandbox \"5b30c62907bfab9258a3e9e6d81f74580162761018c042564afa26b4344200f4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ee5c5fd6bef399550477575fc3cd7d0e116760e305738c9926c76eb1e7b74839\""
Sep 13 00:49:05.562717 env[1203]: time="2025-09-13T00:49:05.562672020Z" level=info msg="StartContainer for \"ee5c5fd6bef399550477575fc3cd7d0e116760e305738c9926c76eb1e7b74839\""
Sep 13 00:49:05.580458 systemd[1]: Started cri-containerd-ee5c5fd6bef399550477575fc3cd7d0e116760e305738c9926c76eb1e7b74839.scope.
Sep 13 00:49:05.666402 env[1203]: time="2025-09-13T00:49:05.666289307Z" level=info msg="StartContainer for \"ee5c5fd6bef399550477575fc3cd7d0e116760e305738c9926c76eb1e7b74839\" returns successfully"
Sep 13 00:49:06.469941 kubelet[1422]: E0913 00:49:06.469865 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:06.609797 kubelet[1422]: I0913 00:49:06.609709 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.401734406 podStartE2EDuration="7.609687533s" podCreationTimestamp="2025-09-13 00:48:59 +0000 UTC" firstStartedPulling="2025-09-13 00:49:00.337213603 +0000 UTC m=+35.308164757" lastFinishedPulling="2025-09-13 00:49:05.54516673 +0000 UTC m=+40.516117884" observedRunningTime="2025-09-13 00:49:06.609460234 +0000 UTC m=+41.580411388" watchObservedRunningTime="2025-09-13 00:49:06.609687533 +0000 UTC m=+41.580638677"
Sep 13 00:49:07.470850 kubelet[1422]: E0913 00:49:07.470717 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:08.471348 kubelet[1422]: E0913 00:49:08.471242 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:09.472359 kubelet[1422]: E0913 00:49:09.472273 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:10.472936 kubelet[1422]: E0913 00:49:10.472856 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:11.473650 kubelet[1422]: E0913 00:49:11.473591 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:12.474078 kubelet[1422]: E0913 00:49:12.474012 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:13.474453 kubelet[1422]: E0913 00:49:13.474388 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:14.475354 kubelet[1422]: E0913 00:49:14.475290 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:15.476092 kubelet[1422]: E0913 00:49:15.476012 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:16.476756 kubelet[1422]: E0913 00:49:16.476688 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:17.229821 systemd[1]: Created slice kubepods-besteffort-podf361eb28_0a58_4fb7_a005_23996df76d7f.slice.
Sep 13 00:49:17.295575 kubelet[1422]: I0913 00:49:17.295516 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0eb294ca-4914-482d-a087-33663092c634\" (UniqueName: \"kubernetes.io/nfs/f361eb28-0a58-4fb7-a005-23996df76d7f-pvc-0eb294ca-4914-482d-a087-33663092c634\") pod \"test-pod-1\" (UID: \"f361eb28-0a58-4fb7-a005-23996df76d7f\") " pod="default/test-pod-1"
Sep 13 00:49:17.295575 kubelet[1422]: I0913 00:49:17.295572 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fprtr\" (UniqueName: \"kubernetes.io/projected/f361eb28-0a58-4fb7-a005-23996df76d7f-kube-api-access-fprtr\") pod \"test-pod-1\" (UID: \"f361eb28-0a58-4fb7-a005-23996df76d7f\") " pod="default/test-pod-1"
Sep 13 00:49:17.419530 kernel: FS-Cache: Loaded
Sep 13 00:49:17.459108 kernel: RPC: Registered named UNIX socket transport module.
Sep 13 00:49:17.459241 kernel: RPC: Registered udp transport module.
Sep 13 00:49:17.459283 kernel: RPC: Registered tcp transport module.
Sep 13 00:49:17.459866 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Sep 13 00:49:17.477131 kubelet[1422]: E0913 00:49:17.477085 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:17.516526 kernel: FS-Cache: Netfs 'nfs' registered for caching
Sep 13 00:49:17.693581 kernel: NFS: Registering the id_resolver key type
Sep 13 00:49:17.693748 kernel: Key type id_resolver registered
Sep 13 00:49:17.693777 kernel: Key type id_legacy registered
Sep 13 00:49:17.716478 nfsidmap[2753]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Sep 13 00:49:17.719332 nfsidmap[2756]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Sep 13 00:49:17.832957 env[1203]: time="2025-09-13T00:49:17.832909883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f361eb28-0a58-4fb7-a005-23996df76d7f,Namespace:default,Attempt:0,}"
Sep 13 00:49:17.868946 systemd-networkd[1029]: lxcc9ca8675a793: Link UP
Sep 13 00:49:17.879532 kernel: eth0: renamed from tmp8a220
Sep 13 00:49:17.887937 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 13 00:49:17.887987 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc9ca8675a793: link becomes ready
Sep 13 00:49:17.888114 systemd-networkd[1029]: lxcc9ca8675a793: Gained carrier
Sep 13 00:49:18.063605 env[1203]: time="2025-09-13T00:49:18.063478954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:49:18.063605 env[1203]: time="2025-09-13T00:49:18.063548715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:49:18.063605 env[1203]: time="2025-09-13T00:49:18.063561980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:49:18.063824 env[1203]: time="2025-09-13T00:49:18.063708966Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a2207563464350b441c4d9b6d7f90523b400c06caf7930f10e72bafe448a2a5 pid=2791 runtime=io.containerd.runc.v2
Sep 13 00:49:18.075025 systemd[1]: Started cri-containerd-8a2207563464350b441c4d9b6d7f90523b400c06caf7930f10e72bafe448a2a5.scope.
Sep 13 00:49:18.089264 systemd-resolved[1146]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:49:18.111829 env[1203]: time="2025-09-13T00:49:18.111773691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f361eb28-0a58-4fb7-a005-23996df76d7f,Namespace:default,Attempt:0,} returns sandbox id \"8a2207563464350b441c4d9b6d7f90523b400c06caf7930f10e72bafe448a2a5\""
Sep 13 00:49:18.113153 env[1203]: time="2025-09-13T00:49:18.113124603Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Sep 13 00:49:18.478003 kubelet[1422]: E0913 00:49:18.477941 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:18.485655 env[1203]: time="2025-09-13T00:49:18.485599004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:18.487476 env[1203]: time="2025-09-13T00:49:18.487431391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:18.489052 env[1203]: time="2025-09-13T00:49:18.489022354Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:18.490987 env[1203]: time="2025-09-13T00:49:18.490955792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:49:18.491641 env[1203]: time="2025-09-13T00:49:18.491614962Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\""
Sep 13 00:49:18.494047 env[1203]: time="2025-09-13T00:49:18.494012071Z" level=info msg="CreateContainer within sandbox \"8a2207563464350b441c4d9b6d7f90523b400c06caf7930f10e72bafe448a2a5\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Sep 13 00:49:18.507878 env[1203]: time="2025-09-13T00:49:18.507829045Z" level=info msg="CreateContainer within sandbox \"8a2207563464350b441c4d9b6d7f90523b400c06caf7930f10e72bafe448a2a5\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"8a3b27953c2278a9a706c5d2476273edbbfb609e148784bbb443d6e1afc33bfd\""
Sep 13 00:49:18.508291 env[1203]: time="2025-09-13T00:49:18.508237173Z" level=info msg="StartContainer for \"8a3b27953c2278a9a706c5d2476273edbbfb609e148784bbb443d6e1afc33bfd\""
Sep 13 00:49:18.528846 systemd[1]: Started cri-containerd-8a3b27953c2278a9a706c5d2476273edbbfb609e148784bbb443d6e1afc33bfd.scope.
Sep 13 00:49:18.554029 env[1203]: time="2025-09-13T00:49:18.553969269Z" level=info msg="StartContainer for \"8a3b27953c2278a9a706c5d2476273edbbfb609e148784bbb443d6e1afc33bfd\" returns successfully"
Sep 13 00:49:18.629832 kubelet[1422]: I0913 00:49:18.629750 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.249814008 podStartE2EDuration="19.629724419s" podCreationTimestamp="2025-09-13 00:48:59 +0000 UTC" firstStartedPulling="2025-09-13 00:49:18.112785655 +0000 UTC m=+53.083736809" lastFinishedPulling="2025-09-13 00:49:18.492696076 +0000 UTC m=+53.463647220" observedRunningTime="2025-09-13 00:49:18.629669235 +0000 UTC m=+53.600620389" watchObservedRunningTime="2025-09-13 00:49:18.629724419 +0000 UTC m=+53.600675593"
Sep 13 00:49:19.378713 systemd-networkd[1029]: lxcc9ca8675a793: Gained IPv6LL
Sep 13 00:49:19.406219 systemd[1]: run-containerd-runc-k8s.io-8a3b27953c2278a9a706c5d2476273edbbfb609e148784bbb443d6e1afc33bfd-runc.01rEX3.mount: Deactivated successfully.
Sep 13 00:49:19.478418 kubelet[1422]: E0913 00:49:19.478348 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:20.479183 kubelet[1422]: E0913 00:49:20.479072 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:21.479796 kubelet[1422]: E0913 00:49:21.479722 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 13 00:49:22.042766 systemd[1]: run-containerd-runc-k8s.io-5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a-runc.u80Hjp.mount: Deactivated successfully.
Sep 13 00:49:22.058364 env[1203]: time="2025-09-13T00:49:22.058290201Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:49:22.063386 env[1203]: time="2025-09-13T00:49:22.063347570Z" level=info msg="StopContainer for \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\" with timeout 2 (s)"
Sep 13 00:49:22.063629 env[1203]: time="2025-09-13T00:49:22.063606987Z" level=info msg="Stop container \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\" with signal terminated"
Sep 13 00:49:22.069910 systemd-networkd[1029]: lxc_health: Link DOWN
Sep 13 00:49:22.069919 systemd-networkd[1029]: lxc_health: Lost carrier
Sep 13 00:49:22.096990 systemd[1]: cri-containerd-5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a.scope: Deactivated successfully.
Sep 13 00:49:22.097312 systemd[1]: cri-containerd-5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a.scope: Consumed 6.751s CPU time.
Sep 13 00:49:22.115202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a-rootfs.mount: Deactivated successfully.
Sep 13 00:49:22.173480 env[1203]: time="2025-09-13T00:49:22.173399983Z" level=info msg="shim disconnected" id=5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a
Sep 13 00:49:22.173480 env[1203]: time="2025-09-13T00:49:22.173469444Z" level=warning msg="cleaning up after shim disconnected" id=5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a namespace=k8s.io
Sep 13 00:49:22.173480 env[1203]: time="2025-09-13T00:49:22.173498368Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:22.180173 env[1203]: time="2025-09-13T00:49:22.180118324Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2923 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:22.184698 env[1203]: time="2025-09-13T00:49:22.184654662Z" level=info msg="StopContainer for \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\" returns successfully"
Sep 13 00:49:22.185396 env[1203]: time="2025-09-13T00:49:22.185366239Z" level=info msg="StopPodSandbox for \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\""
Sep 13 00:49:22.185493 env[1203]: time="2025-09-13T00:49:22.185431" level=info msg="Container to stop \"5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:22.185493 env[1203]: time="2025-09-13T00:49:22.185457701" level=info msg="Container to stop \"dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:22.185493 env[1203]: time="2025-09-13T00:49:22.185473350" level=info msg="Container to stop \"fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:22.185614 env[1203]: time="2025-09-13T00:49:22.185501444" level=info msg="Container to stop \"afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:22.185614 env[1203]: time="2025-09-13T00:49:22.185511643" level=info msg="Container to stop \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:22.187749 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61-shm.mount: Deactivated successfully.
Sep 13 00:49:22.191665 systemd[1]: cri-containerd-ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61.scope: Deactivated successfully.
Sep 13 00:49:22.211337 env[1203]: time="2025-09-13T00:49:22.211258806Z" level=info msg="shim disconnected" id=ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61
Sep 13 00:49:22.211337 env[1203]: time="2025-09-13T00:49:22.211327075Z" level=warning msg="cleaning up after shim disconnected" id=ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61 namespace=k8s.io
Sep 13 00:49:22.211337 env[1203]: time="2025-09-13T00:49:22.211339628Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:22.218165 env[1203]: time="2025-09-13T00:49:22.218109926Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2954 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:22.218523 env[1203]: time="2025-09-13T00:49:22.218493147Z" level=info msg="TearDown network for sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" successfully"
Sep 13 00:49:22.218572 env[1203]: time="2025-09-13T00:49:22.218524646Z" level=info msg="StopPodSandbox for \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" returns successfully"
Sep 13 00:49:22.327730 kubelet[1422]: I0913 00:49:22.326337 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-host-proc-sys-kernel\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.327730 kubelet[1422]: I0913 00:49:22.326408 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-etc-cni-netd\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.327730 kubelet[1422]: I0913 00:49:22.326446 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-xtables-lock\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.327730 kubelet[1422]: I0913 00:49:22.326501 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmctb\" (UniqueName: \"kubernetes.io/projected/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-kube-api-access-dmctb\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.327730 kubelet[1422]: I0913 00:49:22.326523 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cilium-run\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.327730 kubelet[1422]: I0913 00:49:22.326541 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cni-path\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.328149 kubelet[1422]: I0913 00:49:22.326516 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:22.328149 kubelet[1422]: I0913 00:49:22.326567 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-lib-modules\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.328149 kubelet[1422]: I0913 00:49:22.326516 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:22.328149 kubelet[1422]: I0913 00:49:22.326597 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-clustermesh-secrets\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.328149 kubelet[1422]: I0913 00:49:22.326655 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cilium-config-path\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.328315 kubelet[1422]: I0913 00:49:22.326685 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-hostproc\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.328315 kubelet[1422]: I0913 00:49:22.326708 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-hubble-tls\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.328315 kubelet[1422]: I0913 00:49:22.326726 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cilium-cgroup\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.328315 kubelet[1422]: I0913 00:49:22.326741 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-bpf-maps\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.328315 kubelet[1422]: I0913 00:49:22.326755 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-host-proc-sys-net\") pod \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\" (UID: \"e2abb93a-1d17-4a46-8a68-4bd0b985cfc8\") "
Sep 13 00:49:22.328315 kubelet[1422]: I0913 00:49:22.326787 1422 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-host-proc-sys-kernel\") on node \"10.0.0.82\" DevicePath \"\""
Sep 13 00:49:22.328315 kubelet[1422]: I0913 00:49:22.326795 1422 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-etc-cni-netd\") on node \"10.0.0.82\" DevicePath \"\""
Sep 13 00:49:22.328580 kubelet[1422]: I0913 00:49:22.326823 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:22.328580 kubelet[1422]: I0913 00:49:22.327167 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cni-path" (OuterVolumeSpecName: "cni-path") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:22.328580 kubelet[1422]: I0913 00:49:22.327212 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:22.328580 kubelet[1422]: I0913 00:49:22.327238 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:22.328580 kubelet[1422]: I0913 00:49:22.327413 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:22.328807 kubelet[1422]: I0913 00:49:22.327461 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-hostproc" (OuterVolumeSpecName: "hostproc") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "hostproc".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:22.328807 kubelet[1422]: I0913 00:49:22.327521 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:22.328807 kubelet[1422]: I0913 00:49:22.327548 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:22.328807 kubelet[1422]: I0913 00:49:22.328682 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:49:22.330131 kubelet[1422]: I0913 00:49:22.330094 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-kube-api-access-dmctb" (OuterVolumeSpecName: "kube-api-access-dmctb") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "kube-api-access-dmctb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:49:22.330625 kubelet[1422]: I0913 00:49:22.330596 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:49:22.330833 kubelet[1422]: I0913 00:49:22.330804 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" (UID: "e2abb93a-1d17-4a46-8a68-4bd0b985cfc8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:49:22.429299 kubelet[1422]: I0913 00:49:22.427480 1422 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-host-proc-sys-net\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:22.429299 kubelet[1422]: I0913 00:49:22.427551 1422 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-xtables-lock\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:22.429299 kubelet[1422]: I0913 00:49:22.427565 1422 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmctb\" (UniqueName: \"kubernetes.io/projected/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-kube-api-access-dmctb\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:22.429299 kubelet[1422]: I0913 00:49:22.427577 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cilium-run\") on node 
\"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:22.429299 kubelet[1422]: I0913 00:49:22.427590 1422 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cni-path\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:22.429299 kubelet[1422]: I0913 00:49:22.427602 1422 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-lib-modules\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:22.429299 kubelet[1422]: I0913 00:49:22.427615 1422 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-clustermesh-secrets\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:22.429299 kubelet[1422]: I0913 00:49:22.427626 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cilium-config-path\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:22.429755 kubelet[1422]: I0913 00:49:22.427638 1422 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-hostproc\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:22.429755 kubelet[1422]: I0913 00:49:22.427650 1422 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-hubble-tls\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:22.429755 kubelet[1422]: I0913 00:49:22.427669 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-cilium-cgroup\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:22.429755 kubelet[1422]: I0913 00:49:22.427680 1422 reconciler_common.go:299] 
"Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8-bpf-maps\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:22.480779 kubelet[1422]: E0913 00:49:22.480700 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:22.500466 systemd[1]: Removed slice kubepods-burstable-pode2abb93a_1d17_4a46_8a68_4bd0b985cfc8.slice. Sep 13 00:49:22.500598 systemd[1]: kubepods-burstable-pode2abb93a_1d17_4a46_8a68_4bd0b985cfc8.slice: Consumed 7.017s CPU time. Sep 13 00:49:22.633523 kubelet[1422]: I0913 00:49:22.632597 1422 scope.go:117] "RemoveContainer" containerID="5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a" Sep 13 00:49:22.634082 env[1203]: time="2025-09-13T00:49:22.634038714Z" level=info msg="RemoveContainer for \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\"" Sep 13 00:49:22.650661 env[1203]: time="2025-09-13T00:49:22.650587435Z" level=info msg="RemoveContainer for \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\" returns successfully" Sep 13 00:49:22.650954 kubelet[1422]: I0913 00:49:22.650926 1422 scope.go:117] "RemoveContainer" containerID="dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61" Sep 13 00:49:22.654967 env[1203]: time="2025-09-13T00:49:22.654923587Z" level=info msg="RemoveContainer for \"dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61\"" Sep 13 00:49:22.660191 env[1203]: time="2025-09-13T00:49:22.660152327Z" level=info msg="RemoveContainer for \"dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61\" returns successfully" Sep 13 00:49:22.660354 kubelet[1422]: I0913 00:49:22.660311 1422 scope.go:117] "RemoveContainer" containerID="5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8" Sep 13 00:49:22.661297 env[1203]: time="2025-09-13T00:49:22.661268575Z" level=info msg="RemoveContainer for 
\"5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8\"" Sep 13 00:49:22.664638 env[1203]: time="2025-09-13T00:49:22.664584429Z" level=info msg="RemoveContainer for \"5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8\" returns successfully" Sep 13 00:49:22.664860 kubelet[1422]: I0913 00:49:22.664724 1422 scope.go:117] "RemoveContainer" containerID="afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c" Sep 13 00:49:22.665729 env[1203]: time="2025-09-13T00:49:22.665694366Z" level=info msg="RemoveContainer for \"afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c\"" Sep 13 00:49:22.669173 env[1203]: time="2025-09-13T00:49:22.669134443Z" level=info msg="RemoveContainer for \"afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c\" returns successfully" Sep 13 00:49:22.669279 kubelet[1422]: I0913 00:49:22.669257 1422 scope.go:117] "RemoveContainer" containerID="fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea" Sep 13 00:49:22.670214 env[1203]: time="2025-09-13T00:49:22.670178575Z" level=info msg="RemoveContainer for \"fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea\"" Sep 13 00:49:22.673437 env[1203]: time="2025-09-13T00:49:22.673385204Z" level=info msg="RemoveContainer for \"fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea\" returns successfully" Sep 13 00:49:22.673584 kubelet[1422]: I0913 00:49:22.673553 1422 scope.go:117] "RemoveContainer" containerID="5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a" Sep 13 00:49:22.673855 env[1203]: time="2025-09-13T00:49:22.673762033Z" level=error msg="ContainerStatus for \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\": not found" Sep 13 00:49:22.674056 kubelet[1422]: E0913 00:49:22.674028 1422 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\": not found" containerID="5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a" Sep 13 00:49:22.674190 kubelet[1422]: I0913 00:49:22.674072 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a"} err="failed to get container status \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5958468b2adefdb938387b97d89f4cbcf03f88470a17e1a9d403aae7b8389e4a\": not found" Sep 13 00:49:22.674233 kubelet[1422]: I0913 00:49:22.674191 1422 scope.go:117] "RemoveContainer" containerID="dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61" Sep 13 00:49:22.674361 env[1203]: time="2025-09-13T00:49:22.674328287Z" level=error msg="ContainerStatus for \"dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61\": not found" Sep 13 00:49:22.674473 kubelet[1422]: E0913 00:49:22.674452 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61\": not found" containerID="dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61" Sep 13 00:49:22.674540 kubelet[1422]: I0913 00:49:22.674478 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61"} err="failed to get container status 
\"dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd19e874414ab73454d9fa2952024397ac348c34f5a25d2a0f17da77f4fa7a61\": not found" Sep 13 00:49:22.674540 kubelet[1422]: I0913 00:49:22.674515 1422 scope.go:117] "RemoveContainer" containerID="5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8" Sep 13 00:49:22.674757 env[1203]: time="2025-09-13T00:49:22.674693333Z" level=error msg="ContainerStatus for \"5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8\": not found" Sep 13 00:49:22.674969 kubelet[1422]: E0913 00:49:22.674929 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8\": not found" containerID="5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8" Sep 13 00:49:22.675033 kubelet[1422]: I0913 00:49:22.674982 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8"} err="failed to get container status \"5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bd4de88fc883559d9ecad50ce74f933a2d811b1f36faa321107efb2f131dbc8\": not found" Sep 13 00:49:22.675033 kubelet[1422]: I0913 00:49:22.675022 1422 scope.go:117] "RemoveContainer" containerID="afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c" Sep 13 00:49:22.675258 env[1203]: time="2025-09-13T00:49:22.675218822Z" level=error msg="ContainerStatus for \"afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c\": not found" Sep 13 00:49:22.675396 kubelet[1422]: E0913 00:49:22.675359 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c\": not found" containerID="afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c" Sep 13 00:49:22.675468 kubelet[1422]: I0913 00:49:22.675402 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c"} err="failed to get container status \"afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c\": rpc error: code = NotFound desc = an error occurred when try to find container \"afeaa1a327a6a99362ce43526e34b3ea3fab21d79ecdb6ab302a67af5f7a738c\": not found" Sep 13 00:49:22.675468 kubelet[1422]: I0913 00:49:22.675422 1422 scope.go:117] "RemoveContainer" containerID="fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea" Sep 13 00:49:22.675648 env[1203]: time="2025-09-13T00:49:22.675614394Z" level=error msg="ContainerStatus for \"fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea\": not found" Sep 13 00:49:22.675759 kubelet[1422]: E0913 00:49:22.675735 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea\": not found" containerID="fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea" Sep 13 00:49:22.675815 kubelet[1422]: I0913 00:49:22.675760 1422 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea"} err="failed to get container status \"fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb3d96074cc4922ed75fc32a7dcd62d3b731b819f8d7d1103a43e610cc0dffea\": not found" Sep 13 00:49:23.038960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61-rootfs.mount: Deactivated successfully. Sep 13 00:49:23.039057 systemd[1]: var-lib-kubelet-pods-e2abb93a\x2d1d17\x2d4a46\x2d8a68\x2d4bd0b985cfc8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddmctb.mount: Deactivated successfully. Sep 13 00:49:23.039112 systemd[1]: var-lib-kubelet-pods-e2abb93a\x2d1d17\x2d4a46\x2d8a68\x2d4bd0b985cfc8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:49:23.039163 systemd[1]: var-lib-kubelet-pods-e2abb93a\x2d1d17\x2d4a46\x2d8a68\x2d4bd0b985cfc8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:49:23.481503 kubelet[1422]: E0913 00:49:23.481418 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:24.287842 kubelet[1422]: I0913 00:49:24.287765 1422 memory_manager.go:355] "RemoveStaleState removing state" podUID="e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" containerName="cilium-agent" Sep 13 00:49:24.295190 systemd[1]: Created slice kubepods-besteffort-pod19458520_0a8e_40bc_8b75_13141bb5040f.slice. Sep 13 00:49:24.299128 systemd[1]: Created slice kubepods-burstable-podc377b474_2b5c_4e6c_b4eb_041dc042e6b1.slice. 
Sep 13 00:49:24.438281 kubelet[1422]: I0913 00:49:24.438192 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-run\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.438281 kubelet[1422]: I0913 00:49:24.438253 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-xtables-lock\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.438281 kubelet[1422]: I0913 00:49:24.438275 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-config-path\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.438281 kubelet[1422]: I0913 00:49:24.438301 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-host-proc-sys-kernel\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.438693 kubelet[1422]: I0913 00:49:24.438323 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-ipsec-secrets\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.438693 kubelet[1422]: I0913 00:49:24.438349 1422 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-hubble-tls\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.438693 kubelet[1422]: I0913 00:49:24.438403 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-bpf-maps\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.438693 kubelet[1422]: I0913 00:49:24.438445 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-cgroup\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.438693 kubelet[1422]: I0913 00:49:24.438463 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-hostproc\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.438693 kubelet[1422]: I0913 00:49:24.438518 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-host-proc-sys-net\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.438912 kubelet[1422]: I0913 00:49:24.438539 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/19458520-0a8e-40bc-8b75-13141bb5040f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fjmbb\" (UID: \"19458520-0a8e-40bc-8b75-13141bb5040f\") " pod="kube-system/cilium-operator-6c4d7847fc-fjmbb" Sep 13 00:49:24.438912 kubelet[1422]: I0913 00:49:24.438559 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2dg8\" (UniqueName: \"kubernetes.io/projected/19458520-0a8e-40bc-8b75-13141bb5040f-kube-api-access-j2dg8\") pod \"cilium-operator-6c4d7847fc-fjmbb\" (UID: \"19458520-0a8e-40bc-8b75-13141bb5040f\") " pod="kube-system/cilium-operator-6c4d7847fc-fjmbb" Sep 13 00:49:24.438912 kubelet[1422]: I0913 00:49:24.438577 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cni-path\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.438912 kubelet[1422]: I0913 00:49:24.438595 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-etc-cni-netd\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.438912 kubelet[1422]: I0913 00:49:24.438672 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-lib-modules\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.439087 kubelet[1422]: I0913 00:49:24.438722 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-clustermesh-secrets\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.439087 kubelet[1422]: I0913 00:49:24.438755 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4zzh\" (UniqueName: \"kubernetes.io/projected/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-kube-api-access-r4zzh\") pod \"cilium-zml7z\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " pod="kube-system/cilium-zml7z" Sep 13 00:49:24.481966 kubelet[1422]: E0913 00:49:24.481878 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:24.497620 kubelet[1422]: I0913 00:49:24.497571 1422 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2abb93a-1d17-4a46-8a68-4bd0b985cfc8" path="/var/lib/kubelet/pods/e2abb93a-1d17-4a46-8a68-4bd0b985cfc8/volumes" Sep 13 00:49:24.870742 kubelet[1422]: E0913 00:49:24.870672 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:24.871414 env[1203]: time="2025-09-13T00:49:24.871352272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zml7z,Uid:c377b474-2b5c-4e6c-b4eb-041dc042e6b1,Namespace:kube-system,Attempt:0,}" Sep 13 00:49:24.897916 kubelet[1422]: E0913 00:49:24.897864 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:24.898507 env[1203]: time="2025-09-13T00:49:24.898446674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fjmbb,Uid:19458520-0a8e-40bc-8b75-13141bb5040f,Namespace:kube-system,Attempt:0,}" Sep 13 00:49:25.371775 env[1203]: 
time="2025-09-13T00:49:25.371704283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:49:25.372045 env[1203]: time="2025-09-13T00:49:25.371742324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:49:25.372045 env[1203]: time="2025-09-13T00:49:25.371753616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:49:25.372045 env[1203]: time="2025-09-13T00:49:25.371887928Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d pid=2981 runtime=io.containerd.runc.v2 Sep 13 00:49:25.379650 env[1203]: time="2025-09-13T00:49:25.379565706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:49:25.379650 env[1203]: time="2025-09-13T00:49:25.379608336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:49:25.379650 env[1203]: time="2025-09-13T00:49:25.379620228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:49:25.380646 env[1203]: time="2025-09-13T00:49:25.379763657Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49ab49cb33b18d8add665538206cc72184eb2bf7de06eb4c412a2c00ec6a0b3b pid=3003 runtime=io.containerd.runc.v2 Sep 13 00:49:25.386599 systemd[1]: Started cri-containerd-cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d.scope. 
Sep 13 00:49:25.400934 systemd[1]: Started cri-containerd-49ab49cb33b18d8add665538206cc72184eb2bf7de06eb4c412a2c00ec6a0b3b.scope. Sep 13 00:49:25.413795 env[1203]: time="2025-09-13T00:49:25.413743230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zml7z,Uid:c377b474-2b5c-4e6c-b4eb-041dc042e6b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d\"" Sep 13 00:49:25.415243 kubelet[1422]: E0913 00:49:25.414625 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:25.416982 env[1203]: time="2025-09-13T00:49:25.416942782Z" level=info msg="CreateContainer within sandbox \"cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:49:25.432020 env[1203]: time="2025-09-13T00:49:25.431917963Z" level=info msg="CreateContainer within sandbox \"cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82\"" Sep 13 00:49:25.432709 env[1203]: time="2025-09-13T00:49:25.432639850Z" level=info msg="StartContainer for \"9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82\"" Sep 13 00:49:25.439977 kubelet[1422]: E0913 00:49:25.439939 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:25.447954 env[1203]: time="2025-09-13T00:49:25.447911437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fjmbb,Uid:19458520-0a8e-40bc-8b75-13141bb5040f,Namespace:kube-system,Attempt:0,} returns sandbox id \"49ab49cb33b18d8add665538206cc72184eb2bf7de06eb4c412a2c00ec6a0b3b\"" Sep 13 00:49:25.448632 kubelet[1422]: E0913 00:49:25.448607 
1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:25.449704 env[1203]: time="2025-09-13T00:49:25.449673358Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:49:25.452010 systemd[1]: Started cri-containerd-9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82.scope. Sep 13 00:49:25.456811 env[1203]: time="2025-09-13T00:49:25.456768401Z" level=info msg="StopPodSandbox for \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\"" Sep 13 00:49:25.456929 env[1203]: time="2025-09-13T00:49:25.456855304Z" level=info msg="TearDown network for sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" successfully" Sep 13 00:49:25.456929 env[1203]: time="2025-09-13T00:49:25.456889699Z" level=info msg="StopPodSandbox for \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" returns successfully" Sep 13 00:49:25.457250 env[1203]: time="2025-09-13T00:49:25.457220901Z" level=info msg="RemovePodSandbox for \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\"" Sep 13 00:49:25.457320 env[1203]: time="2025-09-13T00:49:25.457247461Z" level=info msg="Forcibly stopping sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\"" Sep 13 00:49:25.457320 env[1203]: time="2025-09-13T00:49:25.457307264Z" level=info msg="TearDown network for sandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" successfully" Sep 13 00:49:25.463129 systemd[1]: cri-containerd-9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82.scope: Deactivated successfully. 
Sep 13 00:49:25.463442 env[1203]: time="2025-09-13T00:49:25.463396476Z" level=info msg="RemovePodSandbox \"ade7c509ed8425e9a5cf70d342a968f2247784e5fabab343f135fc8de138be61\" returns successfully" Sep 13 00:49:25.480258 env[1203]: time="2025-09-13T00:49:25.480197899Z" level=info msg="shim disconnected" id=9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82 Sep 13 00:49:25.480258 env[1203]: time="2025-09-13T00:49:25.480255026Z" level=warning msg="cleaning up after shim disconnected" id=9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82 namespace=k8s.io Sep 13 00:49:25.480258 env[1203]: time="2025-09-13T00:49:25.480264083Z" level=info msg="cleaning up dead shim" Sep 13 00:49:25.483039 kubelet[1422]: E0913 00:49:25.482987 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:25.487880 env[1203]: time="2025-09-13T00:49:25.487846572Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3080 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T00:49:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 13 00:49:25.488191 env[1203]: time="2025-09-13T00:49:25.488087564Z" level=error msg="copy shim log" error="read /proc/self/fd/62: file already closed" Sep 13 00:49:25.488369 env[1203]: time="2025-09-13T00:49:25.488304451Z" level=error msg="Failed to pipe stderr of container \"9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82\"" error="reading from a closed fifo" Sep 13 00:49:25.488424 env[1203]: time="2025-09-13T00:49:25.488319871Z" level=error msg="Failed to pipe stdout of container \"9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82\"" error="reading from a 
closed fifo" Sep 13 00:49:25.491260 env[1203]: time="2025-09-13T00:49:25.491211064Z" level=error msg="StartContainer for \"9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 13 00:49:25.491567 kubelet[1422]: E0913 00:49:25.491478 1422 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82" Sep 13 00:49:25.492890 kubelet[1422]: E0913 00:49:25.492863 1422 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 13 00:49:25.492890 kubelet[1422]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 13 00:49:25.492890 kubelet[1422]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 13 00:49:25.492890 kubelet[1422]: rm /hostbin/cilium-mount Sep 13 00:49:25.493027 kubelet[1422]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4zzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-zml7z_kube-system(c377b474-2b5c-4e6c-b4eb-041dc042e6b1): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 13 00:49:25.493027 kubelet[1422]: > logger="UnhandledError" Sep 13 00:49:25.494146 kubelet[1422]: E0913 00:49:25.494073 1422 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zml7z" podUID="c377b474-2b5c-4e6c-b4eb-041dc042e6b1" Sep 13 00:49:25.642475 env[1203]: time="2025-09-13T00:49:25.642308708Z" level=info msg="StopPodSandbox for \"cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d\"" Sep 13 00:49:25.642475 env[1203]: time="2025-09-13T00:49:25.642373030Z" level=info msg="Container to stop \"9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:49:25.644715 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d-shm.mount: Deactivated successfully. Sep 13 00:49:25.650512 systemd[1]: cri-containerd-cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d.scope: Deactivated successfully. Sep 13 00:49:25.670109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d-rootfs.mount: Deactivated successfully. 
Sep 13 00:49:25.674171 env[1203]: time="2025-09-13T00:49:25.674122982Z" level=info msg="shim disconnected" id=cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d Sep 13 00:49:25.674171 env[1203]: time="2025-09-13T00:49:25.674170231Z" level=warning msg="cleaning up after shim disconnected" id=cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d namespace=k8s.io Sep 13 00:49:25.674379 env[1203]: time="2025-09-13T00:49:25.674181361Z" level=info msg="cleaning up dead shim" Sep 13 00:49:25.680281 env[1203]: time="2025-09-13T00:49:25.680213056Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3111 runtime=io.containerd.runc.v2\n" Sep 13 00:49:25.680765 env[1203]: time="2025-09-13T00:49:25.680602448Z" level=info msg="TearDown network for sandbox \"cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d\" successfully" Sep 13 00:49:25.680765 env[1203]: time="2025-09-13T00:49:25.680754704Z" level=info msg="StopPodSandbox for \"cd941b8e2be81647ed0add216c96fcbc741676d02a11d418adc8c778a722992d\" returns successfully" Sep 13 00:49:25.746571 kubelet[1422]: I0913 00:49:25.746469 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-xtables-lock\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.746571 kubelet[1422]: I0913 00:49:25.746542 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-hostproc\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.746571 kubelet[1422]: I0913 00:49:25.746568 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-run\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746591 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-config-path\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746599 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746606 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-bpf-maps\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746635 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746645 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-lib-modules\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746654 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746660 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-host-proc-sys-kernel\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746678 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-hubble-tls\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746690 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-cgroup\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746702 1422 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-etc-cni-netd\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746696 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-hostproc" (OuterVolumeSpecName: "hostproc") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746746 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cni-path" (OuterVolumeSpecName: "cni-path") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746723 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cni-path\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746769 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:25.746860 kubelet[1422]: I0913 00:49:25.746782 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:25.747232 kubelet[1422]: I0913 00:49:25.746819 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-ipsec-secrets\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.747232 kubelet[1422]: I0913 00:49:25.746845 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-host-proc-sys-net\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.747232 kubelet[1422]: I0913 00:49:25.746868 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-clustermesh-secrets\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.747232 kubelet[1422]: I0913 00:49:25.746893 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4zzh\" (UniqueName: \"kubernetes.io/projected/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-kube-api-access-r4zzh\") pod \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\" (UID: \"c377b474-2b5c-4e6c-b4eb-041dc042e6b1\") " Sep 13 00:49:25.747232 kubelet[1422]: I0913 
00:49:25.746959 1422 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-hostproc\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.747232 kubelet[1422]: I0913 00:49:25.746974 1422 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-xtables-lock\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.747232 kubelet[1422]: I0913 00:49:25.746985 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-run\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.747232 kubelet[1422]: I0913 00:49:25.746995 1422 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-bpf-maps\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.747232 kubelet[1422]: I0913 00:49:25.747006 1422 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-lib-modules\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.747232 kubelet[1422]: I0913 00:49:25.747016 1422 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cni-path\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.747232 kubelet[1422]: I0913 00:49:25.747026 1422 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-host-proc-sys-kernel\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.748791 kubelet[1422]: I0913 00:49:25.747545 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:25.748791 kubelet[1422]: I0913 00:49:25.747547 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:25.748791 kubelet[1422]: I0913 00:49:25.747570 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:49:25.748791 kubelet[1422]: I0913 00:49:25.748694 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:49:25.749894 kubelet[1422]: I0913 00:49:25.749872 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:49:25.751165 systemd[1]: var-lib-kubelet-pods-c377b474\x2d2b5c\x2d4e6c\x2db4eb\x2d041dc042e6b1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:49:25.752133 kubelet[1422]: I0913 00:49:25.751192 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-kube-api-access-r4zzh" (OuterVolumeSpecName: "kube-api-access-r4zzh") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "kube-api-access-r4zzh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:49:25.752281 kubelet[1422]: I0913 00:49:25.752260 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:49:25.752451 kubelet[1422]: I0913 00:49:25.752422 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c377b474-2b5c-4e6c-b4eb-041dc042e6b1" (UID: "c377b474-2b5c-4e6c-b4eb-041dc042e6b1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:49:25.752983 systemd[1]: var-lib-kubelet-pods-c377b474\x2d2b5c\x2d4e6c\x2db4eb\x2d041dc042e6b1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr4zzh.mount: Deactivated successfully. 
Sep 13 00:49:25.753061 systemd[1]: var-lib-kubelet-pods-c377b474\x2d2b5c\x2d4e6c\x2db4eb\x2d041dc042e6b1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:49:25.753132 systemd[1]: var-lib-kubelet-pods-c377b474\x2d2b5c\x2d4e6c\x2db4eb\x2d041dc042e6b1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 00:49:25.847251 kubelet[1422]: I0913 00:49:25.847194 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-config-path\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.847251 kubelet[1422]: I0913 00:49:25.847236 1422 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-hubble-tls\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.847251 kubelet[1422]: I0913 00:49:25.847245 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-cgroup\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.847251 kubelet[1422]: I0913 00:49:25.847255 1422 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-etc-cni-netd\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.847251 kubelet[1422]: I0913 00:49:25.847263 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-cilium-ipsec-secrets\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.847251 kubelet[1422]: I0913 00:49:25.847272 1422 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-host-proc-sys-net\") on node 
\"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.847653 kubelet[1422]: I0913 00:49:25.847279 1422 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-clustermesh-secrets\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:25.847653 kubelet[1422]: I0913 00:49:25.847286 1422 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r4zzh\" (UniqueName: \"kubernetes.io/projected/c377b474-2b5c-4e6c-b4eb-041dc042e6b1-kube-api-access-r4zzh\") on node \"10.0.0.82\" DevicePath \"\"" Sep 13 00:49:26.442651 kubelet[1422]: E0913 00:49:26.442604 1422 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:49:26.483375 kubelet[1422]: E0913 00:49:26.483329 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:26.502759 systemd[1]: Removed slice kubepods-burstable-podc377b474_2b5c_4e6c_b4eb_041dc042e6b1.slice. 
Sep 13 00:49:26.646415 kubelet[1422]: I0913 00:49:26.646371 1422 scope.go:117] "RemoveContainer" containerID="9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82" Sep 13 00:49:26.647570 env[1203]: time="2025-09-13T00:49:26.647523358Z" level=info msg="RemoveContainer for \"9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82\"" Sep 13 00:49:26.811359 env[1203]: time="2025-09-13T00:49:26.811280954Z" level=info msg="RemoveContainer for \"9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82\" returns successfully" Sep 13 00:49:26.827404 kubelet[1422]: I0913 00:49:26.827348 1422 memory_manager.go:355] "RemoveStaleState removing state" podUID="c377b474-2b5c-4e6c-b4eb-041dc042e6b1" containerName="mount-cgroup" Sep 13 00:49:26.832661 systemd[1]: Created slice kubepods-burstable-podb7999113_f150_48bc_984d_15e1d498fbd6.slice. Sep 13 00:49:26.852315 kubelet[1422]: I0913 00:49:26.852275 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7999113-f150-48bc-984d-15e1d498fbd6-clustermesh-secrets\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852315 kubelet[1422]: I0913 00:49:26.852312 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7999113-f150-48bc-984d-15e1d498fbd6-cilium-config-path\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852556 kubelet[1422]: I0913 00:49:26.852359 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7999113-f150-48bc-984d-15e1d498fbd6-cilium-run\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" 
Sep 13 00:49:26.852556 kubelet[1422]: I0913 00:49:26.852405 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7999113-f150-48bc-984d-15e1d498fbd6-hostproc\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852556 kubelet[1422]: I0913 00:49:26.852430 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b7999113-f150-48bc-984d-15e1d498fbd6-cilium-ipsec-secrets\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852556 kubelet[1422]: I0913 00:49:26.852446 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7999113-f150-48bc-984d-15e1d498fbd6-host-proc-sys-net\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852556 kubelet[1422]: I0913 00:49:26.852459 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7999113-f150-48bc-984d-15e1d498fbd6-cilium-cgroup\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852556 kubelet[1422]: I0913 00:49:26.852474 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7999113-f150-48bc-984d-15e1d498fbd6-cni-path\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852556 kubelet[1422]: I0913 00:49:26.852508 1422 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7999113-f150-48bc-984d-15e1d498fbd6-etc-cni-netd\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852556 kubelet[1422]: I0913 00:49:26.852523 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7999113-f150-48bc-984d-15e1d498fbd6-lib-modules\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852556 kubelet[1422]: I0913 00:49:26.852543 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7999113-f150-48bc-984d-15e1d498fbd6-xtables-lock\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852556 kubelet[1422]: I0913 00:49:26.852557 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7999113-f150-48bc-984d-15e1d498fbd6-host-proc-sys-kernel\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852794 kubelet[1422]: I0913 00:49:26.852572 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7999113-f150-48bc-984d-15e1d498fbd6-hubble-tls\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852794 kubelet[1422]: I0913 00:49:26.852586 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5nm2\" (UniqueName: 
\"kubernetes.io/projected/b7999113-f150-48bc-984d-15e1d498fbd6-kube-api-access-k5nm2\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:26.852794 kubelet[1422]: I0913 00:49:26.852601 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7999113-f150-48bc-984d-15e1d498fbd6-bpf-maps\") pod \"cilium-ccssd\" (UID: \"b7999113-f150-48bc-984d-15e1d498fbd6\") " pod="kube-system/cilium-ccssd" Sep 13 00:49:27.079109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827566226.mount: Deactivated successfully. Sep 13 00:49:27.141273 kubelet[1422]: E0913 00:49:27.141214 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:27.141861 env[1203]: time="2025-09-13T00:49:27.141818214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ccssd,Uid:b7999113-f150-48bc-984d-15e1d498fbd6,Namespace:kube-system,Attempt:0,}" Sep 13 00:49:27.158588 env[1203]: time="2025-09-13T00:49:27.158503130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:49:27.158588 env[1203]: time="2025-09-13T00:49:27.158544387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:49:27.158588 env[1203]: time="2025-09-13T00:49:27.158555598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:49:27.159084 env[1203]: time="2025-09-13T00:49:27.159016043Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1 pid=3140 runtime=io.containerd.runc.v2 Sep 13 00:49:27.171410 systemd[1]: Started cri-containerd-4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1.scope. Sep 13 00:49:27.192906 env[1203]: time="2025-09-13T00:49:27.192855297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ccssd,Uid:b7999113-f150-48bc-984d-15e1d498fbd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1\"" Sep 13 00:49:27.194266 kubelet[1422]: E0913 00:49:27.194014 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:27.196009 env[1203]: time="2025-09-13T00:49:27.195971140Z" level=info msg="CreateContainer within sandbox \"4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:49:27.214910 env[1203]: time="2025-09-13T00:49:27.214847263Z" level=info msg="CreateContainer within sandbox \"4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"420bfd829b23fa1d115a4053bee1703816419b9d19eb0ff5574954af11afd6e5\"" Sep 13 00:49:27.215391 env[1203]: time="2025-09-13T00:49:27.215336753Z" level=info msg="StartContainer for \"420bfd829b23fa1d115a4053bee1703816419b9d19eb0ff5574954af11afd6e5\"" Sep 13 00:49:27.230533 systemd[1]: Started cri-containerd-420bfd829b23fa1d115a4053bee1703816419b9d19eb0ff5574954af11afd6e5.scope. 
Sep 13 00:49:27.260428 env[1203]: time="2025-09-13T00:49:27.260358815Z" level=info msg="StartContainer for \"420bfd829b23fa1d115a4053bee1703816419b9d19eb0ff5574954af11afd6e5\" returns successfully" Sep 13 00:49:27.270670 systemd[1]: cri-containerd-420bfd829b23fa1d115a4053bee1703816419b9d19eb0ff5574954af11afd6e5.scope: Deactivated successfully. Sep 13 00:49:27.325939 env[1203]: time="2025-09-13T00:49:27.325863167Z" level=info msg="shim disconnected" id=420bfd829b23fa1d115a4053bee1703816419b9d19eb0ff5574954af11afd6e5 Sep 13 00:49:27.325939 env[1203]: time="2025-09-13T00:49:27.325929642Z" level=warning msg="cleaning up after shim disconnected" id=420bfd829b23fa1d115a4053bee1703816419b9d19eb0ff5574954af11afd6e5 namespace=k8s.io Sep 13 00:49:27.325939 env[1203]: time="2025-09-13T00:49:27.325942216Z" level=info msg="cleaning up dead shim" Sep 13 00:49:27.333199 env[1203]: time="2025-09-13T00:49:27.333086417Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3224 runtime=io.containerd.runc.v2\n" Sep 13 00:49:27.483553 kubelet[1422]: E0913 00:49:27.483444 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:27.651238 kubelet[1422]: E0913 00:49:27.651095 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:27.653136 env[1203]: time="2025-09-13T00:49:27.653093320Z" level=info msg="CreateContainer within sandbox \"4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:49:27.667886 env[1203]: time="2025-09-13T00:49:27.667809877Z" level=info msg="CreateContainer within sandbox \"4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6939ac1ac38cfef7d0150f2abd03adab23c576e232829a0339d8cdddc5beecf8\"" Sep 13 00:49:27.668474 env[1203]: time="2025-09-13T00:49:27.668434140Z" level=info msg="StartContainer for \"6939ac1ac38cfef7d0150f2abd03adab23c576e232829a0339d8cdddc5beecf8\"" Sep 13 00:49:27.683923 systemd[1]: Started cri-containerd-6939ac1ac38cfef7d0150f2abd03adab23c576e232829a0339d8cdddc5beecf8.scope. Sep 13 00:49:27.712832 env[1203]: time="2025-09-13T00:49:27.712768420Z" level=info msg="StartContainer for \"6939ac1ac38cfef7d0150f2abd03adab23c576e232829a0339d8cdddc5beecf8\" returns successfully" Sep 13 00:49:27.717865 systemd[1]: cri-containerd-6939ac1ac38cfef7d0150f2abd03adab23c576e232829a0339d8cdddc5beecf8.scope: Deactivated successfully. Sep 13 00:49:27.756788 env[1203]: time="2025-09-13T00:49:27.756719431Z" level=info msg="shim disconnected" id=6939ac1ac38cfef7d0150f2abd03adab23c576e232829a0339d8cdddc5beecf8 Sep 13 00:49:27.756788 env[1203]: time="2025-09-13T00:49:27.756768232Z" level=warning msg="cleaning up after shim disconnected" id=6939ac1ac38cfef7d0150f2abd03adab23c576e232829a0339d8cdddc5beecf8 namespace=k8s.io Sep 13 00:49:27.756788 env[1203]: time="2025-09-13T00:49:27.756779273Z" level=info msg="cleaning up dead shim" Sep 13 00:49:27.764961 env[1203]: time="2025-09-13T00:49:27.764900731Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3285 runtime=io.containerd.runc.v2\n" Sep 13 00:49:27.917153 kubelet[1422]: I0913 00:49:27.916994 1422 setters.go:602] "Node became not ready" node="10.0.0.82" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:49:27Z","lastTransitionTime":"2025-09-13T00:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 
00:49:28.389018 env[1203]: time="2025-09-13T00:49:28.388941910Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:28.390800 env[1203]: time="2025-09-13T00:49:28.390768401Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:28.392280 env[1203]: time="2025-09-13T00:49:28.392242440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:28.392661 env[1203]: time="2025-09-13T00:49:28.392631861Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 00:49:28.394565 env[1203]: time="2025-09-13T00:49:28.394528354Z" level=info msg="CreateContainer within sandbox \"49ab49cb33b18d8add665538206cc72184eb2bf7de06eb4c412a2c00ec6a0b3b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:49:28.406793 env[1203]: time="2025-09-13T00:49:28.406747528Z" level=info msg="CreateContainer within sandbox \"49ab49cb33b18d8add665538206cc72184eb2bf7de06eb4c412a2c00ec6a0b3b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"892fbe08c3c53f40a1a249493f0f48b2cb06550f8f26a2eaab272d63b75ce0bb\"" Sep 13 00:49:28.407217 env[1203]: time="2025-09-13T00:49:28.407164310Z" level=info msg="StartContainer for \"892fbe08c3c53f40a1a249493f0f48b2cb06550f8f26a2eaab272d63b75ce0bb\"" Sep 13 00:49:28.423658 
systemd[1]: Started cri-containerd-892fbe08c3c53f40a1a249493f0f48b2cb06550f8f26a2eaab272d63b75ce0bb.scope. Sep 13 00:49:28.444640 env[1203]: time="2025-09-13T00:49:28.444589883Z" level=info msg="StartContainer for \"892fbe08c3c53f40a1a249493f0f48b2cb06550f8f26a2eaab272d63b75ce0bb\" returns successfully" Sep 13 00:49:28.484080 kubelet[1422]: E0913 00:49:28.484020 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:28.498214 kubelet[1422]: I0913 00:49:28.497854 1422 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c377b474-2b5c-4e6c-b4eb-041dc042e6b1" path="/var/lib/kubelet/pods/c377b474-2b5c-4e6c-b4eb-041dc042e6b1/volumes" Sep 13 00:49:28.585691 kubelet[1422]: W0913 00:49:28.585619 1422 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc377b474_2b5c_4e6c_b4eb_041dc042e6b1.slice/cri-containerd-9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82.scope WatchSource:0}: container "9be534d6cd58ca2625b973f13071df13f5113701b06a6beabb1d1c6895ecdc82" in namespace "k8s.io": not found Sep 13 00:49:28.654667 kubelet[1422]: E0913 00:49:28.654542 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:28.656040 kubelet[1422]: E0913 00:49:28.656019 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:28.656384 env[1203]: time="2025-09-13T00:49:28.656343530Z" level=info msg="CreateContainer within sandbox \"4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:49:28.816916 kubelet[1422]: I0913 00:49:28.816834 1422 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fjmbb" podStartSLOduration=1.872852912 podStartE2EDuration="4.816809721s" podCreationTimestamp="2025-09-13 00:49:24 +0000 UTC" firstStartedPulling="2025-09-13 00:49:25.449374407 +0000 UTC m=+60.420325561" lastFinishedPulling="2025-09-13 00:49:28.393331215 +0000 UTC m=+63.364282370" observedRunningTime="2025-09-13 00:49:28.8166178 +0000 UTC m=+63.787568954" watchObservedRunningTime="2025-09-13 00:49:28.816809721 +0000 UTC m=+63.787760875" Sep 13 00:49:28.828717 env[1203]: time="2025-09-13T00:49:28.828664941Z" level=info msg="CreateContainer within sandbox \"4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"13babb49901f65f52abb8dbb703346c9380bc845d80217b695a44b44a06be230\"" Sep 13 00:49:28.829253 env[1203]: time="2025-09-13T00:49:28.829221236Z" level=info msg="StartContainer for \"13babb49901f65f52abb8dbb703346c9380bc845d80217b695a44b44a06be230\"" Sep 13 00:49:28.847152 systemd[1]: Started cri-containerd-13babb49901f65f52abb8dbb703346c9380bc845d80217b695a44b44a06be230.scope. Sep 13 00:49:28.872573 env[1203]: time="2025-09-13T00:49:28.872514051Z" level=info msg="StartContainer for \"13babb49901f65f52abb8dbb703346c9380bc845d80217b695a44b44a06be230\" returns successfully" Sep 13 00:49:28.875651 systemd[1]: cri-containerd-13babb49901f65f52abb8dbb703346c9380bc845d80217b695a44b44a06be230.scope: Deactivated successfully. 
Sep 13 00:49:28.937052 env[1203]: time="2025-09-13T00:49:28.936905539Z" level=info msg="shim disconnected" id=13babb49901f65f52abb8dbb703346c9380bc845d80217b695a44b44a06be230 Sep 13 00:49:28.937052 env[1203]: time="2025-09-13T00:49:28.936963348Z" level=warning msg="cleaning up after shim disconnected" id=13babb49901f65f52abb8dbb703346c9380bc845d80217b695a44b44a06be230 namespace=k8s.io Sep 13 00:49:28.937052 env[1203]: time="2025-09-13T00:49:28.936973216Z" level=info msg="cleaning up dead shim" Sep 13 00:49:28.944439 env[1203]: time="2025-09-13T00:49:28.944397472Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3380 runtime=io.containerd.runc.v2\n" Sep 13 00:49:28.959286 systemd[1]: run-containerd-runc-k8s.io-892fbe08c3c53f40a1a249493f0f48b2cb06550f8f26a2eaab272d63b75ce0bb-runc.IvnyVd.mount: Deactivated successfully. Sep 13 00:49:29.484544 kubelet[1422]: E0913 00:49:29.484431 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:29.660888 kubelet[1422]: E0913 00:49:29.660852 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:29.661103 kubelet[1422]: E0913 00:49:29.661019 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:29.662709 env[1203]: time="2025-09-13T00:49:29.662655739Z" level=info msg="CreateContainer within sandbox \"4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:49:30.310814 env[1203]: time="2025-09-13T00:49:30.310704775Z" level=info msg="CreateContainer within sandbox 
\"4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d1a5695e45eb66cbe575877611ee78b641c92f0950229c813b46c3539fd88ad4\"" Sep 13 00:49:30.311404 env[1203]: time="2025-09-13T00:49:30.311345537Z" level=info msg="StartContainer for \"d1a5695e45eb66cbe575877611ee78b641c92f0950229c813b46c3539fd88ad4\"" Sep 13 00:49:30.330659 systemd[1]: Started cri-containerd-d1a5695e45eb66cbe575877611ee78b641c92f0950229c813b46c3539fd88ad4.scope. Sep 13 00:49:30.413418 systemd[1]: cri-containerd-d1a5695e45eb66cbe575877611ee78b641c92f0950229c813b46c3539fd88ad4.scope: Deactivated successfully. Sep 13 00:49:30.481865 env[1203]: time="2025-09-13T00:49:30.481782645Z" level=info msg="StartContainer for \"d1a5695e45eb66cbe575877611ee78b641c92f0950229c813b46c3539fd88ad4\" returns successfully" Sep 13 00:49:30.485279 kubelet[1422]: E0913 00:49:30.485203 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:30.573919 env[1203]: time="2025-09-13T00:49:30.573754396Z" level=info msg="shim disconnected" id=d1a5695e45eb66cbe575877611ee78b641c92f0950229c813b46c3539fd88ad4 Sep 13 00:49:30.573919 env[1203]: time="2025-09-13T00:49:30.573817615Z" level=warning msg="cleaning up after shim disconnected" id=d1a5695e45eb66cbe575877611ee78b641c92f0950229c813b46c3539fd88ad4 namespace=k8s.io Sep 13 00:49:30.573919 env[1203]: time="2025-09-13T00:49:30.573827914Z" level=info msg="cleaning up dead shim" Sep 13 00:49:30.581592 env[1203]: time="2025-09-13T00:49:30.581541320Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3434 runtime=io.containerd.runc.v2\n" Sep 13 00:49:30.700776 kubelet[1422]: E0913 00:49:30.700734 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:30.702537 env[1203]: time="2025-09-13T00:49:30.702475058Z" level=info msg="CreateContainer within sandbox \"4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:49:30.721471 env[1203]: time="2025-09-13T00:49:30.721396995Z" level=info msg="CreateContainer within sandbox \"4178737fa1064c4e44f31c13b310b1ef2bf93a82fbc14bd12ab845ccf57991c1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6c5327efa5395d72d5c4bfcf1645cbb3d8276bdc00ebe4f568eab9ff473bdb4a\"" Sep 13 00:49:30.722139 env[1203]: time="2025-09-13T00:49:30.722100767Z" level=info msg="StartContainer for \"6c5327efa5395d72d5c4bfcf1645cbb3d8276bdc00ebe4f568eab9ff473bdb4a\"" Sep 13 00:49:30.738011 systemd[1]: Started cri-containerd-6c5327efa5395d72d5c4bfcf1645cbb3d8276bdc00ebe4f568eab9ff473bdb4a.scope. Sep 13 00:49:30.773400 env[1203]: time="2025-09-13T00:49:30.773346982Z" level=info msg="StartContainer for \"6c5327efa5395d72d5c4bfcf1645cbb3d8276bdc00ebe4f568eab9ff473bdb4a\" returns successfully" Sep 13 00:49:31.126981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1a5695e45eb66cbe575877611ee78b641c92f0950229c813b46c3539fd88ad4-rootfs.mount: Deactivated successfully. 
Sep 13 00:49:31.180514 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 13 00:49:31.486016 kubelet[1422]: E0913 00:49:31.485946 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:31.697597 kubelet[1422]: W0913 00:49:31.697548 1422 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7999113_f150_48bc_984d_15e1d498fbd6.slice/cri-containerd-420bfd829b23fa1d115a4053bee1703816419b9d19eb0ff5574954af11afd6e5.scope WatchSource:0}: task 420bfd829b23fa1d115a4053bee1703816419b9d19eb0ff5574954af11afd6e5 not found: not found Sep 13 00:49:31.703959 kubelet[1422]: E0913 00:49:31.703929 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:32.208690 kubelet[1422]: I0913 00:49:32.208621 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ccssd" podStartSLOduration=6.208604441 podStartE2EDuration="6.208604441s" podCreationTimestamp="2025-09-13 00:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:49:32.208385389 +0000 UTC m=+67.179336573" watchObservedRunningTime="2025-09-13 00:49:32.208604441 +0000 UTC m=+67.179555595" Sep 13 00:49:32.486600 kubelet[1422]: E0913 00:49:32.486425 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:33.142155 kubelet[1422]: E0913 00:49:33.142091 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:33.487632 kubelet[1422]: E0913 00:49:33.487572 1422 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:34.315636 systemd-networkd[1029]: lxc_health: Link UP Sep 13 00:49:34.323561 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:49:34.323923 systemd-networkd[1029]: lxc_health: Gained carrier Sep 13 00:49:34.488213 kubelet[1422]: E0913 00:49:34.488149 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:34.806734 kubelet[1422]: W0913 00:49:34.806107 1422 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7999113_f150_48bc_984d_15e1d498fbd6.slice/cri-containerd-6939ac1ac38cfef7d0150f2abd03adab23c576e232829a0339d8cdddc5beecf8.scope WatchSource:0}: task 6939ac1ac38cfef7d0150f2abd03adab23c576e232829a0339d8cdddc5beecf8 not found: not found Sep 13 00:49:35.143439 kubelet[1422]: E0913 00:49:35.143320 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:35.489189 kubelet[1422]: E0913 00:49:35.489120 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:35.549301 systemd[1]: run-containerd-runc-k8s.io-6c5327efa5395d72d5c4bfcf1645cbb3d8276bdc00ebe4f568eab9ff473bdb4a-runc.dL12Cp.mount: Deactivated successfully. 
Sep 13 00:49:35.710379 kubelet[1422]: E0913 00:49:35.710341 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:35.762800 systemd-networkd[1029]: lxc_health: Gained IPv6LL Sep 13 00:49:36.489510 kubelet[1422]: E0913 00:49:36.489430 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:36.712206 kubelet[1422]: E0913 00:49:36.712166 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:49:37.490151 kubelet[1422]: E0913 00:49:37.490100 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:37.915122 kubelet[1422]: W0913 00:49:37.914956 1422 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7999113_f150_48bc_984d_15e1d498fbd6.slice/cri-containerd-13babb49901f65f52abb8dbb703346c9380bc845d80217b695a44b44a06be230.scope WatchSource:0}: task 13babb49901f65f52abb8dbb703346c9380bc845d80217b695a44b44a06be230 not found: not found Sep 13 00:49:38.491773 kubelet[1422]: E0913 00:49:38.491688 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:39.492386 kubelet[1422]: E0913 00:49:39.492241 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:40.493351 kubelet[1422]: E0913 00:49:40.493249 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:41.022665 kubelet[1422]: W0913 00:49:41.022603 1422 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7999113_f150_48bc_984d_15e1d498fbd6.slice/cri-containerd-d1a5695e45eb66cbe575877611ee78b641c92f0950229c813b46c3539fd88ad4.scope WatchSource:0}: task d1a5695e45eb66cbe575877611ee78b641c92f0950229c813b46c3539fd88ad4 not found: not found Sep 13 00:49:41.494026 kubelet[1422]: E0913 00:49:41.493983 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"