Sep 6 00:25:49.876902 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025
Sep 6 00:25:49.876924 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:25:49.876936 kernel: BIOS-provided physical RAM map:
Sep 6 00:25:49.876944 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 6 00:25:49.876951 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 6 00:25:49.876957 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 6 00:25:49.876966 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 6 00:25:49.876974 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 6 00:25:49.876981 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 6 00:25:49.876989 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 6 00:25:49.876996 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 6 00:25:49.877004 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Sep 6 00:25:49.877011 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 6 00:25:49.877018 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 6 00:25:49.877027 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 6 00:25:49.877037 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 6 00:25:49.877045 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 6 00:25:49.877052 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 6 00:25:49.877060 kernel: NX (Execute Disable) protection: active
Sep 6 00:25:49.877068 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Sep 6 00:25:49.877076 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Sep 6 00:25:49.877083 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Sep 6 00:25:49.877091 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Sep 6 00:25:49.877098 kernel: extended physical RAM map:
Sep 6 00:25:49.877106 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 6 00:25:49.877115 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 6 00:25:49.877123 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 6 00:25:49.877131 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 6 00:25:49.877139 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 6 00:25:49.877147 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 6 00:25:49.877154 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 6 00:25:49.877162 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
Sep 6 00:25:49.877170 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
Sep 6 00:25:49.877178 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
Sep 6 00:25:49.877186 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
Sep 6 00:25:49.877193 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
Sep 6 00:25:49.877203 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Sep 6 00:25:49.877211 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 6 00:25:49.877219 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 6 00:25:49.877227 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 6 00:25:49.877238 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 6 00:25:49.877247 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 6 00:25:49.877255 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 6 00:25:49.877265 kernel: efi: EFI v2.70 by EDK II
Sep 6 00:25:49.877274 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
Sep 6 00:25:49.877315 kernel: random: crng init done
Sep 6 00:25:49.877324 kernel: SMBIOS 2.8 present.
Sep 6 00:25:49.877333 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 6 00:25:49.877341 kernel: Hypervisor detected: KVM
Sep 6 00:25:49.877349 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 6 00:25:49.877358 kernel: kvm-clock: cpu 0, msr 2919f001, primary cpu clock
Sep 6 00:25:49.877366 kernel: kvm-clock: using sched offset of 4118976907 cycles
Sep 6 00:25:49.877378 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 6 00:25:49.877387 kernel: tsc: Detected 2794.750 MHz processor
Sep 6 00:25:49.877396 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 6 00:25:49.877405 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 6 00:25:49.877413 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 6 00:25:49.877422 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 6 00:25:49.877431 kernel: Using GB pages for direct mapping
Sep 6 00:25:49.877439 kernel: Secure boot disabled
Sep 6 00:25:49.877448 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:25:49.877458 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 6 00:25:49.877467 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 6 00:25:49.877476 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:25:49.877484 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:25:49.877493 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 6 00:25:49.877501 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:25:49.877510 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:25:49.877518 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:25:49.877527 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:25:49.877537 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 6 00:25:49.877546 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 6 00:25:49.877555 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 6 00:25:49.877563 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 6 00:25:49.877572 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 6 00:25:49.877580 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 6 00:25:49.877589 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 6 00:25:49.877597 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 6 00:25:49.877606 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 6 00:25:49.877616 kernel: No NUMA configuration found
Sep 6 00:25:49.877625 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 6 00:25:49.877633 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 6 00:25:49.877642 kernel: Zone ranges:
Sep 6 00:25:49.877651 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 6 00:25:49.877659 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 6 00:25:49.877667 kernel: Normal empty
Sep 6 00:25:49.877676 kernel: Movable zone start for each node
Sep 6 00:25:49.877684 kernel: Early memory node ranges
Sep 6 00:25:49.877707 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 6 00:25:49.877716 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 6 00:25:49.877725 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 6 00:25:49.877733 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 6 00:25:49.877742 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 6 00:25:49.877750 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 6 00:25:49.877759 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 6 00:25:49.877767 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 6 00:25:49.877776 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 6 00:25:49.877785 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 6 00:25:49.877795 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 6 00:25:49.877803 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 6 00:25:49.877812 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 6 00:25:49.877821 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 6 00:25:49.877829 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 6 00:25:49.877838 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 6 00:25:49.877846 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 6 00:25:49.877855 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 6 00:25:49.877863 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 6 00:25:49.877873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 6 00:25:49.877882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 6 00:25:49.877891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 6 00:25:49.877899 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 6 00:25:49.877908 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 6 00:25:49.877916 kernel: TSC deadline timer available
Sep 6 00:25:49.877925 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 6 00:25:49.877933 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 6 00:25:49.877942 kernel: kvm-guest: setup PV sched yield
Sep 6 00:25:49.877952 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 6 00:25:49.877960 kernel: Booting paravirtualized kernel on KVM
Sep 6 00:25:49.877975 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 6 00:25:49.877986 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Sep 6 00:25:49.877995 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Sep 6 00:25:49.878004 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Sep 6 00:25:49.878013 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 6 00:25:49.878029 kernel: kvm-guest: setup async PF for cpu 0
Sep 6 00:25:49.878038 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
Sep 6 00:25:49.878047 kernel: kvm-guest: PV spinlocks enabled
Sep 6 00:25:49.878056 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 6 00:25:49.878065 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 6 00:25:49.878077 kernel: Policy zone: DMA32
Sep 6 00:25:49.878087 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:25:49.878096 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:25:49.878105 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 00:25:49.878116 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:25:49.878125 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:25:49.878135 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 169308K reserved, 0K cma-reserved)
Sep 6 00:25:49.878144 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 6 00:25:49.878153 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 6 00:25:49.878162 kernel: ftrace: allocated 136 pages with 2 groups
Sep 6 00:25:49.878171 kernel: rcu: Hierarchical RCU implementation.
Sep 6 00:25:49.878180 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:25:49.878190 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 6 00:25:49.878200 kernel: Rude variant of Tasks RCU enabled.
Sep 6 00:25:49.878210 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:25:49.878219 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:25:49.878229 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 6 00:25:49.878238 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 6 00:25:49.878247 kernel: Console: colour dummy device 80x25
Sep 6 00:25:49.878256 kernel: printk: console [ttyS0] enabled
Sep 6 00:25:49.878265 kernel: ACPI: Core revision 20210730
Sep 6 00:25:49.878275 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 6 00:25:49.878301 kernel: APIC: Switch to symmetric I/O mode setup
Sep 6 00:25:49.878310 kernel: x2apic enabled
Sep 6 00:25:49.878319 kernel: Switched APIC routing to physical x2apic.
Sep 6 00:25:49.878328 kernel: kvm-guest: setup PV IPIs
Sep 6 00:25:49.878338 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 6 00:25:49.878347 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 6 00:25:49.878357 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 6 00:25:49.878366 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 6 00:25:49.878375 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 6 00:25:49.878386 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 6 00:25:49.878430 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 6 00:25:49.878449 kernel: Spectre V2 : Mitigation: Retpolines
Sep 6 00:25:49.878458 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 6 00:25:49.878468 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 6 00:25:49.878477 kernel: active return thunk: retbleed_return_thunk
Sep 6 00:25:49.878486 kernel: RETBleed: Mitigation: untrained return thunk
Sep 6 00:25:49.878496 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 6 00:25:49.879503 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 6 00:25:49.879518 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 6 00:25:49.879527 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 6 00:25:49.879537 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 6 00:25:49.882906 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 6 00:25:49.882920 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 6 00:25:49.882931 kernel: Freeing SMP alternatives memory: 32K
Sep 6 00:25:49.882941 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:25:49.882951 kernel: LSM: Security Framework initializing
Sep 6 00:25:49.882961 kernel: SELinux: Initializing.
Sep 6 00:25:49.882976 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:25:49.882986 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:25:49.882997 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 6 00:25:49.883008 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 6 00:25:49.883018 kernel: ... version: 0
Sep 6 00:25:49.883029 kernel: ... bit width: 48
Sep 6 00:25:49.883039 kernel: ... generic registers: 6
Sep 6 00:25:49.883049 kernel: ... value mask: 0000ffffffffffff
Sep 6 00:25:49.883059 kernel: ... max period: 00007fffffffffff
Sep 6 00:25:49.883070 kernel: ... fixed-purpose events: 0
Sep 6 00:25:49.883080 kernel: ... event mask: 000000000000003f
Sep 6 00:25:49.883090 kernel: signal: max sigframe size: 1776
Sep 6 00:25:49.883098 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:25:49.883108 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:25:49.883117 kernel: x86: Booting SMP configuration:
Sep 6 00:25:49.883127 kernel: .... node #0, CPUs: #1
Sep 6 00:25:49.883138 kernel: kvm-clock: cpu 1, msr 2919f041, secondary cpu clock
Sep 6 00:25:49.883149 kernel: kvm-guest: setup async PF for cpu 1
Sep 6 00:25:49.883160 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
Sep 6 00:25:49.883171 kernel: #2
Sep 6 00:25:49.883182 kernel: kvm-clock: cpu 2, msr 2919f081, secondary cpu clock
Sep 6 00:25:49.883193 kernel: kvm-guest: setup async PF for cpu 2
Sep 6 00:25:49.883202 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
Sep 6 00:25:49.883211 kernel: #3
Sep 6 00:25:49.883220 kernel: kvm-clock: cpu 3, msr 2919f0c1, secondary cpu clock
Sep 6 00:25:49.883229 kernel: kvm-guest: setup async PF for cpu 3
Sep 6 00:25:49.883239 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
Sep 6 00:25:49.883248 kernel: smp: Brought up 1 node, 4 CPUs
Sep 6 00:25:49.883259 kernel: smpboot: Max logical packages: 1
Sep 6 00:25:49.883268 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 6 00:25:49.883277 kernel: devtmpfs: initialized
Sep 6 00:25:49.883299 kernel: x86/mm: Memory block size: 128MB
Sep 6 00:25:49.883310 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 6 00:25:49.883320 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 6 00:25:49.883331 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 6 00:25:49.883341 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 6 00:25:49.883352 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 6 00:25:49.883364 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:25:49.883374 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 6 00:25:49.883504 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:25:49.883514 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:25:49.883524 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:25:49.883535 kernel: audit: type=2000 audit(1757118349.369:1): state=initialized audit_enabled=0 res=1
Sep 6 00:25:49.883544 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:25:49.883552 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 6 00:25:49.883563 kernel: cpuidle: using governor menu
Sep 6 00:25:49.883572 kernel: ACPI: bus type PCI registered
Sep 6 00:25:49.883581 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:25:49.883590 kernel: dca service started, version 1.12.1
Sep 6 00:25:49.883599 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 6 00:25:49.883608 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Sep 6 00:25:49.883617 kernel: PCI: Using configuration type 1 for base access
Sep 6 00:25:49.883627 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 6 00:25:49.883638 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 00:25:49.883650 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:25:49.883660 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:25:49.883670 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:25:49.883680 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:25:49.883701 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:25:49.883712 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:25:49.883722 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:25:49.883732 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:25:49.883742 kernel: ACPI: Interpreter enabled
Sep 6 00:25:49.883752 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 6 00:25:49.883764 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 6 00:25:49.883774 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 6 00:25:49.883784 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 6 00:25:49.883794 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 00:25:49.883957 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:25:49.884065 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 6 00:25:49.884171 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 6 00:25:49.884187 kernel: PCI host bridge to bus 0000:00
Sep 6 00:25:49.884308 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 6 00:25:49.884394 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 6 00:25:49.884474 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 6 00:25:49.884553 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 6 00:25:49.884643 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 6 00:25:49.884756 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 6 00:25:49.884851 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 00:25:49.884971 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 6 00:25:49.885081 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 6 00:25:49.885183 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 6 00:25:49.885298 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 6 00:25:49.885400 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 6 00:25:49.885499 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 6 00:25:49.885590 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 6 00:25:49.885686 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 6 00:25:49.885775 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 6 00:25:49.885845 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 6 00:25:49.885913 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 6 00:25:49.886014 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 6 00:25:49.886105 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 6 00:25:49.886190 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 6 00:25:49.886312 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 6 00:25:49.886441 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 6 00:25:49.886582 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 6 00:25:49.886738 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 6 00:25:49.886846 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 6 00:25:49.886973 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 6 00:25:49.887105 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 6 00:25:49.887222 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 6 00:25:49.887339 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 6 00:25:49.887451 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 6 00:25:49.887557 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 6 00:25:49.887677 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 6 00:25:49.887825 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 6 00:25:49.887842 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 6 00:25:49.887867 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 6 00:25:49.887877 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 6 00:25:49.887887 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 6 00:25:49.887896 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 6 00:25:49.887906 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 6 00:25:49.887915 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 6 00:25:49.887944 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 6 00:25:49.887953 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 6 00:25:49.887963 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 6 00:25:49.887972 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 6 00:25:49.887982 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 6 00:25:49.888006 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 6 00:25:49.888016 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 6 00:25:49.888026 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 6 00:25:49.888036 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 6 00:25:49.888048 kernel: iommu: Default domain type: Translated
Sep 6 00:25:49.888068 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 6 00:25:49.888182 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 6 00:25:49.888495 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 6 00:25:49.888626 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 6 00:25:49.888640 kernel: vgaarb: loaded
Sep 6 00:25:49.888649 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 00:25:49.888659 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 00:25:49.888672 kernel: PTP clock support registered
Sep 6 00:25:49.888704 kernel: Registered efivars operations
Sep 6 00:25:49.888714 kernel: PCI: Using ACPI for IRQ routing
Sep 6 00:25:49.888723 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 6 00:25:49.888732 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 6 00:25:49.888739 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 6 00:25:49.888746 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
Sep 6 00:25:49.888753 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
Sep 6 00:25:49.888771 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 6 00:25:49.888778 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 6 00:25:49.888788 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 6 00:25:49.888796 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 6 00:25:49.888803 kernel: clocksource: Switched to clocksource kvm-clock
Sep 6 00:25:49.888810 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:25:49.888817 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:25:49.888824 kernel: pnp: PnP ACPI init
Sep 6 00:25:49.888967 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 6 00:25:49.888991 kernel: pnp: PnP ACPI: found 6 devices
Sep 6 00:25:49.889000 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 6 00:25:49.889009 kernel: NET: Registered PF_INET protocol family
Sep 6 00:25:49.889019 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:25:49.889029 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 00:25:49.889039 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:25:49.889048 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:25:49.889057 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 6 00:25:49.889066 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 00:25:49.889078 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:25:49.889087 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:25:49.889097 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:25:49.889106 kernel: NET: Registered PF_XDP protocol family
Sep 6 00:25:49.889212 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 6 00:25:49.889327 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 6 00:25:49.889402 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 6 00:25:49.889466 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 6 00:25:49.889531 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 6 00:25:49.889592 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 6 00:25:49.889653 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 6 00:25:49.889725 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 6 00:25:49.889735 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:25:49.889742 kernel: Initialise system trusted keyrings
Sep 6 00:25:49.889751 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 00:25:49.889758 kernel: Key type asymmetric registered
Sep 6 00:25:49.889765 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:25:49.889774 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:25:49.889782 kernel: io scheduler mq-deadline registered
Sep 6 00:25:49.889798 kernel: io scheduler kyber registered
Sep 6 00:25:49.889807 kernel: io scheduler bfq registered
Sep 6 00:25:49.889814 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 6 00:25:49.889822 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 6 00:25:49.889830 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 6 00:25:49.889838 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 6 00:25:49.889845 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:25:49.889854 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 6 00:25:49.889862 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 6 00:25:49.889869 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 6 00:25:49.889877 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 6 00:25:49.889885 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 6 00:25:49.889963 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 6 00:25:49.890028 kernel: rtc_cmos 00:04: registered as rtc0
Sep 6 00:25:49.890090 kernel: rtc_cmos 00:04: setting system clock to 2025-09-06T00:25:49 UTC (1757118349)
Sep 6 00:25:49.890156 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 6 00:25:49.890165 kernel: efifb: probing for efifb
Sep 6 00:25:49.890173 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 6 00:25:49.890181 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 6 00:25:49.890188 kernel: efifb: scrolling: redraw
Sep 6 00:25:49.890195 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 6 00:25:49.890203 kernel: Console: switching to colour frame buffer device 160x50
Sep 6 00:25:49.890210 kernel: fb0: EFI VGA frame buffer device
Sep 6 00:25:49.890218 kernel: pstore: Registered efi as persistent store backend
Sep 6 00:25:49.890227 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:25:49.890235 kernel: Segment Routing with IPv6
Sep 6 00:25:49.890245 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:25:49.890257 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:25:49.890267 kernel: Key type dns_resolver registered
Sep 6 00:25:49.890278 kernel: IPI shorthand broadcast: enabled
Sep 6 00:25:49.890302 kernel: sched_clock: Marking stable (518507321, 126583233)->(661592796, -16502242)
Sep 6 00:25:49.890312 kernel: registered taskstats version 1
Sep 6 00:25:49.890320 kernel: Loading compiled-in X.509 certificates
Sep 6 00:25:49.890328 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb'
Sep 6 00:25:49.890335 kernel: Key type .fscrypt registered
Sep 6 00:25:49.890343 kernel: Key type fscrypt-provisioning registered
Sep 6 00:25:49.890358 kernel: pstore: Using crash dump compression: deflate
Sep 6 00:25:49.890367 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:25:49.890377 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:25:49.890385 kernel: ima: No architecture policies found
Sep 6 00:25:49.890392 kernel: clk: Disabling unused clocks
Sep 6 00:25:49.890400 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 6 00:25:49.890408 kernel: Write protecting the kernel read-only data: 28672k
Sep 6 00:25:49.890416 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 6 00:25:49.890423 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 6 00:25:49.890431 kernel: Run /init as init process
Sep 6 00:25:49.890439 kernel: with arguments:
Sep 6 00:25:49.890449 kernel: /init
Sep 6 00:25:49.890456 kernel: with environment:
Sep 6 00:25:49.890464 kernel: HOME=/
Sep 6 00:25:49.890471 kernel: TERM=linux
Sep 6 00:25:49.890478 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:25:49.890488 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:25:49.890498 systemd[1]: Detected virtualization kvm.
Sep 6 00:25:49.890507 systemd[1]: Detected architecture x86-64.
Sep 6 00:25:49.890516 systemd[1]: Running in initrd.
Sep 6 00:25:49.890523 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:25:49.890531 systemd[1]: Hostname set to .
Sep 6 00:25:49.890539 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:25:49.890547 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:25:49.890555 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:25:49.890563 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:25:49.890572 systemd[1]: Reached target paths.target.
Sep 6 00:25:49.890584 systemd[1]: Reached target slices.target.
Sep 6 00:25:49.890593 systemd[1]: Reached target swap.target. Sep 6 00:25:49.890603 systemd[1]: Reached target timers.target. Sep 6 00:25:49.890614 systemd[1]: Listening on iscsid.socket. Sep 6 00:25:49.890623 systemd[1]: Listening on iscsiuio.socket. Sep 6 00:25:49.890634 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:25:49.890644 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:25:49.890654 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:25:49.893358 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:25:49.893376 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:25:49.893384 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:25:49.893392 systemd[1]: Reached target sockets.target. Sep 6 00:25:49.893401 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:25:49.893409 systemd[1]: Finished network-cleanup.service. Sep 6 00:25:49.893417 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:25:49.893426 systemd[1]: Starting systemd-journald.service... Sep 6 00:25:49.893434 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:25:49.893445 systemd[1]: Starting systemd-resolved.service... Sep 6 00:25:49.893453 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 00:25:49.893462 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:25:49.893470 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:25:49.893478 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:25:49.893487 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 00:25:49.893495 kernel: audit: type=1130 audit(1757118349.879:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:49.893504 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 00:25:49.893512 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Sep 6 00:25:49.893526 systemd-journald[198]: Journal started
Sep 6 00:25:49.893582 systemd-journald[198]: Runtime Journal (/run/log/journal/2c9edaf827624617b3790a234dec266d) is 6.0M, max 48.4M, 42.4M free.
Sep 6 00:25:49.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:49.875740 systemd-modules-load[199]: Inserted module 'overlay'
Sep 6 00:25:49.904348 systemd[1]: Started systemd-journald.service.
Sep 6 00:25:49.904432 kernel: audit: type=1130 audit(1757118349.895:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:49.904449 kernel: audit: type=1130 audit(1757118349.900:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:49.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:49.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:49.901452 systemd-resolved[200]: Positive Trust Anchors:
Sep 6 00:25:49.901462 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:25:49.901503 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:25:49.904644 systemd-resolved[200]: Defaulting to hostname 'linux'.
Sep 6 00:25:49.905766 systemd[1]: Started systemd-resolved.service.
Sep 6 00:25:49.906077 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:25:49.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:49.911317 kernel: audit: type=1130 audit(1757118349.905:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:49.922327 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:25:49.923685 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 00:25:49.929713 kernel: audit: type=1130 audit(1757118349.924:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:49.929750 kernel: Bridge firewalling registered
Sep 6 00:25:49.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:49.925646 systemd[1]: Starting dracut-cmdline.service...
Sep 6 00:25:49.928595 systemd-modules-load[199]: Inserted module 'br_netfilter'
Sep 6 00:25:49.938332 dracut-cmdline[218]: dracut-dracut-053
Sep 6 00:25:49.941237 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:25:49.947316 kernel: SCSI subsystem initialized
Sep 6 00:25:49.960361 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:25:49.960440 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:25:49.960455 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 6 00:25:49.963217 systemd-modules-load[199]: Inserted module 'dm_multipath'
Sep 6 00:25:49.964002 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:25:49.968712 kernel: audit: type=1130 audit(1757118349.964:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:49.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:49.965478 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:25:49.975401 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:25:49.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:49.980312 kernel: audit: type=1130 audit(1757118349.975:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:50.023346 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 00:25:50.045337 kernel: iscsi: registered transport (tcp)
Sep 6 00:25:50.074553 kernel: iscsi: registered transport (qla4xxx)
Sep 6 00:25:50.074655 kernel: QLogic iSCSI HBA Driver
Sep 6 00:25:50.112954 systemd[1]: Finished dracut-cmdline.service.
Sep 6 00:25:50.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:50.114156 systemd[1]: Starting dracut-pre-udev.service...
Sep 6 00:25:50.118930 kernel: audit: type=1130 audit(1757118350.112:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:50.167328 kernel: raid6: avx2x4 gen() 19400 MB/s
Sep 6 00:25:50.184317 kernel: raid6: avx2x4 xor() 5681 MB/s
Sep 6 00:25:50.201317 kernel: raid6: avx2x2 gen() 19804 MB/s
Sep 6 00:25:50.218317 kernel: raid6: avx2x2 xor() 13065 MB/s
Sep 6 00:25:50.235321 kernel: raid6: avx2x1 gen() 16732 MB/s
Sep 6 00:25:50.252328 kernel: raid6: avx2x1 xor() 10482 MB/s
Sep 6 00:25:50.269330 kernel: raid6: sse2x4 gen() 10053 MB/s
Sep 6 00:25:50.286331 kernel: raid6: sse2x4 xor() 4583 MB/s
Sep 6 00:25:50.303330 kernel: raid6: sse2x2 gen() 10489 MB/s
Sep 6 00:25:50.320323 kernel: raid6: sse2x2 xor() 6625 MB/s
Sep 6 00:25:50.337319 kernel: raid6: sse2x1 gen() 8182 MB/s
Sep 6 00:25:50.355048 kernel: raid6: sse2x1 xor() 5298 MB/s
Sep 6 00:25:50.355107 kernel: raid6: using algorithm avx2x2 gen() 19804 MB/s
Sep 6 00:25:50.355117 kernel: raid6: .... xor() 13065 MB/s, rmw enabled
Sep 6 00:25:50.355878 kernel: raid6: using avx2x2 recovery algorithm
Sep 6 00:25:50.372314 kernel: xor: automatically using best checksumming function avx
Sep 6 00:25:50.468338 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Sep 6 00:25:50.476774 systemd[1]: Finished dracut-pre-udev.service.
Sep 6 00:25:50.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:50.478000 audit: BPF prog-id=7 op=LOAD
Sep 6 00:25:50.481000 audit: BPF prog-id=8 op=LOAD
Sep 6 00:25:50.482306 kernel: audit: type=1130 audit(1757118350.476:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:50.482555 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:25:50.495269 systemd-udevd[402]: Using default interface naming scheme 'v252'.
Sep 6 00:25:50.499042 systemd[1]: Started systemd-udevd.service.
Sep 6 00:25:50.500040 systemd[1]: Starting dracut-pre-trigger.service...
Sep 6 00:25:50.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:50.510744 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Sep 6 00:25:50.538187 systemd[1]: Finished dracut-pre-trigger.service.
Sep 6 00:25:50.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:50.540394 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:25:50.575628 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:25:50.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:50.612307 kernel: cryptd: max_cpu_qlen set to 1000
Sep 6 00:25:50.616763 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 6 00:25:50.639300 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 6 00:25:50.639326 kernel: AES CTR mode by8 optimization enabled
Sep 6 00:25:50.639339 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 6 00:25:50.639352 kernel: GPT:9289727 != 19775487
Sep 6 00:25:50.639365 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 6 00:25:50.639377 kernel: GPT:9289727 != 19775487
Sep 6 00:25:50.639389 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 6 00:25:50.639406 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:25:50.645325 kernel: libata version 3.00 loaded.
Sep 6 00:25:50.657896 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (449)
Sep 6 00:25:50.657983 kernel: ahci 0000:00:1f.2: version 3.0
Sep 6 00:25:50.679374 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 6 00:25:50.679392 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 6 00:25:50.679484 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 6 00:25:50.679560 kernel: scsi host0: ahci
Sep 6 00:25:50.679654 kernel: scsi host1: ahci
Sep 6 00:25:50.679750 kernel: scsi host2: ahci
Sep 6 00:25:50.679833 kernel: scsi host3: ahci
Sep 6 00:25:50.679913 kernel: scsi host4: ahci
Sep 6 00:25:50.679998 kernel: scsi host5: ahci
Sep 6 00:25:50.680078 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 6 00:25:50.680089 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 6 00:25:50.680101 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 6 00:25:50.680109 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 6 00:25:50.680118 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 6 00:25:50.680127 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 6 00:25:50.671314 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 6 00:25:50.677774 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 6 00:25:50.690011 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 6 00:25:50.693144 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 6 00:25:50.700212 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 00:25:50.703440 systemd[1]: Starting disk-uuid.service...
Sep 6 00:25:50.711002 disk-uuid[537]: Primary Header is updated.
Sep 6 00:25:50.711002 disk-uuid[537]: Secondary Entries is updated.
Sep 6 00:25:50.711002 disk-uuid[537]: Secondary Header is updated.
Sep 6 00:25:50.715323 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:25:50.718335 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:25:50.986377 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 6 00:25:50.986458 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 6 00:25:50.987319 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 6 00:25:50.989531 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 6 00:25:50.989627 kernel: ata3.00: applying bridge limits
Sep 6 00:25:50.990319 kernel: ata3.00: configured for UDMA/100
Sep 6 00:25:50.991319 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 6 00:25:50.996321 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 6 00:25:50.996371 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 6 00:25:50.997329 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 6 00:25:51.028831 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 6 00:25:51.046283 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 6 00:25:51.046336 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 6 00:25:51.720314 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:25:51.720383 disk-uuid[538]: The operation has completed successfully.
Sep 6 00:25:51.750708 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 00:25:51.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:51.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:51.750823 systemd[1]: Finished disk-uuid.service.
Sep 6 00:25:51.761325 systemd[1]: Starting verity-setup.service...
Sep 6 00:25:51.777315 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 6 00:25:51.805408 systemd[1]: Found device dev-mapper-usr.device.
Sep 6 00:25:51.808256 systemd[1]: Mounting sysusr-usr.mount...
Sep 6 00:25:51.811242 systemd[1]: Finished verity-setup.service.
Sep 6 00:25:51.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:51.887330 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 6 00:25:51.887963 systemd[1]: Mounted sysusr-usr.mount.
Sep 6 00:25:51.889107 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 6 00:25:51.890055 systemd[1]: Starting ignition-setup.service...
Sep 6 00:25:51.893114 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 6 00:25:51.902253 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 00:25:51.902335 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:25:51.902352 kernel: BTRFS info (device vda6): has skinny extents
Sep 6 00:25:51.913477 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 6 00:25:51.922834 systemd[1]: Finished ignition-setup.service.
Sep 6 00:25:51.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:51.925077 systemd[1]: Starting ignition-fetch-offline.service...
Sep 6 00:25:51.973222 ignition[643]: Ignition 2.14.0
Sep 6 00:25:51.973237 ignition[643]: Stage: fetch-offline
Sep 6 00:25:51.973340 ignition[643]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:25:51.973356 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:25:51.973499 ignition[643]: parsed url from cmdline: ""
Sep 6 00:25:51.973504 ignition[643]: no config URL provided
Sep 6 00:25:51.973512 ignition[643]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:25:51.973522 ignition[643]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:25:51.980988 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 6 00:25:51.973553 ignition[643]: op(1): [started] loading QEMU firmware config module
Sep 6 00:25:51.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:51.985000 audit: BPF prog-id=9 op=LOAD
Sep 6 00:25:51.973560 ignition[643]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 6 00:25:51.986270 systemd[1]: Starting systemd-networkd.service...
Sep 6 00:25:51.981608 ignition[643]: op(1): [finished] loading QEMU firmware config module
Sep 6 00:25:51.981632 ignition[643]: QEMU firmware config was not found. Ignoring...
Sep 6 00:25:52.028877 ignition[643]: parsing config with SHA512: d6c9426fff84d0eaee9c95b16695f2e03c0e5ca18e043729f00a0b6dcbefe11f7de613bf3e892b60865cf4128ccf43906361110109397bb6235b4b4d218b5132
Sep 6 00:25:52.035725 unknown[643]: fetched base config from "system"
Sep 6 00:25:52.035925 unknown[643]: fetched user config from "qemu"
Sep 6 00:25:52.037309 ignition[643]: fetch-offline: fetch-offline passed
Sep 6 00:25:52.037365 ignition[643]: Ignition finished successfully
Sep 6 00:25:52.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:52.038483 systemd[1]: Finished ignition-fetch-offline.service.
Sep 6 00:25:52.059353 systemd-networkd[722]: lo: Link UP
Sep 6 00:25:52.059366 systemd-networkd[722]: lo: Gained carrier
Sep 6 00:25:52.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:52.059939 systemd-networkd[722]: Enumeration completed
Sep 6 00:25:52.060108 systemd[1]: Started systemd-networkd.service.
Sep 6 00:25:52.060249 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:25:52.061514 systemd-networkd[722]: eth0: Link UP
Sep 6 00:25:52.061520 systemd-networkd[722]: eth0: Gained carrier
Sep 6 00:25:52.061944 systemd[1]: Reached target network.target.
Sep 6 00:25:52.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:52.063939 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 6 00:25:52.064670 systemd[1]: Starting ignition-kargs.service...
Sep 6 00:25:52.066417 systemd[1]: Starting iscsiuio.service...
Sep 6 00:25:52.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:52.080942 iscsid[732]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 00:25:52.080942 iscsid[732]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Sep 6 00:25:52.080942 iscsid[732]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Sep 6 00:25:52.080942 iscsid[732]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 6 00:25:52.080942 iscsid[732]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 6 00:25:52.080942 iscsid[732]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 00:25:52.080942 iscsid[732]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 6 00:25:52.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:52.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:52.071976 systemd[1]: Started iscsiuio.service.
Sep 6 00:25:52.080734 ignition[724]: Ignition 2.14.0
Sep 6 00:25:52.074231 systemd[1]: Starting iscsid.service...
Sep 6 00:25:52.080742 ignition[724]: Stage: kargs
Sep 6 00:25:52.078052 systemd[1]: Started iscsid.service.
Sep 6 00:25:52.080897 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:25:52.079885 systemd[1]: Starting dracut-initqueue.service...
Sep 6 00:25:52.080910 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:25:52.080158 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 6 00:25:52.082180 ignition[724]: kargs: kargs passed
Sep 6 00:25:52.087086 systemd[1]: Finished ignition-kargs.service.
Sep 6 00:25:52.082231 ignition[724]: Ignition finished successfully
Sep 6 00:25:52.092037 systemd[1]: Starting ignition-disks.service...
Sep 6 00:25:52.101103 ignition[742]: Ignition 2.14.0
Sep 6 00:25:52.094557 systemd[1]: Finished dracut-initqueue.service.
Sep 6 00:25:52.101111 ignition[742]: Stage: disks
Sep 6 00:25:52.097217 systemd[1]: Reached target remote-fs-pre.target.
Sep 6 00:25:52.101234 ignition[742]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:25:52.099880 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:25:52.101245 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:25:52.101828 systemd[1]: Reached target remote-fs.target.
Sep 6 00:25:52.102573 ignition[742]: disks: disks passed
Sep 6 00:25:52.119880 systemd[1]: Starting dracut-pre-mount.service...
Sep 6 00:25:52.102628 ignition[742]: Ignition finished successfully
Sep 6 00:25:52.125907 systemd[1]: Finished ignition-disks.service.
Sep 6 00:25:52.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:52.128087 systemd[1]: Finished dracut-pre-mount.service.
Sep 6 00:25:52.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:52.130275 systemd[1]: Reached target initrd-root-device.target.
Sep 6 00:25:52.132598 systemd[1]: Reached target local-fs-pre.target.
Sep 6 00:25:52.134669 systemd[1]: Reached target local-fs.target.
Sep 6 00:25:52.136553 systemd[1]: Reached target sysinit.target.
Sep 6 00:25:52.138409 systemd[1]: Reached target basic.target.
Sep 6 00:25:52.141390 systemd[1]: Starting systemd-fsck-root.service...
Sep 6 00:25:52.154725 systemd-fsck[756]: ROOT: clean, 629/553520 files, 56028/553472 blocks
Sep 6 00:25:52.160675 systemd[1]: Finished systemd-fsck-root.service.
Sep 6 00:25:52.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:52.161806 systemd[1]: Mounting sysroot.mount...
Sep 6 00:25:52.169316 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 6 00:25:52.169910 systemd[1]: Mounted sysroot.mount.
Sep 6 00:25:52.171843 systemd[1]: Reached target initrd-root-fs.target.
Sep 6 00:25:52.175007 systemd[1]: Mounting sysroot-usr.mount...
Sep 6 00:25:52.177192 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 6 00:25:52.177245 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 00:25:52.178933 systemd[1]: Reached target ignition-diskful.target.
Sep 6 00:25:52.184085 systemd[1]: Mounted sysroot-usr.mount.
Sep 6 00:25:52.186818 systemd[1]: Starting initrd-setup-root.service...
Sep 6 00:25:52.192674 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 00:25:52.197001 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory
Sep 6 00:25:52.202540 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 00:25:52.207605 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 00:25:52.240041 systemd[1]: Finished initrd-setup-root.service.
Sep 6 00:25:52.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:52.242083 systemd[1]: Starting ignition-mount.service...
Sep 6 00:25:52.243817 systemd[1]: Starting sysroot-boot.service...
Sep 6 00:25:52.251931 bash[808]: umount: /sysroot/usr/share/oem: not mounted.
Sep 6 00:25:52.262475 ignition[809]: INFO : Ignition 2.14.0
Sep 6 00:25:52.262475 ignition[809]: INFO : Stage: mount
Sep 6 00:25:52.266039 ignition[809]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:25:52.266039 ignition[809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:25:52.266039 ignition[809]: INFO : mount: mount passed
Sep 6 00:25:52.266039 ignition[809]: INFO : Ignition finished successfully
Sep 6 00:25:52.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:52.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:25:52.264935 systemd[1]: Finished ignition-mount.service.
Sep 6 00:25:52.266334 systemd[1]: Finished sysroot-boot.service.
Sep 6 00:25:52.820776 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 6 00:25:52.828406 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Sep 6 00:25:52.830877 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 00:25:52.830910 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:25:52.830924 kernel: BTRFS info (device vda6): has skinny extents
Sep 6 00:25:52.836281 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 6 00:25:52.839462 systemd[1]: Starting ignition-files.service...
Sep 6 00:25:52.856129 ignition[837]: INFO : Ignition 2.14.0
Sep 6 00:25:52.856129 ignition[837]: INFO : Stage: files
Sep 6 00:25:52.858413 ignition[837]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:25:52.858413 ignition[837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:25:52.862396 ignition[837]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 00:25:52.864104 ignition[837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 00:25:52.864104 ignition[837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 00:25:52.868717 ignition[837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 00:25:52.870622 ignition[837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 00:25:52.873021 unknown[837]: wrote ssh authorized keys file for user: core
Sep 6 00:25:52.874459 ignition[837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 00:25:52.876589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 6 00:25:52.878952 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 6 00:25:52.937596 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 6 00:25:53.481311 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 00:25:53.483783 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:25:53.486071 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 6 00:25:53.740196 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 00:25:53.879752 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:25:53.879752 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 
00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:25:53.884876 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 6 00:25:54.122499 systemd-networkd[722]: eth0: Gained IPv6LL Sep 6 00:25:54.163671 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 6 00:25:54.556191 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:25:54.556191 ignition[837]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 6 00:25:54.560330 ignition[837]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:25:54.560330 ignition[837]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:25:54.560330 ignition[837]: INFO : files: op(c): [finished] processing unit 
"prepare-helm.service" Sep 6 00:25:54.560330 ignition[837]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 6 00:25:54.560330 ignition[837]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:25:54.560330 ignition[837]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:25:54.560330 ignition[837]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 6 00:25:54.560330 ignition[837]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 6 00:25:54.560330 ignition[837]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:25:54.585270 ignition[837]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:25:54.587840 ignition[837]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 6 00:25:54.587840 ignition[837]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 6 00:25:54.587840 ignition[837]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 00:25:54.587840 ignition[837]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:25:54.587840 ignition[837]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:25:54.587840 ignition[837]: INFO : files: files passed Sep 6 00:25:54.587840 ignition[837]: INFO : Ignition finished successfully Sep 6 00:25:54.615953 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 6 00:25:54.615978 kernel: audit: type=1130 audit(1757118354.587:34): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.615990 kernel: audit: type=1130 audit(1757118354.598:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.616000 kernel: audit: type=1130 audit(1757118354.604:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.616010 kernel: audit: type=1131 audit(1757118354.604:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.586854 systemd[1]: Finished ignition-files.service. Sep 6 00:25:54.588656 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Sep 6 00:25:54.594058 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:25:54.620156 initrd-setup-root-after-ignition[861]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 6 00:25:54.594930 systemd[1]: Starting ignition-quench.service... Sep 6 00:25:54.622577 initrd-setup-root-after-ignition[863]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:25:54.596441 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:25:54.598912 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:25:54.598985 systemd[1]: Finished ignition-quench.service. Sep 6 00:25:54.634962 kernel: audit: type=1130 audit(1757118354.626:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.634979 kernel: audit: type=1131 audit(1757118354.626:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.604396 systemd[1]: Reached target ignition-complete.target. Sep 6 00:25:54.613809 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:25:54.625163 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:25:54.625247 systemd[1]: Finished initrd-parse-etc.service. 
Sep 6 00:25:54.627134 systemd[1]: Reached target initrd-fs.target. Sep 6 00:25:54.634995 systemd[1]: Reached target initrd.target. Sep 6 00:25:54.635811 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:25:54.636618 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:25:54.647693 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:25:54.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.650161 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:25:54.654456 kernel: audit: type=1130 audit(1757118354.649:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.661207 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:25:54.662965 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:25:54.664002 systemd[1]: Stopped target timers.target. Sep 6 00:25:54.665539 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:25:54.671549 kernel: audit: type=1131 audit(1757118354.666:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.665714 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:25:54.667183 systemd[1]: Stopped target initrd.target. Sep 6 00:25:54.671697 systemd[1]: Stopped target basic.target. Sep 6 00:25:54.673357 systemd[1]: Stopped target ignition-complete.target. 
Sep 6 00:25:54.674944 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:25:54.676511 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:25:54.678256 systemd[1]: Stopped target remote-fs.target. Sep 6 00:25:54.679882 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:25:54.681547 systemd[1]: Stopped target sysinit.target. Sep 6 00:25:54.683106 systemd[1]: Stopped target local-fs.target. Sep 6 00:25:54.684755 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:25:54.686329 systemd[1]: Stopped target swap.target. Sep 6 00:25:54.693704 kernel: audit: type=1131 audit(1757118354.689:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.687763 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:25:54.687876 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:25:54.699939 kernel: audit: type=1131 audit(1757118354.695:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.689439 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:25:54.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.693740 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Sep 6 00:25:54.693827 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:25:54.695613 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:25:54.695698 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:25:54.700066 systemd[1]: Stopped target paths.target. Sep 6 00:25:54.701552 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 00:25:54.705351 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:25:54.706988 systemd[1]: Stopped target slices.target. Sep 6 00:25:54.708773 systemd[1]: Stopped target sockets.target. Sep 6 00:25:54.710389 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:25:54.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.710453 systemd[1]: Closed iscsid.socket. Sep 6 00:25:54.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.711829 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:25:54.711917 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:25:54.713663 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:25:54.713743 systemd[1]: Stopped ignition-files.service. Sep 6 00:25:54.716075 systemd[1]: Stopping ignition-mount.service... Sep 6 00:25:54.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.717910 systemd[1]: Stopping iscsiuio.service... 
Sep 6 00:25:54.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.720027 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:25:54.720924 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:25:54.727819 ignition[878]: INFO : Ignition 2.14.0 Sep 6 00:25:54.727819 ignition[878]: INFO : Stage: umount Sep 6 00:25:54.727819 ignition[878]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:25:54.727819 ignition[878]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:25:54.727819 ignition[878]: INFO : umount: umount passed Sep 6 00:25:54.727819 ignition[878]: INFO : Ignition finished successfully Sep 6 00:25:54.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.721079 systemd[1]: Stopped systemd-udev-trigger.service. 
Sep 6 00:25:54.722773 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:25:54.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.722892 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:25:54.726232 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 00:25:54.726385 systemd[1]: Stopped iscsiuio.service. Sep 6 00:25:54.728144 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:25:54.728211 systemd[1]: Stopped ignition-mount.service. Sep 6 00:25:54.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.729816 systemd[1]: Stopped target network.target. Sep 6 00:25:54.731052 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:25:54.731080 systemd[1]: Closed iscsiuio.socket. Sep 6 00:25:54.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.732810 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:25:54.732839 systemd[1]: Stopped ignition-disks.service. Sep 6 00:25:54.734945 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:25:54.734975 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:25:54.754000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:25:54.736600 systemd[1]: ignition-setup.service: Deactivated successfully. 
Sep 6 00:25:54.736630 systemd[1]: Stopped ignition-setup.service. Sep 6 00:25:54.738313 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:25:54.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.738435 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:25:54.739423 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:25:54.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.739849 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:25:54.739918 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:25:54.744332 systemd-networkd[722]: eth0: DHCPv6 lease lost Sep 6 00:25:54.765000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:25:54.745489 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:25:54.745616 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:25:54.748977 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:25:54.749067 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:25:54.753427 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:25:54.753461 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:25:54.755899 systemd[1]: Stopping network-cleanup.service... Sep 6 00:25:54.757735 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Sep 6 00:25:54.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.757779 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:25:54.759532 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:25:54.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.759578 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:25:54.761167 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:25:54.761205 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:25:54.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.767256 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:25:54.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:25:54.769373 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:25:54.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.773091 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:25:54.773211 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:25:54.775844 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:25:54.775919 systemd[1]: Stopped network-cleanup.service. Sep 6 00:25:54.777499 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:25:54.777528 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:25:54.779224 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:25:54.779248 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:25:54.780806 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:25:54.780838 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:25:54.782621 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:25:54.782650 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:25:54.784241 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:25:54.784272 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:25:54.784996 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Sep 6 00:25:54.785194 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 00:25:54.785245 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 6 00:25:54.786800 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:25:54.786829 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:25:54.788339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:25:54.788368 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:25:54.790059 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 6 00:25:54.790413 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:25:54.790477 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:25:54.844063 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:25:54.844162 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:25:54.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.846008 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:25:54.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:54.847447 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:25:54.847484 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:25:54.848197 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:25:54.865054 systemd[1]: Switching root. Sep 6 00:25:54.884013 iscsid[732]: iscsid shutting down. Sep 6 00:25:54.884871 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). 
Sep 6 00:25:54.884917 systemd-journald[198]: Journal stopped Sep 6 00:25:57.984875 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:25:57.984923 kernel: SELinux: Class anon_inode not defined in policy. Sep 6 00:25:57.984934 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:25:57.984946 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:25:57.984958 kernel: SELinux: policy capability open_perms=1 Sep 6 00:25:57.984967 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:25:57.984977 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:25:57.984988 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:25:57.984997 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:25:57.985007 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:25:57.985016 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:25:57.985027 systemd[1]: Successfully loaded SELinux policy in 43.945ms. Sep 6 00:25:57.985043 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.659ms. Sep 6 00:25:57.985056 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:25:57.985067 systemd[1]: Detected virtualization kvm. Sep 6 00:25:57.985077 systemd[1]: Detected architecture x86-64. Sep 6 00:25:57.985087 systemd[1]: Detected first boot. Sep 6 00:25:57.985099 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:25:57.985110 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:25:57.985126 systemd[1]: Populated /etc with preset unit settings. 
Sep 6 00:25:57.985140 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:25:57.985155 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:25:57.985170 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:25:57.985182 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 00:25:57.985192 systemd[1]: Stopped iscsid.service. Sep 6 00:25:57.985204 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 00:25:57.985214 systemd[1]: Stopped initrd-switch-root.service. Sep 6 00:25:57.985224 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 00:25:57.985236 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:25:57.985250 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:25:57.985264 systemd[1]: Created slice system-getty.slice. Sep 6 00:25:57.985278 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:25:57.985307 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:25:57.985320 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:25:57.985334 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:25:57.985344 systemd[1]: Created slice user.slice. Sep 6 00:25:57.985355 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:25:57.985369 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:25:57.985381 systemd[1]: Set up automount boot.automount. Sep 6 00:25:57.985395 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:25:57.985409 systemd[1]: Stopped target initrd-switch-root.target. 
Sep 6 00:25:57.985423 systemd[1]: Stopped target initrd-fs.target. Sep 6 00:25:57.985437 systemd[1]: Stopped target initrd-root-fs.target. Sep 6 00:25:57.985447 systemd[1]: Reached target integritysetup.target. Sep 6 00:25:57.985457 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:25:57.985470 systemd[1]: Reached target remote-fs.target. Sep 6 00:25:57.985490 systemd[1]: Reached target slices.target. Sep 6 00:25:57.985500 systemd[1]: Reached target swap.target. Sep 6 00:25:57.985511 systemd[1]: Reached target torcx.target. Sep 6 00:25:57.985523 systemd[1]: Reached target veritysetup.target. Sep 6 00:25:57.985535 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:25:57.985551 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:25:57.985568 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:25:57.985582 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:25:57.985595 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:25:57.985607 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:25:57.985619 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:25:57.985634 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:25:57.985645 systemd[1]: Mounting media.mount... Sep 6 00:25:57.985659 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:25:57.985676 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:25:57.985690 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:25:57.985700 systemd[1]: Mounting tmp.mount... Sep 6 00:25:57.985710 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:25:57.985722 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:25:57.985735 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:25:57.985746 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:25:57.985755 systemd[1]: Starting modprobe@dm_mod.service... 
Sep 6 00:25:57.985766 systemd[1]: Starting modprobe@drm.service... Sep 6 00:25:57.985783 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:25:57.985795 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:25:57.985808 systemd[1]: Starting modprobe@loop.service... Sep 6 00:25:57.985823 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:25:57.985837 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 00:25:57.985851 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 00:25:57.985864 kernel: loop: module loaded Sep 6 00:25:57.985875 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:25:57.985885 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 00:25:57.985898 systemd[1]: Stopped systemd-journald.service. Sep 6 00:25:57.985911 kernel: fuse: init (API version 7.34) Sep 6 00:25:57.985921 systemd[1]: Starting systemd-journald.service... Sep 6 00:25:57.985931 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:25:57.985941 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:25:57.985951 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:25:57.985961 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:25:57.985971 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:25:57.985984 systemd-journald[993]: Journal started Sep 6 00:25:57.986024 systemd-journald[993]: Runtime Journal (/run/log/journal/2c9edaf827624617b3790a234dec266d) is 6.0M, max 48.4M, 42.4M free. 
Sep 6 00:25:54.949000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:25:55.373000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:25:55.373000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:25:55.373000 audit: BPF prog-id=10 op=LOAD Sep 6 00:25:55.373000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:25:55.373000 audit: BPF prog-id=11 op=LOAD Sep 6 00:25:55.373000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:25:55.414000 audit[912]: AVC avc: denied { associate } for pid=912 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 00:25:55.414000 audit[912]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858cc a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=895 pid=912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:25:55.414000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:25:55.415000 audit[912]: AVC avc: denied { associate } for pid=912 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 00:25:55.415000 audit[912]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859a5 a2=1ed a3=0 items=2 ppid=895 pid=912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:25:55.415000 audit: CWD cwd="/" Sep 6 00:25:55.415000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:55.415000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:55.415000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:25:57.859000 audit: BPF prog-id=12 op=LOAD Sep 6 00:25:57.859000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:25:57.859000 audit: BPF prog-id=13 op=LOAD Sep 6 00:25:57.859000 audit: BPF prog-id=14 op=LOAD Sep 6 00:25:57.859000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:25:57.859000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:25:57.859000 audit: BPF prog-id=15 op=LOAD Sep 6 00:25:57.859000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:25:57.859000 audit: BPF prog-id=16 op=LOAD Sep 6 00:25:57.860000 audit: BPF prog-id=17 op=LOAD Sep 6 00:25:57.860000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:25:57.860000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:25:57.860000 audit: BPF prog-id=18 op=LOAD Sep 6 00:25:57.860000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:25:57.860000 audit: BPF prog-id=19 op=LOAD Sep 6 00:25:57.861000 audit: BPF prog-id=20 op=LOAD Sep 6 00:25:57.861000 audit: BPF prog-id=16 op=UNLOAD Sep 
6 00:25:57.861000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:25:57.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:57.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:57.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:57.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:57.870000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:25:57.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:57.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:57.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:57.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:25:57.968000 audit: BPF prog-id=21 op=LOAD Sep 6 00:25:57.969000 audit: BPF prog-id=22 op=LOAD Sep 6 00:25:57.969000 audit: BPF prog-id=23 op=LOAD Sep 6 00:25:57.969000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:25:57.969000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:25:57.983000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:25:57.983000 audit[993]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff0b658f90 a2=4000 a3=7fff0b65902c items=0 ppid=1 pid=993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:25:57.983000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:25:57.857638 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:25:55.412517 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:25:57.857648 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 6 00:25:55.412801 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:25:57.861595 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 6 00:25:55.412817 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:25:55.412844 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 00:25:55.412853 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 00:25:55.412879 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 00:25:55.412890 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 00:25:57.987845 systemd[1]: Stopped verity-setup.service. Sep 6 00:25:55.413086 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 00:25:55.413122 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:25:57.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:25:55.413139 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:25:55.413919 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 00:25:55.413962 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 00:25:55.413982 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 00:25:55.413998 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 00:25:55.414018 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 00:25:55.414030 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 00:25:57.604222 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:57Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:25:57.604473 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:57Z" 
level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:25:57.604572 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:57Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:25:57.604719 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:57Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:25:57.604762 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:57Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 00:25:57.604812 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:25:57Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 00:25:57.990313 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:25:57.992857 systemd[1]: Started systemd-journald.service. 
Sep 6 00:25:57.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:57.993404 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:25:57.994263 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:25:57.995140 systemd[1]: Mounted media.mount. Sep 6 00:25:57.996009 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:25:57.996986 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:25:57.997988 systemd[1]: Mounted tmp.mount. Sep 6 00:25:57.998939 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:25:57.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.000090 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:25:58.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.001238 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:25:58.001422 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:25:58.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.002602 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 6 00:25:58.002771 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:25:58.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.003910 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:25:58.004071 systemd[1]: Finished modprobe@drm.service. Sep 6 00:25:58.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.005190 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:25:58.005363 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:25:58.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.006582 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:25:58.006742 systemd[1]: Finished modprobe@fuse.service. 
Sep 6 00:25:58.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.007853 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:25:58.007994 systemd[1]: Finished modprobe@loop.service. Sep 6 00:25:58.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.009149 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:25:58.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.010409 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:25:58.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.011641 systemd[1]: Finished systemd-remount-fs.service. 
Sep 6 00:25:58.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.012935 systemd[1]: Reached target network-pre.target. Sep 6 00:25:58.014921 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:25:58.016736 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:25:58.017902 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:25:58.019576 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:25:58.021702 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:25:58.023230 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:25:58.025496 systemd-journald[993]: Time spent on flushing to /var/log/journal/2c9edaf827624617b3790a234dec266d is 19.479ms for 1171 entries. Sep 6 00:25:58.025496 systemd-journald[993]: System Journal (/var/log/journal/2c9edaf827624617b3790a234dec266d) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:25:58.069104 systemd-journald[993]: Received client request to flush runtime journal. Sep 6 00:25:58.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:25:58.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.024305 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:25:58.027226 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:25:58.028347 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:25:58.030571 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:25:58.033933 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:25:58.035385 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:25:58.044828 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:25:58.046147 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:25:58.053648 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:25:58.055226 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:25:58.057709 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:25:58.066090 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:25:58.068224 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:25:58.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.070008 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:25:58.076836 udevadm[1019]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 6 00:25:58.081843 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Sep 6 00:25:58.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.534100 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:25:58.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.535000 audit: BPF prog-id=24 op=LOAD Sep 6 00:25:58.535000 audit: BPF prog-id=25 op=LOAD Sep 6 00:25:58.535000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:25:58.535000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:25:58.536765 systemd[1]: Starting systemd-udevd.service... Sep 6 00:25:58.552772 systemd-udevd[1020]: Using default interface naming scheme 'v252'. Sep 6 00:25:58.566163 systemd[1]: Started systemd-udevd.service. Sep 6 00:25:58.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.568000 audit: BPF prog-id=26 op=LOAD Sep 6 00:25:58.569664 systemd[1]: Starting systemd-networkd.service... Sep 6 00:25:58.574000 audit: BPF prog-id=27 op=LOAD Sep 6 00:25:58.574000 audit: BPF prog-id=28 op=LOAD Sep 6 00:25:58.574000 audit: BPF prog-id=29 op=LOAD Sep 6 00:25:58.575246 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:25:58.604309 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 6 00:25:58.612624 systemd[1]: Started systemd-userdbd.service. Sep 6 00:25:58.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:25:58.623583 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:25:58.642315 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 6 00:25:58.649317 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:25:58.665217 systemd-networkd[1030]: lo: Link UP Sep 6 00:25:58.665228 systemd-networkd[1030]: lo: Gained carrier Sep 6 00:25:58.665756 systemd-networkd[1030]: Enumeration completed Sep 6 00:25:58.665932 systemd[1]: Started systemd-networkd.service. Sep 6 00:25:58.666228 systemd-networkd[1030]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:25:58.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.667702 systemd-networkd[1030]: eth0: Link UP Sep 6 00:25:58.667782 systemd-networkd[1030]: eth0: Gained carrier Sep 6 00:25:58.668000 audit[1023]: AVC avc: denied { confidentiality } for pid=1023 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:25:58.668000 audit[1023]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=563c085aa980 a1=338ec a2=7f07b49d8bc5 a3=5 items=110 ppid=1020 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:25:58.668000 audit: CWD cwd="/" Sep 6 00:25:58.668000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=1 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=2 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=3 name=(null) inode=14724 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=4 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=5 name=(null) inode=14725 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=6 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=7 name=(null) inode=14726 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=8 name=(null) inode=14726 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=9 name=(null) inode=14727 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=10 name=(null) inode=14726 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=11 name=(null) inode=14728 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=12 name=(null) inode=14726 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=13 name=(null) inode=14729 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=14 name=(null) inode=14726 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=15 name=(null) inode=14730 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=16 name=(null) inode=14726 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=17 name=(null) inode=14731 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=18 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=19 name=(null) inode=14732 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH 
item=20 name=(null) inode=14732 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=21 name=(null) inode=14733 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=22 name=(null) inode=14732 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=23 name=(null) inode=14734 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=24 name=(null) inode=14732 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=25 name=(null) inode=14735 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=26 name=(null) inode=14732 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=27 name=(null) inode=14736 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=28 name=(null) inode=14732 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=29 name=(null) inode=14737 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=30 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=31 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=32 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=33 name=(null) inode=14739 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=34 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=35 name=(null) inode=14740 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=36 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=37 name=(null) inode=14741 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=38 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=39 name=(null) inode=14742 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=40 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=41 name=(null) inode=14743 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=42 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=43 name=(null) inode=14744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=44 name=(null) inode=14744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=45 name=(null) inode=14745 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=46 name=(null) inode=14744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=47 name=(null) inode=14746 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=48 name=(null) inode=14744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=49 name=(null) inode=14747 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=50 name=(null) inode=14744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=51 name=(null) inode=14748 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=52 name=(null) inode=14744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=53 name=(null) inode=14749 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=55 name=(null) inode=14750 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=56 name=(null) inode=14750 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Sep 6 00:25:58.668000 audit: PATH item=57 name=(null) inode=14751 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=58 name=(null) inode=14750 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=59 name=(null) inode=14752 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=60 name=(null) inode=14750 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=61 name=(null) inode=14753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=62 name=(null) inode=14753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=63 name=(null) inode=14754 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=64 name=(null) inode=14753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=65 name=(null) inode=14755 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=66 
name=(null) inode=14753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=67 name=(null) inode=14756 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=68 name=(null) inode=14753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=69 name=(null) inode=14757 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=70 name=(null) inode=14753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=71 name=(null) inode=14758 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=72 name=(null) inode=14750 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=73 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=74 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=75 name=(null) inode=14760 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=76 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=77 name=(null) inode=14761 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=78 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=79 name=(null) inode=14762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=80 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=81 name=(null) inode=14763 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=82 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=83 name=(null) inode=14764 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=84 name=(null) inode=14750 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=85 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=86 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=87 name=(null) inode=14766 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=88 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=89 name=(null) inode=14767 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=90 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=91 name=(null) inode=14768 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=92 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=93 name=(null) inode=14769 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=94 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=95 name=(null) inode=14770 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=96 name=(null) inode=14750 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=97 name=(null) inode=14771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=98 name=(null) inode=14771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=99 name=(null) inode=14772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=100 name=(null) inode=14771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=101 name=(null) inode=14773 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=102 name=(null) inode=14771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=103 name=(null) inode=14774 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=104 name=(null) inode=14771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=105 name=(null) inode=14775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=106 name=(null) inode=14771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=107 name=(null) inode=14776 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PATH item=109 name=(null) inode=14221 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:25:58.668000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 00:25:58.687042 systemd-networkd[1030]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:25:58.697411 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 6 00:25:58.703485 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 6 00:25:58.703607 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 6 00:25:58.703718 kernel: i2c i2c-0: 
Memory type 0x07 not supported yet, not instantiating SPD Sep 6 00:25:58.710308 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 6 00:25:58.716308 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:25:58.768731 kernel: kvm: Nested Virtualization enabled Sep 6 00:25:58.768829 kernel: SVM: kvm: Nested Paging enabled Sep 6 00:25:58.768844 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 6 00:25:58.768876 kernel: SVM: Virtual GIF supported Sep 6 00:25:58.788312 kernel: EDAC MC: Ver: 3.0.0 Sep 6 00:25:58.810686 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:25:58.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.812829 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:25:58.823413 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:25:58.857412 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:25:58.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.858553 systemd[1]: Reached target cryptsetup.target. Sep 6 00:25:58.860420 systemd[1]: Starting lvm2-activation.service... Sep 6 00:25:58.865089 lvm[1055]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:25:58.894607 systemd[1]: Finished lvm2-activation.service. Sep 6 00:25:58.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.895644 systemd[1]: Reached target local-fs-pre.target. 
Sep 6 00:25:58.896480 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:25:58.896503 systemd[1]: Reached target local-fs.target. Sep 6 00:25:58.897300 systemd[1]: Reached target machines.target. Sep 6 00:25:58.899120 systemd[1]: Starting ldconfig.service... Sep 6 00:25:58.900042 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:25:58.900084 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:25:58.900856 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:25:58.902397 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:25:58.906480 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:25:58.909077 systemd[1]: Starting systemd-sysext.service... Sep 6 00:25:58.909523 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1057 (bootctl) Sep 6 00:25:58.910734 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:25:58.915031 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:25:58.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.919628 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:25:58.923963 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:25:58.924133 systemd[1]: Unmounted usr-share-oem.mount. 
Sep 6 00:25:58.934312 kernel: loop0: detected capacity change from 0 to 221472 Sep 6 00:25:58.964669 systemd-fsck[1065]: fsck.fat 4.2 (2021-01-31) Sep 6 00:25:58.964669 systemd-fsck[1065]: /dev/vda1: 791 files, 120781/258078 clusters Sep 6 00:25:58.965880 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:25:58.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:58.968940 systemd[1]: Mounting boot.mount... Sep 6 00:25:59.135202 systemd[1]: Mounted boot.mount. Sep 6 00:25:59.185314 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:25:59.190097 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:25:59.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.202310 kernel: loop1: detected capacity change from 0 to 221472 Sep 6 00:25:59.214872 (sd-sysext)[1070]: Using extensions 'kubernetes'. Sep 6 00:25:59.215196 (sd-sysext)[1070]: Merged extensions into '/usr'. Sep 6 00:25:59.240963 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:25:59.242590 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:25:59.243825 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:25:59.245070 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:25:59.247295 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:25:59.249680 systemd[1]: Starting modprobe@loop.service... Sep 6 00:25:59.250847 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 6 00:25:59.250999 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:25:59.251143 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:25:59.252348 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:25:59.252497 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:25:59.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.256025 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:25:59.257357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:25:59.257495 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:25:59.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.259113 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:25:59.259240 systemd[1]: Finished modprobe@loop.service. 
Sep 6 00:25:59.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.260986 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:25:59.261113 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:25:59.262132 systemd[1]: Finished systemd-sysext.service. Sep 6 00:25:59.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.264579 systemd[1]: Starting ensure-sysext.service... Sep 6 00:25:59.266731 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:25:59.271586 systemd[1]: Reloading. Sep 6 00:25:59.278384 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:25:59.279618 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:25:59.281469 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:25:59.291127 ldconfig[1056]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Sep 6 00:25:59.331823 /usr/lib/systemd/system-generators/torcx-generator[1096]: time="2025-09-06T00:25:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:25:59.332276 /usr/lib/systemd/system-generators/torcx-generator[1096]: time="2025-09-06T00:25:59Z" level=info msg="torcx already run" Sep 6 00:25:59.406957 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:25:59.406979 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:25:59.424707 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:25:59.481627 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Sep 6 00:25:59.483000 audit: BPF prog-id=30 op=LOAD Sep 6 00:25:59.483000 audit: BPF prog-id=31 op=LOAD Sep 6 00:25:59.483000 audit: BPF prog-id=24 op=UNLOAD Sep 6 00:25:59.483000 audit: BPF prog-id=25 op=UNLOAD Sep 6 00:25:59.484000 audit: BPF prog-id=32 op=LOAD Sep 6 00:25:59.484000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:25:59.484000 audit: BPF prog-id=33 op=LOAD Sep 6 00:25:59.484000 audit: BPF prog-id=34 op=LOAD Sep 6 00:25:59.484000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:25:59.484000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:25:59.485000 audit: BPF prog-id=35 op=LOAD Sep 6 00:25:59.485000 audit: BPF prog-id=27 op=UNLOAD Sep 6 00:25:59.485000 audit: BPF prog-id=36 op=LOAD Sep 6 00:25:59.485000 audit: BPF prog-id=37 op=LOAD Sep 6 00:25:59.485000 audit: BPF prog-id=28 op=UNLOAD Sep 6 00:25:59.485000 audit: BPF prog-id=29 op=UNLOAD Sep 6 00:25:59.487000 audit: BPF prog-id=38 op=LOAD Sep 6 00:25:59.487000 audit: BPF prog-id=26 op=UNLOAD Sep 6 00:25:59.492366 systemd[1]: Finished ldconfig.service. Sep 6 00:25:59.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.493754 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:25:59.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.496259 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 00:25:59.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.500635 systemd[1]: Starting audit-rules.service... 
Sep 6 00:25:59.502652 systemd[1]: Starting clean-ca-certificates.service... Sep 6 00:25:59.504982 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 00:25:59.507000 audit: BPF prog-id=39 op=LOAD Sep 6 00:25:59.508760 systemd[1]: Starting systemd-resolved.service... Sep 6 00:25:59.510000 audit: BPF prog-id=40 op=LOAD Sep 6 00:25:59.511647 systemd[1]: Starting systemd-timesyncd.service... Sep 6 00:25:59.514048 systemd[1]: Starting systemd-update-utmp.service... Sep 6 00:25:59.515642 systemd[1]: Finished clean-ca-certificates.service. Sep 6 00:25:59.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.519000 audit[1151]: SYSTEM_BOOT pid=1151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.526863 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:25:59.527201 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:25:59.529423 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:25:59.531795 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:25:59.534258 systemd[1]: Starting modprobe@loop.service... Sep 6 00:25:59.535459 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:25:59.535621 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 6 00:25:59.535746 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:25:59.535847 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:25:59.537182 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:25:59.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:25:59.538000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:25:59.538000 audit[1162]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe1499b0d0 a2=420 a3=0 items=0 ppid=1139 pid=1162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:25:59.538000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:25:59.540607 augenrules[1162]: No rules Sep 6 00:25:59.539021 systemd[1]: Finished systemd-update-utmp.service. Sep 6 00:25:59.541508 systemd[1]: Finished audit-rules.service. Sep 6 00:25:59.543153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:25:59.543509 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:25:59.545081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:25:59.545224 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:25:59.546920 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:25:59.547059 systemd[1]: Finished modprobe@loop.service. 
Sep 6 00:25:59.551422 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:25:59.551750 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:25:59.553457 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:25:59.555812 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:25:59.557946 systemd[1]: Starting modprobe@loop.service... Sep 6 00:25:59.559029 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:25:59.559186 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:25:59.560772 systemd[1]: Starting systemd-update-done.service... Sep 6 00:25:59.561761 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:25:59.561890 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:25:59.563277 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:25:59.563959 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:25:59.565650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:25:59.565806 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:25:59.567319 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:25:59.567489 systemd[1]: Finished modprobe@loop.service. Sep 6 00:25:59.569065 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:25:59.569239 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Sep 6 00:25:59.572672 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:25:59.572990 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:25:59.575134 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:25:59.577615 systemd[1]: Starting modprobe@drm.service... Sep 6 00:25:59.579530 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:25:59.579685 systemd-resolved[1146]: Positive Trust Anchors: Sep 6 00:25:59.579693 systemd-resolved[1146]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:25:59.579718 systemd-resolved[1146]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:25:59.581695 systemd[1]: Starting modprobe@loop.service... Sep 6 00:25:59.582754 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:25:59.582918 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:25:59.584131 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:25:59.585274 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:25:59.585417 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 6 00:25:59.586360 systemd[1]: Started systemd-timesyncd.service. Sep 6 00:25:59.587928 systemd[1]: Finished systemd-update-done.service. Sep 6 00:26:00.000671 systemd-timesyncd[1150]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 6 00:26:00.000677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:26:00.000816 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:26:00.001132 systemd-timesyncd[1150]: Initial clock synchronization to Sat 2025-09-06 00:26:00.000563 UTC. Sep 6 00:26:00.002290 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:26:00.002421 systemd[1]: Finished modprobe@drm.service. Sep 6 00:26:00.003678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:26:00.003775 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:26:00.005093 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:26:00.005188 systemd[1]: Finished modprobe@loop.service. Sep 6 00:26:00.005720 systemd-resolved[1146]: Defaulting to hostname 'linux'. Sep 6 00:26:00.006699 systemd[1]: Reached target time-set.target. Sep 6 00:26:00.007677 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:26:00.007702 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:26:00.007804 systemd[1]: Started systemd-resolved.service. Sep 6 00:26:00.008812 systemd[1]: Finished ensure-sysext.service. Sep 6 00:26:00.010327 systemd[1]: Reached target network.target. Sep 6 00:26:00.011103 systemd[1]: Reached target nss-lookup.target. Sep 6 00:26:00.011894 systemd[1]: Reached target sysinit.target. Sep 6 00:26:00.012710 systemd[1]: Started motdgen.path. Sep 6 00:26:00.013390 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:26:00.014574 systemd[1]: Started logrotate.timer. Sep 6 00:26:00.015305 systemd[1]: Started mdadm.timer. 
Sep 6 00:26:00.015968 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:26:00.016796 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:26:00.016815 systemd[1]: Reached target paths.target. Sep 6 00:26:00.017536 systemd[1]: Reached target timers.target. Sep 6 00:26:00.018560 systemd[1]: Listening on dbus.socket. Sep 6 00:26:00.020139 systemd[1]: Starting docker.socket... Sep 6 00:26:00.023118 systemd[1]: Listening on sshd.socket. Sep 6 00:26:00.023941 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:26:00.024273 systemd[1]: Listening on docker.socket. Sep 6 00:26:00.025057 systemd[1]: Reached target sockets.target. Sep 6 00:26:00.025831 systemd[1]: Reached target basic.target. Sep 6 00:26:00.026610 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:26:00.026635 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:26:00.027425 systemd[1]: Starting containerd.service... Sep 6 00:26:00.029069 systemd[1]: Starting dbus.service... Sep 6 00:26:00.030936 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:26:00.032906 systemd[1]: Starting extend-filesystems.service... Sep 6 00:26:00.034724 jq[1182]: false Sep 6 00:26:00.034203 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:26:00.035288 systemd[1]: Starting motdgen.service... Sep 6 00:26:00.037258 systemd[1]: Starting prepare-helm.service... Sep 6 00:26:00.039506 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 00:26:00.042112 systemd[1]: Starting sshd-keygen.service... 
Sep 6 00:26:00.048542 systemd[1]: Starting systemd-logind.service... Sep 6 00:26:00.049660 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:26:00.049724 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:26:00.050282 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 00:26:00.051221 systemd[1]: Starting update-engine.service... Sep 6 00:26:00.053742 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:26:00.054865 dbus-daemon[1181]: [system] SELinux support is enabled Sep 6 00:26:00.056177 systemd[1]: Started dbus.service. Sep 6 00:26:00.065812 jq[1201]: true Sep 6 00:26:00.060446 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:26:00.060645 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:26:00.060936 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:26:00.061093 systemd[1]: Finished motdgen.service. Sep 6 00:26:00.063063 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:26:00.063239 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Sep 6 00:26:00.070305 jq[1205]: true Sep 6 00:26:00.073915 extend-filesystems[1183]: Found loop1 Sep 6 00:26:00.074969 extend-filesystems[1183]: Found sr0 Sep 6 00:26:00.074969 extend-filesystems[1183]: Found vda Sep 6 00:26:00.074969 extend-filesystems[1183]: Found vda1 Sep 6 00:26:00.074969 extend-filesystems[1183]: Found vda2 Sep 6 00:26:00.074969 extend-filesystems[1183]: Found vda3 Sep 6 00:26:00.074969 extend-filesystems[1183]: Found usr Sep 6 00:26:00.074969 extend-filesystems[1183]: Found vda4 Sep 6 00:26:00.074969 extend-filesystems[1183]: Found vda6 Sep 6 00:26:00.074969 extend-filesystems[1183]: Found vda7 Sep 6 00:26:00.074969 extend-filesystems[1183]: Found vda9 Sep 6 00:26:00.074969 extend-filesystems[1183]: Checking size of /dev/vda9 Sep 6 00:26:00.082886 tar[1203]: linux-amd64/helm Sep 6 00:26:00.081464 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:26:00.081493 systemd[1]: Reached target system-config.target. Sep 6 00:26:00.086793 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:26:00.086819 systemd[1]: Reached target user-config.target. Sep 6 00:26:00.101012 extend-filesystems[1183]: Resized partition /dev/vda9 Sep 6 00:26:00.105034 extend-filesystems[1228]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:26:00.108307 update_engine[1199]: I0906 00:26:00.107154 1199 main.cc:92] Flatcar Update Engine starting Sep 6 00:26:00.110353 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 6 00:26:00.116862 systemd[1]: Started update-engine.service. Sep 6 00:26:00.119742 systemd[1]: Started locksmithd.service. 
Sep 6 00:26:00.130350 update_engine[1199]: I0906 00:26:00.116937 1199 update_check_scheduler.cc:74] Next update check in 2m4s Sep 6 00:26:00.128378 systemd-logind[1197]: Watching system buttons on /dev/input/event1 (Power Button) Sep 6 00:26:00.128394 systemd-logind[1197]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 00:26:00.129288 systemd-logind[1197]: New seat seat0. Sep 6 00:26:00.132861 systemd[1]: Started systemd-logind.service. Sep 6 00:26:00.133917 env[1206]: time="2025-09-06T00:26:00.133843396Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:26:00.140355 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 6 00:26:00.162538 env[1206]: time="2025-09-06T00:26:00.162446206Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:26:00.169801 env[1206]: time="2025-09-06T00:26:00.168772889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:26:00.169568 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:26:00.169924 extend-filesystems[1228]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 6 00:26:00.169924 extend-filesystems[1228]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 6 00:26:00.169924 extend-filesystems[1228]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 6 00:26:00.169708 systemd[1]: Finished extend-filesystems.service. Sep 6 00:26:00.177248 env[1206]: time="2025-09-06T00:26:00.174787776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:26:00.177248 env[1206]: time="2025-09-06T00:26:00.174846176Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:26:00.177248 env[1206]: time="2025-09-06T00:26:00.175196242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:26:00.177248 env[1206]: time="2025-09-06T00:26:00.175216771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 00:26:00.177248 env[1206]: time="2025-09-06T00:26:00.175231448Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:26:00.177248 env[1206]: time="2025-09-06T00:26:00.175243150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:26:00.177248 env[1206]: time="2025-09-06T00:26:00.175348578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:26:00.177248 env[1206]: time="2025-09-06T00:26:00.175666714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:26:00.177248 env[1206]: time="2025-09-06T00:26:00.175817337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:26:00.177248 env[1206]: time="2025-09-06T00:26:00.175850118Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:26:00.177543 extend-filesystems[1183]: Resized filesystem in /dev/vda9 Sep 6 00:26:00.178601 env[1206]: time="2025-09-06T00:26:00.175901775Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:26:00.178601 env[1206]: time="2025-09-06T00:26:00.175931100Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:26:00.184616 bash[1232]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:26:00.185519 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.188646482Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.188696626Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.188714619Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.188798146Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.188819366Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.188888846Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.188910136Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.188928570Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.188944811Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.188961993Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.188978194Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.188994805Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.189124187Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:26:00.189446 env[1206]: time="2025-09-06T00:26:00.189210499Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:26:00.189762 env[1206]: time="2025-09-06T00:26:00.189581254Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:26:00.189762 env[1206]: time="2025-09-06T00:26:00.189644483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:26:00.189762 env[1206]: time="2025-09-06T00:26:00.189661274Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Sep 6 00:26:00.189762 env[1206]: time="2025-09-06T00:26:00.189725885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:26:00.189762 env[1206]: time="2025-09-06T00:26:00.189742717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:26:00.189762 env[1206]: time="2025-09-06T00:26:00.189758847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:26:00.189876 env[1206]: time="2025-09-06T00:26:00.189775428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:26:00.189876 env[1206]: time="2025-09-06T00:26:00.189789765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:26:00.189876 env[1206]: time="2025-09-06T00:26:00.189803281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:26:00.189876 env[1206]: time="2025-09-06T00:26:00.189816135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 00:26:00.189876 env[1206]: time="2025-09-06T00:26:00.189827927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:26:00.189876 env[1206]: time="2025-09-06T00:26:00.189844418Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:26:00.193409 env[1206]: time="2025-09-06T00:26:00.193364307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:26:00.193470 env[1206]: time="2025-09-06T00:26:00.193410654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 6 00:26:00.193470 env[1206]: time="2025-09-06T00:26:00.193429720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:26:00.193470 env[1206]: time="2025-09-06T00:26:00.193445019Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:26:00.193557 env[1206]: time="2025-09-06T00:26:00.193465788Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:26:00.193557 env[1206]: time="2025-09-06T00:26:00.193492428Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:26:00.193557 env[1206]: time="2025-09-06T00:26:00.193514379Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:26:00.193557 env[1206]: time="2025-09-06T00:26:00.193555175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 00:26:00.193877 env[1206]: time="2025-09-06T00:26:00.193803711Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:26:00.194627 env[1206]: time="2025-09-06T00:26:00.193884002Z" level=info msg="Connect containerd service" Sep 6 00:26:00.194627 env[1206]: time="2025-09-06T00:26:00.193945167Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:26:00.194737 env[1206]: time="2025-09-06T00:26:00.194708999Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:26:00.195528 env[1206]: time="2025-09-06T00:26:00.194862897Z" level=info msg="Start subscribing containerd event" Sep 6 00:26:00.195528 env[1206]: time="2025-09-06T00:26:00.194957194Z" level=info msg="Start recovering state" Sep 6 00:26:00.195528 env[1206]: time="2025-09-06T00:26:00.195010153Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:26:00.195528 env[1206]: time="2025-09-06T00:26:00.195035301Z" level=info msg="Start event monitor" Sep 6 00:26:00.195528 env[1206]: time="2025-09-06T00:26:00.195052473Z" level=info msg="Start snapshots syncer" Sep 6 00:26:00.195528 env[1206]: time="2025-09-06T00:26:00.195070567Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:26:00.195528 env[1206]: time="2025-09-06T00:26:00.195057432Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:26:00.195528 env[1206]: time="2025-09-06T00:26:00.195081227Z" level=info msg="Start streaming server" Sep 6 00:26:00.195528 env[1206]: time="2025-09-06T00:26:00.195367473Z" level=info msg="containerd successfully booted in 0.062267s" Sep 6 00:26:00.195209 systemd[1]: Started containerd.service. 
Sep 6 00:26:00.212940 locksmithd[1233]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:26:00.229502 systemd-networkd[1030]: eth0: Gained IPv6LL Sep 6 00:26:00.231178 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:26:00.232605 systemd[1]: Reached target network-online.target. Sep 6 00:26:00.235075 systemd[1]: Starting kubelet.service... Sep 6 00:26:00.658180 tar[1203]: linux-amd64/LICENSE Sep 6 00:26:00.658360 tar[1203]: linux-amd64/README.md Sep 6 00:26:00.664072 systemd[1]: Finished prepare-helm.service. Sep 6 00:26:01.180692 sshd_keygen[1200]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:26:01.209117 systemd[1]: Finished sshd-keygen.service. Sep 6 00:26:01.212026 systemd[1]: Starting issuegen.service... Sep 6 00:26:01.218500 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:26:01.218710 systemd[1]: Finished issuegen.service. Sep 6 00:26:01.221596 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:26:01.229583 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:26:01.232703 systemd[1]: Started getty@tty1.service. Sep 6 00:26:01.235273 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:26:01.236593 systemd[1]: Reached target getty.target. Sep 6 00:26:01.585542 systemd[1]: Started kubelet.service. Sep 6 00:26:01.586972 systemd[1]: Reached target multi-user.target. Sep 6 00:26:01.589486 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:26:01.597802 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:26:01.598023 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:26:01.599318 systemd[1]: Startup finished in 728ms (kernel) + 5.174s (initrd) + 6.284s (userspace) = 12.187s. 
Sep 6 00:26:02.434377 kubelet[1263]: E0906 00:26:02.434254 1263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:26:02.436273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:26:02.436415 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:26:02.436810 systemd[1]: kubelet.service: Consumed 2.033s CPU time.
Sep 6 00:26:03.324102 systemd[1]: Created slice system-sshd.slice.
Sep 6 00:26:03.325647 systemd[1]: Started sshd@0-10.0.0.101:22-10.0.0.1:41350.service.
Sep 6 00:26:03.370962 sshd[1272]: Accepted publickey for core from 10.0.0.1 port 41350 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:26:03.372892 sshd[1272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:26:03.384170 systemd-logind[1197]: New session 1 of user core.
Sep 6 00:26:03.385481 systemd[1]: Created slice user-500.slice.
Sep 6 00:26:03.386942 systemd[1]: Starting user-runtime-dir@500.service...
Sep 6 00:26:03.398539 systemd[1]: Finished user-runtime-dir@500.service.
Sep 6 00:26:03.400557 systemd[1]: Starting user@500.service...
Sep 6 00:26:03.404131 (systemd)[1275]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:26:03.507886 systemd[1275]: Queued start job for default target default.target.
Sep 6 00:26:03.508534 systemd[1275]: Reached target paths.target.
Sep 6 00:26:03.508566 systemd[1275]: Reached target sockets.target.
Sep 6 00:26:03.508583 systemd[1275]: Reached target timers.target.
Sep 6 00:26:03.508599 systemd[1275]: Reached target basic.target.
Sep 6 00:26:03.508654 systemd[1275]: Reached target default.target.
Sep 6 00:26:03.508687 systemd[1275]: Startup finished in 97ms.
Sep 6 00:26:03.508881 systemd[1]: Started user@500.service.
Sep 6 00:26:03.510394 systemd[1]: Started session-1.scope.
Sep 6 00:26:03.565979 systemd[1]: Started sshd@1-10.0.0.101:22-10.0.0.1:41362.service.
Sep 6 00:26:03.605500 sshd[1284]: Accepted publickey for core from 10.0.0.1 port 41362 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:26:03.607222 sshd[1284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:26:03.611972 systemd-logind[1197]: New session 2 of user core.
Sep 6 00:26:03.613419 systemd[1]: Started session-2.scope.
Sep 6 00:26:03.674811 sshd[1284]: pam_unix(sshd:session): session closed for user core
Sep 6 00:26:03.678428 systemd[1]: sshd@1-10.0.0.101:22-10.0.0.1:41362.service: Deactivated successfully.
Sep 6 00:26:03.679235 systemd[1]: session-2.scope: Deactivated successfully.
Sep 6 00:26:03.679967 systemd-logind[1197]: Session 2 logged out. Waiting for processes to exit.
Sep 6 00:26:03.681597 systemd[1]: Started sshd@2-10.0.0.101:22-10.0.0.1:41378.service.
Sep 6 00:26:03.682601 systemd-logind[1197]: Removed session 2.
Sep 6 00:26:03.721810 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 41378 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:26:03.723366 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:26:03.728057 systemd-logind[1197]: New session 3 of user core.
Sep 6 00:26:03.729219 systemd[1]: Started session-3.scope.
Sep 6 00:26:03.783647 sshd[1290]: pam_unix(sshd:session): session closed for user core
Sep 6 00:26:03.787330 systemd[1]: sshd@2-10.0.0.101:22-10.0.0.1:41378.service: Deactivated successfully.
Sep 6 00:26:03.788136 systemd[1]: session-3.scope: Deactivated successfully.
Sep 6 00:26:03.788910 systemd-logind[1197]: Session 3 logged out. Waiting for processes to exit.
Sep 6 00:26:03.790502 systemd[1]: Started sshd@3-10.0.0.101:22-10.0.0.1:41380.service.
Sep 6 00:26:03.791748 systemd-logind[1197]: Removed session 3.
Sep 6 00:26:03.831630 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 41380 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:26:03.833250 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:26:03.839006 systemd-logind[1197]: New session 4 of user core.
Sep 6 00:26:03.841102 systemd[1]: Started session-4.scope.
Sep 6 00:26:03.902122 sshd[1296]: pam_unix(sshd:session): session closed for user core
Sep 6 00:26:03.906256 systemd[1]: sshd@3-10.0.0.101:22-10.0.0.1:41380.service: Deactivated successfully.
Sep 6 00:26:03.906911 systemd[1]: session-4.scope: Deactivated successfully.
Sep 6 00:26:03.907616 systemd-logind[1197]: Session 4 logged out. Waiting for processes to exit.
Sep 6 00:26:03.908832 systemd[1]: Started sshd@4-10.0.0.101:22-10.0.0.1:41388.service.
Sep 6 00:26:03.909777 systemd-logind[1197]: Removed session 4.
Sep 6 00:26:03.955562 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 41388 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:26:03.957481 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:26:03.962256 systemd-logind[1197]: New session 5 of user core.
Sep 6 00:26:03.963064 systemd[1]: Started session-5.scope.
Sep 6 00:26:04.029299 sudo[1305]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 6 00:26:04.029648 sudo[1305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 6 00:26:04.098957 systemd[1]: Starting docker.service...
Sep 6 00:26:04.258273 env[1317]: time="2025-09-06T00:26:04.258101358Z" level=info msg="Starting up"
Sep 6 00:26:04.260159 env[1317]: time="2025-09-06T00:26:04.260080940Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 6 00:26:04.260159 env[1317]: time="2025-09-06T00:26:04.260118561Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 6 00:26:04.260159 env[1317]: time="2025-09-06T00:26:04.260147144Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 6 00:26:04.260159 env[1317]: time="2025-09-06T00:26:04.260161702Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 6 00:26:04.271797 env[1317]: time="2025-09-06T00:26:04.271746263Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 6 00:26:04.271797 env[1317]: time="2025-09-06T00:26:04.271781108Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 6 00:26:04.272144 env[1317]: time="2025-09-06T00:26:04.271811355Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 6 00:26:04.272144 env[1317]: time="2025-09-06T00:26:04.271826643Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 6 00:26:04.278589 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport348644446-merged.mount: Deactivated successfully.
Sep 6 00:26:05.294278 env[1317]: time="2025-09-06T00:26:05.294205884Z" level=info msg="Loading containers: start."
Sep 6 00:26:05.428638 kernel: Initializing XFRM netlink socket
Sep 6 00:26:05.468042 env[1317]: time="2025-09-06T00:26:05.467976123Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 6 00:26:05.532621 systemd-networkd[1030]: docker0: Link UP
Sep 6 00:26:05.555512 env[1317]: time="2025-09-06T00:26:05.555372201Z" level=info msg="Loading containers: done."
Sep 6 00:26:05.567199 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2105627708-merged.mount: Deactivated successfully.
Sep 6 00:26:05.570032 env[1317]: time="2025-09-06T00:26:05.569967547Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 6 00:26:05.570236 env[1317]: time="2025-09-06T00:26:05.570200294Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 6 00:26:05.570372 env[1317]: time="2025-09-06T00:26:05.570325829Z" level=info msg="Daemon has completed initialization"
Sep 6 00:26:05.590502 systemd[1]: Started docker.service.
Sep 6 00:26:05.595846 env[1317]: time="2025-09-06T00:26:05.595781679Z" level=info msg="API listen on /run/docker.sock"
Sep 6 00:26:06.372460 env[1206]: time="2025-09-06T00:26:06.372406370Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 6 00:26:07.010659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount261399918.mount: Deactivated successfully.
Sep 6 00:26:08.337192 env[1206]: time="2025-09-06T00:26:08.337125608Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:08.338896 env[1206]: time="2025-09-06T00:26:08.338866051Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:08.340825 env[1206]: time="2025-09-06T00:26:08.340797272Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:08.342570 env[1206]: time="2025-09-06T00:26:08.342538948Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:08.343492 env[1206]: time="2025-09-06T00:26:08.343460726Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\""
Sep 6 00:26:08.344088 env[1206]: time="2025-09-06T00:26:08.344062054Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 6 00:26:09.972158 env[1206]: time="2025-09-06T00:26:09.972093813Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:09.973938 env[1206]: time="2025-09-06T00:26:09.973883809Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:09.975607 env[1206]: time="2025-09-06T00:26:09.975561745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:09.977076 env[1206]: time="2025-09-06T00:26:09.977051769Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:09.977735 env[1206]: time="2025-09-06T00:26:09.977698982Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\""
Sep 6 00:26:09.978273 env[1206]: time="2025-09-06T00:26:09.978204661Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 6 00:26:12.335163 env[1206]: time="2025-09-06T00:26:12.335094398Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:12.337352 env[1206]: time="2025-09-06T00:26:12.337283583Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:12.339034 env[1206]: time="2025-09-06T00:26:12.339007595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:12.340812 env[1206]: time="2025-09-06T00:26:12.340771002Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:12.341585 env[1206]: time="2025-09-06T00:26:12.341547838Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 6 00:26:12.342065 env[1206]: time="2025-09-06T00:26:12.342037296Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 6 00:26:12.687418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 6 00:26:12.687648 systemd[1]: Stopped kubelet.service.
Sep 6 00:26:12.687692 systemd[1]: kubelet.service: Consumed 2.033s CPU time.
Sep 6 00:26:12.689265 systemd[1]: Starting kubelet.service...
Sep 6 00:26:12.808455 systemd[1]: Started kubelet.service.
Sep 6 00:26:12.904626 kubelet[1453]: E0906 00:26:12.904555 1453 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:26:12.907610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:26:12.907725 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:26:13.741176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386682534.mount: Deactivated successfully.
Sep 6 00:26:14.919692 env[1206]: time="2025-09-06T00:26:14.919601138Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:14.921918 env[1206]: time="2025-09-06T00:26:14.921817313Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:14.923593 env[1206]: time="2025-09-06T00:26:14.923546245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:14.925386 env[1206]: time="2025-09-06T00:26:14.925346580Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:14.925882 env[1206]: time="2025-09-06T00:26:14.925837180Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 6 00:26:14.926953 env[1206]: time="2025-09-06T00:26:14.926900654Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 6 00:26:15.475932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671549067.mount: Deactivated successfully.
Sep 6 00:26:16.485393 env[1206]: time="2025-09-06T00:26:16.485323747Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:16.487373 env[1206]: time="2025-09-06T00:26:16.487312115Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:16.489199 env[1206]: time="2025-09-06T00:26:16.489174377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:16.490767 env[1206]: time="2025-09-06T00:26:16.490745433Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:16.491427 env[1206]: time="2025-09-06T00:26:16.491394139Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 6 00:26:16.491974 env[1206]: time="2025-09-06T00:26:16.491940093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 6 00:26:17.005702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3874394143.mount: Deactivated successfully.
Sep 6 00:26:17.010812 env[1206]: time="2025-09-06T00:26:17.010769257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:17.012448 env[1206]: time="2025-09-06T00:26:17.012416165Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:17.013676 env[1206]: time="2025-09-06T00:26:17.013641663Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:17.014921 env[1206]: time="2025-09-06T00:26:17.014876378Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:17.015378 env[1206]: time="2025-09-06T00:26:17.015349054Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 6 00:26:17.015892 env[1206]: time="2025-09-06T00:26:17.015866935Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 6 00:26:17.608643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1894382833.mount: Deactivated successfully.
Sep 6 00:26:21.901805 env[1206]: time="2025-09-06T00:26:21.901746488Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:21.904089 env[1206]: time="2025-09-06T00:26:21.904045469Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:21.905972 env[1206]: time="2025-09-06T00:26:21.905945622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:21.907635 env[1206]: time="2025-09-06T00:26:21.907605674Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:26:21.908575 env[1206]: time="2025-09-06T00:26:21.908551367Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 6 00:26:23.158832 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 6 00:26:23.159018 systemd[1]: Stopped kubelet.service.
Sep 6 00:26:23.160406 systemd[1]: Starting kubelet.service...
Sep 6 00:26:23.265027 systemd[1]: Started kubelet.service.
Sep 6 00:26:23.330013 kubelet[1485]: E0906 00:26:23.329932 1485 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:26:23.331646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:26:23.331764 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:26:24.389967 systemd[1]: Stopped kubelet.service.
Sep 6 00:26:24.392092 systemd[1]: Starting kubelet.service...
Sep 6 00:26:24.416254 systemd[1]: Reloading.
Sep 6 00:26:24.476599 /usr/lib/systemd/system-generators/torcx-generator[1520]: time="2025-09-06T00:26:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:26:24.476627 /usr/lib/systemd/system-generators/torcx-generator[1520]: time="2025-09-06T00:26:24Z" level=info msg="torcx already run"
Sep 6 00:26:25.304941 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:26:25.304958 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:26:25.321812 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:26:25.395730 systemd[1]: Started kubelet.service.
Sep 6 00:26:25.396750 systemd[1]: Stopping kubelet.service...
Sep 6 00:26:25.396973 systemd[1]: kubelet.service: Deactivated successfully.
Sep 6 00:26:25.397106 systemd[1]: Stopped kubelet.service.
Sep 6 00:26:25.398271 systemd[1]: Starting kubelet.service...
Sep 6 00:26:25.488419 systemd[1]: Started kubelet.service.
Sep 6 00:26:25.535179 kubelet[1567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:26:25.535179 kubelet[1567]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 6 00:26:25.535179 kubelet[1567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:26:25.535562 kubelet[1567]: I0906 00:26:25.535225 1567 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 6 00:26:25.835161 kubelet[1567]: I0906 00:26:25.835120 1567 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 6 00:26:25.835161 kubelet[1567]: I0906 00:26:25.835155 1567 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 6 00:26:25.835586 kubelet[1567]: I0906 00:26:25.835567 1567 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 6 00:26:25.859568 kubelet[1567]: E0906 00:26:25.859523 1567 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:26:25.861304 kubelet[1567]: I0906 00:26:25.861275 1567 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 00:26:25.866634 kubelet[1567]: E0906 00:26:25.866590 1567 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 6 00:26:25.866634 kubelet[1567]: I0906 00:26:25.866636 1567 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 6 00:26:25.873401 kubelet[1567]: I0906 00:26:25.873369 1567 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 6 00:26:25.874227 kubelet[1567]: I0906 00:26:25.874193 1567 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 6 00:26:25.874436 kubelet[1567]: I0906 00:26:25.874388 1567 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 6 00:26:25.874688 kubelet[1567]: I0906 00:26:25.874432 1567 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 6 00:26:25.874811 kubelet[1567]: I0906 00:26:25.874703 1567 topology_manager.go:138] "Creating topology manager with none policy"
Sep 6 00:26:25.874811 kubelet[1567]: I0906 00:26:25.874716 1567 container_manager_linux.go:300] "Creating device plugin manager"
Sep 6 00:26:25.874890 kubelet[1567]: I0906 00:26:25.874879 1567 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:26:25.885225 kubelet[1567]: I0906 00:26:25.885178 1567 kubelet.go:408] "Attempting to sync node with API server"
Sep 6 00:26:25.885225 kubelet[1567]: I0906 00:26:25.885242 1567 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 6 00:26:25.885503 kubelet[1567]: I0906 00:26:25.885308 1567 kubelet.go:314] "Adding apiserver pod source"
Sep 6 00:26:25.885503 kubelet[1567]: I0906 00:26:25.885357 1567 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 6 00:26:25.890695 kubelet[1567]: W0906 00:26:25.890627 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused
Sep 6 00:26:25.890752 kubelet[1567]: E0906 00:26:25.890710 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:26:25.893346 kubelet[1567]: I0906 00:26:25.893312 1567 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 6 00:26:25.893714 kubelet[1567]: W0906 00:26:25.893671 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused
Sep 6 00:26:25.893779 kubelet[1567]: E0906 00:26:25.893727 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:26:25.893839 kubelet[1567]: I0906 00:26:25.893786 1567 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 6 00:26:25.894455 kubelet[1567]: W0906 00:26:25.894443 1567 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 6 00:26:25.898066 kubelet[1567]: I0906 00:26:25.898030 1567 server.go:1274] "Started kubelet"
Sep 6 00:26:25.900550 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 6 00:26:25.900653 kubelet[1567]: I0906 00:26:25.900638 1567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 6 00:26:25.904546 kubelet[1567]: I0906 00:26:25.904493 1567 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 6 00:26:25.905294 kubelet[1567]: I0906 00:26:25.905272 1567 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 6 00:26:25.905379 kubelet[1567]: I0906 00:26:25.905350 1567 server.go:449] "Adding debug handlers to kubelet server"
Sep 6 00:26:25.905508 kubelet[1567]: E0906 00:26:25.905489 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:26:25.905966 kubelet[1567]: I0906 00:26:25.905952 1567 factory.go:221] Registration of the systemd container factory successfully
Sep 6 00:26:25.906040 kubelet[1567]: I0906 00:26:25.906026 1567 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 00:26:25.906248 kubelet[1567]: E0906 00:26:25.902397 1567 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186289dcc3c21cf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 00:26:25.8980037 +0000 UTC m=+0.406180670,LastTimestamp:2025-09-06 00:26:25.8980037 +0000 UTC m=+0.406180670,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 6 00:26:25.906367 kubelet[1567]: E0906 00:26:25.906301 1567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="200ms"
Sep 6 00:26:25.906601 kubelet[1567]: I0906 00:26:25.906407 1567 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 00:26:25.906601 kubelet[1567]: I0906 00:26:25.906455 1567 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 6 00:26:25.906701 kubelet[1567]: W0906 00:26:25.906674 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused
Sep 6 00:26:25.906746 kubelet[1567]: E0906 00:26:25.906710 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:26:25.906953 kubelet[1567]: I0906 00:26:25.906898 1567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 6 00:26:25.907352 kubelet[1567]: I0906 00:26:25.907119 1567 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 6 00:26:26.021554 kubelet[1567]: W0906 00:26:26.021496 1567 logging.go:55] [core] [Channel #7 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, }. Err: connection error: desc = "error reading server preface: read unix @->/run/containerd/containerd.sock: use of closed network connection"
Sep 6 00:26:26.023047 kubelet[1567]: I0906 00:26:26.022673 1567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 00:26:26.023047 kubelet[1567]: E0906 00:26:26.022957 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:26:26.023163 kubelet[1567]: E0906 00:26:26.023150 1567 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 6 00:26:26.027425 kubelet[1567]: I0906 00:26:26.027390 1567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 6 00:26:26.028472 kubelet[1567]: I0906 00:26:26.028440 1567 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Sep 6 00:26:26.028516 kubelet[1567]: I0906 00:26:26.028491 1567 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:26:26.028566 kubelet[1567]: I0906 00:26:26.028552 1567 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:26:26.028625 kubelet[1567]: E0906 00:26:26.028603 1567 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:26:26.029000 kubelet[1567]: W0906 00:26:26.028956 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Sep 6 00:26:26.029047 kubelet[1567]: E0906 00:26:26.029008 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:26:26.107076 kubelet[1567]: E0906 00:26:26.106967 1567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="400ms" Sep 6 00:26:26.122992 kubelet[1567]: I0906 00:26:26.122970 1567 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:26:26.123095 kubelet[1567]: E0906 00:26:26.123074 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:26:26.128672 kubelet[1567]: E0906 00:26:26.128645 1567 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" 
Sep 6 00:26:26.134270 kubelet[1567]: I0906 00:26:26.134251 1567 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:26:26.134270 kubelet[1567]: I0906 00:26:26.134265 1567 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:26:26.134405 kubelet[1567]: I0906 00:26:26.134283 1567 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:26:26.223803 kubelet[1567]: E0906 00:26:26.223763 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:26:26.324315 kubelet[1567]: E0906 00:26:26.324275 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:26:26.329459 kubelet[1567]: E0906 00:26:26.329433 1567 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:26:26.425404 kubelet[1567]: E0906 00:26:26.425308 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:26:26.507972 kubelet[1567]: E0906 00:26:26.507916 1567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="800ms" Sep 6 00:26:26.526301 kubelet[1567]: E0906 00:26:26.526259 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:26:26.626849 kubelet[1567]: E0906 00:26:26.626802 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:26:26.718783 kubelet[1567]: I0906 00:26:26.718688 1567 policy_none.go:49] "None policy: Start" Sep 6 00:26:26.719500 kubelet[1567]: I0906 00:26:26.719468 1567 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:26:26.719571 kubelet[1567]: I0906 
00:26:26.719504 1567 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:26:26.726890 kubelet[1567]: E0906 00:26:26.726867 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:26:26.727074 systemd[1]: Created slice kubepods.slice. Sep 6 00:26:26.729636 kubelet[1567]: E0906 00:26:26.729611 1567 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:26:26.731083 systemd[1]: Created slice kubepods-burstable.slice. Sep 6 00:26:26.735873 systemd[1]: Created slice kubepods-besteffort.slice. Sep 6 00:26:26.740945 kubelet[1567]: I0906 00:26:26.740911 1567 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:26:26.741112 kubelet[1567]: I0906 00:26:26.741098 1567 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:26:26.741178 kubelet[1567]: I0906 00:26:26.741117 1567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:26:26.741659 kubelet[1567]: I0906 00:26:26.741412 1567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:26:26.743128 kubelet[1567]: E0906 00:26:26.742772 1567 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 6 00:26:26.843264 kubelet[1567]: I0906 00:26:26.843213 1567 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:26:26.843708 kubelet[1567]: E0906 00:26:26.843673 1567 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Sep 6 00:26:27.045240 kubelet[1567]: I0906 00:26:27.045151 1567 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 
00:26:27.045400 kubelet[1567]: E0906 00:26:27.045382 1567 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Sep 6 00:26:27.080765 kubelet[1567]: W0906 00:26:27.080710 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Sep 6 00:26:27.080842 kubelet[1567]: E0906 00:26:27.080764 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:26:27.153635 kubelet[1567]: W0906 00:26:27.153578 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Sep 6 00:26:27.153635 kubelet[1567]: E0906 00:26:27.153618 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:26:27.270500 kubelet[1567]: W0906 00:26:27.270457 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Sep 6 
00:26:27.270549 kubelet[1567]: E0906 00:26:27.270500 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:26:27.309201 kubelet[1567]: E0906 00:26:27.309135 1567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="1.6s" Sep 6 00:26:27.447031 kubelet[1567]: I0906 00:26:27.447004 1567 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:26:27.447237 kubelet[1567]: E0906 00:26:27.447209 1567 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Sep 6 00:26:27.485596 kubelet[1567]: W0906 00:26:27.485532 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Sep 6 00:26:27.485596 kubelet[1567]: E0906 00:26:27.485583 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:26:27.536021 systemd[1]: Created slice kubepods-burstable-podf71226de899dc7966ce7babac874b34d.slice. 
Sep 6 00:26:27.546806 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 6 00:26:27.554085 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 6 00:26:27.631039 kubelet[1567]: I0906 00:26:27.630697 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:27.631039 kubelet[1567]: I0906 00:26:27.630742 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f71226de899dc7966ce7babac874b34d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f71226de899dc7966ce7babac874b34d\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:26:27.631039 kubelet[1567]: I0906 00:26:27.630765 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:27.631039 kubelet[1567]: I0906 00:26:27.630788 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:27.631039 kubelet[1567]: I0906 00:26:27.630804 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:27.631452 kubelet[1567]: I0906 00:26:27.630850 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:27.631452 kubelet[1567]: I0906 00:26:27.630876 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:26:27.631452 kubelet[1567]: I0906 00:26:27.630889 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f71226de899dc7966ce7babac874b34d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f71226de899dc7966ce7babac874b34d\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:26:27.631452 kubelet[1567]: I0906 00:26:27.630912 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f71226de899dc7966ce7babac874b34d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f71226de899dc7966ce7babac874b34d\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:26:27.846161 kubelet[1567]: E0906 00:26:27.846101 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:27.846973 env[1206]: time="2025-09-06T00:26:27.846934394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f71226de899dc7966ce7babac874b34d,Namespace:kube-system,Attempt:0,}" Sep 6 00:26:27.853085 kubelet[1567]: E0906 00:26:27.853059 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:27.853365 env[1206]: time="2025-09-06T00:26:27.853323744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 6 00:26:27.856602 kubelet[1567]: E0906 00:26:27.856573 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:27.856837 env[1206]: time="2025-09-06T00:26:27.856813788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 6 00:26:27.998397 kubelet[1567]: E0906 00:26:27.998266 1567 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:26:28.248761 kubelet[1567]: I0906 00:26:28.248672 1567 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:26:28.249037 kubelet[1567]: E0906 00:26:28.248997 1567 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: 
connect: connection refused" node="localhost" Sep 6 00:26:28.785126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382469247.mount: Deactivated successfully. Sep 6 00:26:28.790287 env[1206]: time="2025-09-06T00:26:28.790232247Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:28.793083 env[1206]: time="2025-09-06T00:26:28.793060731Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:28.797227 env[1206]: time="2025-09-06T00:26:28.797176117Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:28.798429 env[1206]: time="2025-09-06T00:26:28.798406484Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:28.799728 env[1206]: time="2025-09-06T00:26:28.799702855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:28.800914 env[1206]: time="2025-09-06T00:26:28.800893657Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:28.802717 env[1206]: time="2025-09-06T00:26:28.802692420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:28.804742 env[1206]: 
time="2025-09-06T00:26:28.804721064Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:28.806660 env[1206]: time="2025-09-06T00:26:28.806641104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:28.807356 env[1206]: time="2025-09-06T00:26:28.807315869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:28.808057 env[1206]: time="2025-09-06T00:26:28.808028044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:28.808662 env[1206]: time="2025-09-06T00:26:28.808635594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:28.825780 env[1206]: time="2025-09-06T00:26:28.825694449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:26:28.825780 env[1206]: time="2025-09-06T00:26:28.825732009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:26:28.825780 env[1206]: time="2025-09-06T00:26:28.825741748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:26:28.826131 env[1206]: time="2025-09-06T00:26:28.826071726Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9f196db9f4b1eb6d959547c8df164ca176c4df4bbc783b1a91d49d47d3c2e44 pid=1608 runtime=io.containerd.runc.v2 Sep 6 00:26:28.836954 env[1206]: time="2025-09-06T00:26:28.836789392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:26:28.836954 env[1206]: time="2025-09-06T00:26:28.836828926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:26:28.836954 env[1206]: time="2025-09-06T00:26:28.836839336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:26:28.837127 env[1206]: time="2025-09-06T00:26:28.836975922Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d8edc0f272fa11f46c397f0600fe7fce5840f24e3a25fae86ff1f87963525e1 pid=1634 runtime=io.containerd.runc.v2 Sep 6 00:26:28.838178 systemd[1]: Started cri-containerd-a9f196db9f4b1eb6d959547c8df164ca176c4df4bbc783b1a91d49d47d3c2e44.scope. Sep 6 00:26:28.849183 env[1206]: time="2025-09-06T00:26:28.849119010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:26:28.849183 env[1206]: time="2025-09-06T00:26:28.849152012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:26:28.849555 env[1206]: time="2025-09-06T00:26:28.849163313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:26:28.849555 env[1206]: time="2025-09-06T00:26:28.849457194Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84639429a25daf93968a5117c13d4d9c1537916737089a7febab1e67d8d2c7b6 pid=1667 runtime=io.containerd.runc.v2 Sep 6 00:26:28.850986 systemd[1]: Started cri-containerd-8d8edc0f272fa11f46c397f0600fe7fce5840f24e3a25fae86ff1f87963525e1.scope. Sep 6 00:26:28.867087 systemd[1]: Started cri-containerd-84639429a25daf93968a5117c13d4d9c1537916737089a7febab1e67d8d2c7b6.scope. Sep 6 00:26:28.876959 env[1206]: time="2025-09-06T00:26:28.876926840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f71226de899dc7966ce7babac874b34d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9f196db9f4b1eb6d959547c8df164ca176c4df4bbc783b1a91d49d47d3c2e44\"" Sep 6 00:26:28.878002 kubelet[1567]: E0906 00:26:28.877825 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:28.880738 env[1206]: time="2025-09-06T00:26:28.880684886Z" level=info msg="CreateContainer within sandbox \"a9f196db9f4b1eb6d959547c8df164ca176c4df4bbc783b1a91d49d47d3c2e44\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:26:28.894568 env[1206]: time="2025-09-06T00:26:28.894524105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d8edc0f272fa11f46c397f0600fe7fce5840f24e3a25fae86ff1f87963525e1\"" Sep 6 00:26:28.895181 kubelet[1567]: E0906 00:26:28.895138 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:28.896431 
env[1206]: time="2025-09-06T00:26:28.896411103Z" level=info msg="CreateContainer within sandbox \"8d8edc0f272fa11f46c397f0600fe7fce5840f24e3a25fae86ff1f87963525e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:26:28.905288 env[1206]: time="2025-09-06T00:26:28.905245557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"84639429a25daf93968a5117c13d4d9c1537916737089a7febab1e67d8d2c7b6\"" Sep 6 00:26:28.905872 kubelet[1567]: E0906 00:26:28.905846 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:28.907236 env[1206]: time="2025-09-06T00:26:28.907209089Z" level=info msg="CreateContainer within sandbox \"84639429a25daf93968a5117c13d4d9c1537916737089a7febab1e67d8d2c7b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:26:28.909891 kubelet[1567]: E0906 00:26:28.909864 1567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="3.2s" Sep 6 00:26:29.287352 env[1206]: time="2025-09-06T00:26:29.287294008Z" level=info msg="CreateContainer within sandbox \"a9f196db9f4b1eb6d959547c8df164ca176c4df4bbc783b1a91d49d47d3c2e44\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e6bd7589f3fd82147ae47f1313b7c2d3f260321b9aff6cc271c33967122eeb21\"" Sep 6 00:26:29.288122 env[1206]: time="2025-09-06T00:26:29.288085151Z" level=info msg="StartContainer for \"e6bd7589f3fd82147ae47f1313b7c2d3f260321b9aff6cc271c33967122eeb21\"" Sep 6 00:26:29.301218 env[1206]: time="2025-09-06T00:26:29.301178020Z" level=info msg="CreateContainer within sandbox 
\"8d8edc0f272fa11f46c397f0600fe7fce5840f24e3a25fae86ff1f87963525e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b41665b2bbbd0f7b25016c43a7caabcfd92dddffd72d3abdaf29d6f0e3644568\"" Sep 6 00:26:29.302002 env[1206]: time="2025-09-06T00:26:29.301951270Z" level=info msg="StartContainer for \"b41665b2bbbd0f7b25016c43a7caabcfd92dddffd72d3abdaf29d6f0e3644568\"" Sep 6 00:26:29.302761 systemd[1]: Started cri-containerd-e6bd7589f3fd82147ae47f1313b7c2d3f260321b9aff6cc271c33967122eeb21.scope. Sep 6 00:26:29.303089 env[1206]: time="2025-09-06T00:26:29.303042847Z" level=info msg="CreateContainer within sandbox \"84639429a25daf93968a5117c13d4d9c1537916737089a7febab1e67d8d2c7b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"19c66ce3fcad36637898ab705f016848cfb65f90a5a61b18eb1c605f912671a4\"" Sep 6 00:26:29.304208 env[1206]: time="2025-09-06T00:26:29.304174048Z" level=info msg="StartContainer for \"19c66ce3fcad36637898ab705f016848cfb65f90a5a61b18eb1c605f912671a4\"" Sep 6 00:26:29.320764 systemd[1]: Started cri-containerd-19c66ce3fcad36637898ab705f016848cfb65f90a5a61b18eb1c605f912671a4.scope. Sep 6 00:26:29.323955 systemd[1]: Started cri-containerd-b41665b2bbbd0f7b25016c43a7caabcfd92dddffd72d3abdaf29d6f0e3644568.scope. 
Sep 6 00:26:29.407167 env[1206]: time="2025-09-06T00:26:29.407114902Z" level=info msg="StartContainer for \"e6bd7589f3fd82147ae47f1313b7c2d3f260321b9aff6cc271c33967122eeb21\" returns successfully" Sep 6 00:26:29.408676 env[1206]: time="2025-09-06T00:26:29.408644751Z" level=info msg="StartContainer for \"19c66ce3fcad36637898ab705f016848cfb65f90a5a61b18eb1c605f912671a4\" returns successfully" Sep 6 00:26:29.408883 env[1206]: time="2025-09-06T00:26:29.408644961Z" level=info msg="StartContainer for \"b41665b2bbbd0f7b25016c43a7caabcfd92dddffd72d3abdaf29d6f0e3644568\" returns successfully" Sep 6 00:26:29.851461 kubelet[1567]: I0906 00:26:29.851021 1567 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:26:30.039965 kubelet[1567]: E0906 00:26:30.039925 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:30.042126 kubelet[1567]: E0906 00:26:30.042063 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:30.043261 kubelet[1567]: E0906 00:26:30.043242 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:30.522632 kubelet[1567]: I0906 00:26:30.522579 1567 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 6 00:26:30.522632 kubelet[1567]: E0906 00:26:30.522637 1567 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 6 00:26:30.892633 kubelet[1567]: I0906 00:26:30.892513 1567 apiserver.go:52] "Watching apiserver" Sep 6 00:26:30.906807 kubelet[1567]: I0906 00:26:30.906743 1567 desired_state_of_world_populator.go:155] "Finished 
populating initial desired state of world" Sep 6 00:26:31.049677 kubelet[1567]: E0906 00:26:31.049640 1567 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 6 00:26:31.050064 kubelet[1567]: E0906 00:26:31.049640 1567 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 6 00:26:31.050064 kubelet[1567]: E0906 00:26:31.049795 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:31.050064 kubelet[1567]: E0906 00:26:31.049647 1567 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:31.050064 kubelet[1567]: E0906 00:26:31.049880 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:31.050064 kubelet[1567]: E0906 00:26:31.049926 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:32.170611 kubelet[1567]: E0906 00:26:32.170570 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:32.176228 kubelet[1567]: E0906 00:26:32.176203 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:32.942855 systemd[1]: Reloading. Sep 6 00:26:33.017301 /usr/lib/systemd/system-generators/torcx-generator[1861]: time="2025-09-06T00:26:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:26:33.017331 /usr/lib/systemd/system-generators/torcx-generator[1861]: time="2025-09-06T00:26:33Z" level=info msg="torcx already run" Sep 6 00:26:33.047846 kubelet[1567]: E0906 00:26:33.047825 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:33.138353 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:26:33.138367 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:26:33.155158 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:26:33.190685 kubelet[1567]: E0906 00:26:33.190639 1567 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 00:26:33.190913 kubelet[1567]: E0906 00:26:33.190807 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:33.241427 systemd[1]: Stopping kubelet.service... 
Sep 6 00:26:33.261637 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:26:33.261805 systemd[1]: Stopped kubelet.service. Sep 6 00:26:33.261847 systemd[1]: kubelet.service: Consumed 1.056s CPU time. Sep 6 00:26:33.263226 systemd[1]: Starting kubelet.service... Sep 6 00:26:33.561583 systemd[1]: Started kubelet.service. Sep 6 00:26:33.618246 kubelet[1908]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:26:33.618246 kubelet[1908]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:26:33.618246 kubelet[1908]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:26:33.618683 kubelet[1908]: I0906 00:26:33.618275 1908 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:26:33.623452 kubelet[1908]: I0906 00:26:33.623422 1908 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:26:33.623452 kubelet[1908]: I0906 00:26:33.623443 1908 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:26:33.623668 kubelet[1908]: I0906 00:26:33.623640 1908 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:26:33.625023 kubelet[1908]: I0906 00:26:33.625003 1908 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 6 00:26:33.627100 kubelet[1908]: I0906 00:26:33.627074 1908 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:26:33.630890 kubelet[1908]: E0906 00:26:33.630850 1908 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:26:33.630950 kubelet[1908]: I0906 00:26:33.630892 1908 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:26:33.635100 kubelet[1908]: I0906 00:26:33.635084 1908 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:26:33.635283 kubelet[1908]: I0906 00:26:33.635269 1908 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:26:33.635506 kubelet[1908]: I0906 00:26:33.635481 1908 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:26:33.635754 kubelet[1908]: I0906 00:26:33.635567 1908 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:26:33.635905 kubelet[1908]: I0906 00:26:33.635891 1908 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:26:33.635998 kubelet[1908]: I0906 00:26:33.635984 1908 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:26:33.636102 kubelet[1908]: I0906 00:26:33.636088 1908 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:26:33.636266 kubelet[1908]: I0906 00:26:33.636255 1908 kubelet.go:408] "Attempting to 
sync node with API server" Sep 6 00:26:33.636363 kubelet[1908]: I0906 00:26:33.636348 1908 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:26:33.636454 kubelet[1908]: I0906 00:26:33.636440 1908 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:26:33.636537 kubelet[1908]: I0906 00:26:33.636522 1908 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:26:33.637369 kubelet[1908]: I0906 00:26:33.637324 1908 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:26:33.638161 kubelet[1908]: I0906 00:26:33.638146 1908 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:26:33.638962 kubelet[1908]: I0906 00:26:33.638928 1908 server.go:1274] "Started kubelet" Sep 6 00:26:33.641201 kubelet[1908]: I0906 00:26:33.641161 1908 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:26:33.643537 kubelet[1908]: I0906 00:26:33.641919 1908 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:26:33.643537 kubelet[1908]: I0906 00:26:33.642755 1908 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:26:33.644445 kubelet[1908]: I0906 00:26:33.644406 1908 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:26:33.644741 kubelet[1908]: I0906 00:26:33.644695 1908 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:26:33.647638 kubelet[1908]: E0906 00:26:33.646219 1908 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:26:33.647638 kubelet[1908]: I0906 00:26:33.647258 1908 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:26:33.649675 kubelet[1908]: I0906 00:26:33.649627 1908 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:26:33.649804 kubelet[1908]: I0906 00:26:33.649783 1908 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:26:33.649938 kubelet[1908]: I0906 00:26:33.649918 1908 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:26:33.651012 kubelet[1908]: I0906 00:26:33.650978 1908 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:26:33.651205 kubelet[1908]: I0906 00:26:33.651178 1908 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:26:33.652739 kubelet[1908]: I0906 00:26:33.652723 1908 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:26:33.660404 kubelet[1908]: I0906 00:26:33.660345 1908 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:26:33.661498 kubelet[1908]: I0906 00:26:33.661007 1908 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:26:33.661579 kubelet[1908]: I0906 00:26:33.661565 1908 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:26:33.661693 kubelet[1908]: I0906 00:26:33.661676 1908 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:26:33.661824 kubelet[1908]: E0906 00:26:33.661793 1908 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:26:33.680308 kubelet[1908]: I0906 00:26:33.680277 1908 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:26:33.680461 kubelet[1908]: I0906 00:26:33.680300 1908 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:26:33.680461 kubelet[1908]: I0906 00:26:33.680355 1908 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:26:33.680534 kubelet[1908]: I0906 00:26:33.680520 1908 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:26:33.680565 kubelet[1908]: I0906 00:26:33.680534 1908 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:26:33.680565 kubelet[1908]: I0906 00:26:33.680555 1908 policy_none.go:49] "None policy: Start" Sep 6 00:26:33.681021 kubelet[1908]: I0906 00:26:33.681000 1908 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:26:33.681021 kubelet[1908]: I0906 00:26:33.681021 1908 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:26:33.681163 kubelet[1908]: I0906 00:26:33.681145 1908 state_mem.go:75] "Updated machine memory state" Sep 6 00:26:33.690318 kubelet[1908]: I0906 00:26:33.690286 1908 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:26:33.690463 kubelet[1908]: I0906 00:26:33.690449 1908 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:26:33.690534 kubelet[1908]: I0906 00:26:33.690461 1908 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:26:33.690732 kubelet[1908]: I0906 00:26:33.690720 1908 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:26:33.797830 kubelet[1908]: I0906 00:26:33.797798 1908 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:26:33.879854 kubelet[1908]: E0906 00:26:33.879764 1908 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 00:26:33.880023 kubelet[1908]: E0906 00:26:33.879763 1908 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:33.915719 kubelet[1908]: I0906 00:26:33.915676 1908 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 6 00:26:33.915790 kubelet[1908]: I0906 00:26:33.915743 1908 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 6 00:26:33.941625 sudo[1943]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:26:33.941846 sudo[1943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:26:33.950619 kubelet[1908]: I0906 00:26:33.950576 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:33.950726 kubelet[1908]: I0906 00:26:33.950627 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:33.950726 kubelet[1908]: I0906 00:26:33.950655 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:26:33.950726 kubelet[1908]: I0906 00:26:33.950681 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f71226de899dc7966ce7babac874b34d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f71226de899dc7966ce7babac874b34d\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:26:33.950726 kubelet[1908]: I0906 00:26:33.950698 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:33.950726 kubelet[1908]: I0906 00:26:33.950713 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:33.950849 kubelet[1908]: I0906 00:26:33.950726 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:33.950849 kubelet[1908]: I0906 00:26:33.950741 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f71226de899dc7966ce7babac874b34d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f71226de899dc7966ce7babac874b34d\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:26:33.950849 kubelet[1908]: I0906 00:26:33.950756 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f71226de899dc7966ce7babac874b34d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f71226de899dc7966ce7babac874b34d\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:26:34.121974 kubelet[1908]: E0906 00:26:34.121940 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:34.180534 kubelet[1908]: E0906 00:26:34.180431 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:34.180534 kubelet[1908]: E0906 00:26:34.180443 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:34.637022 kubelet[1908]: I0906 00:26:34.637002 1908 apiserver.go:52] "Watching apiserver" Sep 6 00:26:34.644830 sudo[1943]: pam_unix(sudo:session): session closed for user root Sep 6 00:26:34.650944 kubelet[1908]: I0906 00:26:34.650909 1908 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:26:34.674230 kubelet[1908]: E0906 00:26:34.674213 
1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:34.674406 kubelet[1908]: E0906 00:26:34.674387 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:34.840955 kubelet[1908]: E0906 00:26:34.840234 1908 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:26:34.840955 kubelet[1908]: E0906 00:26:34.840468 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:34.840955 kubelet[1908]: I0906 00:26:34.840613 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.840590153 podStartE2EDuration="2.840590153s" podCreationTimestamp="2025-09-06 00:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:26:34.840359189 +0000 UTC m=+1.275983526" watchObservedRunningTime="2025-09-06 00:26:34.840590153 +0000 UTC m=+1.276214489" Sep 6 00:26:35.017808 kubelet[1908]: I0906 00:26:35.017650 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.01762336 podStartE2EDuration="3.01762336s" podCreationTimestamp="2025-09-06 00:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:26:35.004257182 +0000 UTC m=+1.439881518" watchObservedRunningTime="2025-09-06 00:26:35.01762336 +0000 UTC 
m=+1.453247696" Sep 6 00:26:35.062395 kubelet[1908]: I0906 00:26:35.062119 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.062099604 podStartE2EDuration="2.062099604s" podCreationTimestamp="2025-09-06 00:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:26:35.052938221 +0000 UTC m=+1.488562577" watchObservedRunningTime="2025-09-06 00:26:35.062099604 +0000 UTC m=+1.497723940" Sep 6 00:26:35.675754 kubelet[1908]: E0906 00:26:35.675720 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:35.676139 kubelet[1908]: E0906 00:26:35.675859 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:37.013521 sudo[1305]: pam_unix(sudo:session): session closed for user root Sep 6 00:26:37.015800 sshd[1302]: pam_unix(sshd:session): session closed for user core Sep 6 00:26:37.019094 systemd[1]: sshd@4-10.0.0.101:22-10.0.0.1:41388.service: Deactivated successfully. Sep 6 00:26:37.019979 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:26:37.020149 systemd[1]: session-5.scope: Consumed 5.319s CPU time. Sep 6 00:26:37.021026 systemd-logind[1197]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:26:37.021886 systemd-logind[1197]: Removed session 5. 
Sep 6 00:26:38.133014 kubelet[1908]: E0906 00:26:38.132970 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:40.004203 kubelet[1908]: I0906 00:26:40.004170 1908 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:26:40.004651 env[1206]: time="2025-09-06T00:26:40.004510544Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:26:40.004894 kubelet[1908]: I0906 00:26:40.004682 1908 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:26:40.295800 kubelet[1908]: E0906 00:26:40.295676 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:40.480892 systemd[1]: Created slice kubepods-besteffort-pod9707a0ad_ca18_4fe6_bc11_75857479899d.slice. Sep 6 00:26:40.492407 systemd[1]: Created slice kubepods-burstable-pod8bcf16aa_a650_4099_a379_262d19b13552.slice. 
Sep 6 00:26:40.620187 kubelet[1908]: I0906 00:26:40.620128 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9707a0ad-ca18-4fe6-bc11-75857479899d-xtables-lock\") pod \"kube-proxy-n59w6\" (UID: \"9707a0ad-ca18-4fe6-bc11-75857479899d\") " pod="kube-system/kube-proxy-n59w6" Sep 6 00:26:40.620187 kubelet[1908]: I0906 00:26:40.620162 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dljx\" (UniqueName: \"kubernetes.io/projected/9707a0ad-ca18-4fe6-bc11-75857479899d-kube-api-access-6dljx\") pod \"kube-proxy-n59w6\" (UID: \"9707a0ad-ca18-4fe6-bc11-75857479899d\") " pod="kube-system/kube-proxy-n59w6" Sep 6 00:26:40.620187 kubelet[1908]: I0906 00:26:40.620183 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-etc-cni-netd\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620187 kubelet[1908]: I0906 00:26:40.620198 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9707a0ad-ca18-4fe6-bc11-75857479899d-kube-proxy\") pod \"kube-proxy-n59w6\" (UID: \"9707a0ad-ca18-4fe6-bc11-75857479899d\") " pod="kube-system/kube-proxy-n59w6" Sep 6 00:26:40.620499 kubelet[1908]: I0906 00:26:40.620211 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-xtables-lock\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620499 kubelet[1908]: I0906 00:26:40.620225 1908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8bcf16aa-a650-4099-a379-262d19b13552-cilium-config-path\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620499 kubelet[1908]: I0906 00:26:40.620245 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-bpf-maps\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620499 kubelet[1908]: I0906 00:26:40.620259 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8bcf16aa-a650-4099-a379-262d19b13552-hubble-tls\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620499 kubelet[1908]: I0906 00:26:40.620298 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzwnt\" (UniqueName: \"kubernetes.io/projected/8bcf16aa-a650-4099-a379-262d19b13552-kube-api-access-mzwnt\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620499 kubelet[1908]: I0906 00:26:40.620326 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-hostproc\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620670 kubelet[1908]: I0906 00:26:40.620365 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-host-proc-sys-kernel\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620670 kubelet[1908]: I0906 00:26:40.620381 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-host-proc-sys-net\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620670 kubelet[1908]: I0906 00:26:40.620404 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9707a0ad-ca18-4fe6-bc11-75857479899d-lib-modules\") pod \"kube-proxy-n59w6\" (UID: \"9707a0ad-ca18-4fe6-bc11-75857479899d\") " pod="kube-system/kube-proxy-n59w6" Sep 6 00:26:40.620670 kubelet[1908]: I0906 00:26:40.620428 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-cilium-cgroup\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620670 kubelet[1908]: I0906 00:26:40.620478 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8bcf16aa-a650-4099-a379-262d19b13552-clustermesh-secrets\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620836 kubelet[1908]: I0906 00:26:40.620513 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-lib-modules\") pod \"cilium-2dsnm\" (UID: 
\"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620836 kubelet[1908]: I0906 00:26:40.620535 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-cilium-run\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.620836 kubelet[1908]: I0906 00:26:40.620561 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-cni-path\") pod \"cilium-2dsnm\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " pod="kube-system/cilium-2dsnm" Sep 6 00:26:40.683559 kubelet[1908]: E0906 00:26:40.683515 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:40.721869 kubelet[1908]: I0906 00:26:40.721813 1908 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:26:40.742552 systemd[1]: Created slice kubepods-besteffort-pod4fff2499_3f78_4775_b3f1_e9dd8b47c411.slice. 
Sep 6 00:26:40.787286 kubelet[1908]: E0906 00:26:40.787242 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:40.788427 env[1206]: time="2025-09-06T00:26:40.788385496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n59w6,Uid:9707a0ad-ca18-4fe6-bc11-75857479899d,Namespace:kube-system,Attempt:0,}" Sep 6 00:26:40.794498 kubelet[1908]: E0906 00:26:40.794471 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:40.794871 env[1206]: time="2025-09-06T00:26:40.794842914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2dsnm,Uid:8bcf16aa-a650-4099-a379-262d19b13552,Namespace:kube-system,Attempt:0,}" Sep 6 00:26:40.845419 kubelet[1908]: I0906 00:26:40.823063 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-288mc\" (UniqueName: \"kubernetes.io/projected/4fff2499-3f78-4775-b3f1-e9dd8b47c411-kube-api-access-288mc\") pod \"cilium-operator-5d85765b45-d78n9\" (UID: \"4fff2499-3f78-4775-b3f1-e9dd8b47c411\") " pod="kube-system/cilium-operator-5d85765b45-d78n9" Sep 6 00:26:40.845419 kubelet[1908]: I0906 00:26:40.823120 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fff2499-3f78-4775-b3f1-e9dd8b47c411-cilium-config-path\") pod \"cilium-operator-5d85765b45-d78n9\" (UID: \"4fff2499-3f78-4775-b3f1-e9dd8b47c411\") " pod="kube-system/cilium-operator-5d85765b45-d78n9" Sep 6 00:26:40.852535 env[1206]: time="2025-09-06T00:26:40.852455097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:26:40.852535 env[1206]: time="2025-09-06T00:26:40.852511244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:26:40.852535 env[1206]: time="2025-09-06T00:26:40.852531022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:26:40.852724 env[1206]: time="2025-09-06T00:26:40.852669115Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/89cda7c35ec43977a801b6211de1d9fb036d5b03ca214fc4f97b10f57af04d05 pid=2001 runtime=io.containerd.runc.v2 Sep 6 00:26:40.865972 systemd[1]: Started cri-containerd-89cda7c35ec43977a801b6211de1d9fb036d5b03ca214fc4f97b10f57af04d05.scope. Sep 6 00:26:40.889664 env[1206]: time="2025-09-06T00:26:40.889535031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n59w6,Uid:9707a0ad-ca18-4fe6-bc11-75857479899d,Namespace:kube-system,Attempt:0,} returns sandbox id \"89cda7c35ec43977a801b6211de1d9fb036d5b03ca214fc4f97b10f57af04d05\"" Sep 6 00:26:40.890828 kubelet[1908]: E0906 00:26:40.890796 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:40.893710 env[1206]: time="2025-09-06T00:26:40.893618258Z" level=info msg="CreateContainer within sandbox \"89cda7c35ec43977a801b6211de1d9fb036d5b03ca214fc4f97b10f57af04d05\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:26:40.951587 env[1206]: time="2025-09-06T00:26:40.951445467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:26:40.951587 env[1206]: time="2025-09-06T00:26:40.951487888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:26:40.951587 env[1206]: time="2025-09-06T00:26:40.951499099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:26:40.952166 env[1206]: time="2025-09-06T00:26:40.951703508Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a pid=2040 runtime=io.containerd.runc.v2 Sep 6 00:26:40.958715 env[1206]: time="2025-09-06T00:26:40.958669486Z" level=info msg="CreateContainer within sandbox \"89cda7c35ec43977a801b6211de1d9fb036d5b03ca214fc4f97b10f57af04d05\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7f997525a6f88fb962a3bed7966791b1e9782a71a5cb7d3e2fe8cb50a4d46d2f\"" Sep 6 00:26:40.959472 env[1206]: time="2025-09-06T00:26:40.959447147Z" level=info msg="StartContainer for \"7f997525a6f88fb962a3bed7966791b1e9782a71a5cb7d3e2fe8cb50a4d46d2f\"" Sep 6 00:26:40.963175 systemd[1]: Started cri-containerd-bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a.scope. Sep 6 00:26:40.981574 systemd[1]: Started cri-containerd-7f997525a6f88fb962a3bed7966791b1e9782a71a5cb7d3e2fe8cb50a4d46d2f.scope. 
Sep 6 00:26:40.985730 env[1206]: time="2025-09-06T00:26:40.985680713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2dsnm,Uid:8bcf16aa-a650-4099-a379-262d19b13552,Namespace:kube-system,Attempt:0,} returns sandbox id \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\"" Sep 6 00:26:40.986517 kubelet[1908]: E0906 00:26:40.986304 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:40.989497 env[1206]: time="2025-09-06T00:26:40.989472324Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:26:41.015278 env[1206]: time="2025-09-06T00:26:41.015233933Z" level=info msg="StartContainer for \"7f997525a6f88fb962a3bed7966791b1e9782a71a5cb7d3e2fe8cb50a4d46d2f\" returns successfully" Sep 6 00:26:41.048669 kubelet[1908]: E0906 00:26:41.048625 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:41.050645 env[1206]: time="2025-09-06T00:26:41.049434248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d78n9,Uid:4fff2499-3f78-4775-b3f1-e9dd8b47c411,Namespace:kube-system,Attempt:0,}" Sep 6 00:26:41.065642 env[1206]: time="2025-09-06T00:26:41.063804461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:26:41.065642 env[1206]: time="2025-09-06T00:26:41.063873974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:26:41.065642 env[1206]: time="2025-09-06T00:26:41.063895364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:26:41.065642 env[1206]: time="2025-09-06T00:26:41.064039278Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f pid=2125 runtime=io.containerd.runc.v2 Sep 6 00:26:41.076228 systemd[1]: Started cri-containerd-8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f.scope. Sep 6 00:26:41.106768 env[1206]: time="2025-09-06T00:26:41.106714900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d78n9,Uid:4fff2499-3f78-4775-b3f1-e9dd8b47c411,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f\"" Sep 6 00:26:41.107458 kubelet[1908]: E0906 00:26:41.107406 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:41.421609 kubelet[1908]: E0906 00:26:41.421557 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:41.687370 kubelet[1908]: E0906 00:26:41.687076 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:41.687370 kubelet[1908]: E0906 00:26:41.687226 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6
00:26:41.703106 kubelet[1908]: I0906 00:26:41.703056 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n59w6" podStartSLOduration=1.703036157 podStartE2EDuration="1.703036157s" podCreationTimestamp="2025-09-06 00:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:26:41.702541947 +0000 UTC m=+8.138166283" watchObservedRunningTime="2025-09-06 00:26:41.703036157 +0000 UTC m=+8.138660493" Sep 6 00:26:45.413651 update_engine[1199]: I0906 00:26:45.413196 1199 update_attempter.cc:509] Updating boot flags... Sep 6 00:26:48.137889 kubelet[1908]: E0906 00:26:48.137861 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:49.872726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032077238.mount: Deactivated successfully. 
Sep 6 00:26:53.328254 env[1206]: time="2025-09-06T00:26:53.328192476Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:53.330110 env[1206]: time="2025-09-06T00:26:53.330078848Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:53.331872 env[1206]: time="2025-09-06T00:26:53.331821117Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:53.332700 env[1206]: time="2025-09-06T00:26:53.332643680Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 6 00:26:53.334146 env[1206]: time="2025-09-06T00:26:53.334107222Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:26:53.334982 env[1206]: time="2025-09-06T00:26:53.334940676Z" level=info msg="CreateContainer within sandbox \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:26:53.349705 env[1206]: time="2025-09-06T00:26:53.349647059Z" level=info msg="CreateContainer within sandbox \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8fbdbd1da7e3ce14e2f7bdce600a0f6cba311ef61da421714d4cf3b791ba3e3b\"" Sep 6 00:26:53.350146 
env[1206]: time="2025-09-06T00:26:53.350116505Z" level=info msg="StartContainer for \"8fbdbd1da7e3ce14e2f7bdce600a0f6cba311ef61da421714d4cf3b791ba3e3b\"" Sep 6 00:26:53.369505 systemd[1]: Started cri-containerd-8fbdbd1da7e3ce14e2f7bdce600a0f6cba311ef61da421714d4cf3b791ba3e3b.scope. Sep 6 00:26:53.406457 systemd[1]: cri-containerd-8fbdbd1da7e3ce14e2f7bdce600a0f6cba311ef61da421714d4cf3b791ba3e3b.scope: Deactivated successfully. Sep 6 00:26:53.840469 env[1206]: time="2025-09-06T00:26:53.840408930Z" level=info msg="StartContainer for \"8fbdbd1da7e3ce14e2f7bdce600a0f6cba311ef61da421714d4cf3b791ba3e3b\" returns successfully" Sep 6 00:26:53.940200 env[1206]: time="2025-09-06T00:26:53.940145394Z" level=info msg="shim disconnected" id=8fbdbd1da7e3ce14e2f7bdce600a0f6cba311ef61da421714d4cf3b791ba3e3b Sep 6 00:26:53.940200 env[1206]: time="2025-09-06T00:26:53.940195930Z" level=warning msg="cleaning up after shim disconnected" id=8fbdbd1da7e3ce14e2f7bdce600a0f6cba311ef61da421714d4cf3b791ba3e3b namespace=k8s.io Sep 6 00:26:53.940200 env[1206]: time="2025-09-06T00:26:53.940205357Z" level=info msg="cleaning up dead shim" Sep 6 00:26:53.947389 env[1206]: time="2025-09-06T00:26:53.947354864Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:26:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2351 runtime=io.containerd.runc.v2\n" Sep 6 00:26:54.346101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fbdbd1da7e3ce14e2f7bdce600a0f6cba311ef61da421714d4cf3b791ba3e3b-rootfs.mount: Deactivated successfully. 
Sep 6 00:26:54.845892 kubelet[1908]: E0906 00:26:54.845864 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:54.847866 env[1206]: time="2025-09-06T00:26:54.847819718Z" level=info msg="CreateContainer within sandbox \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:26:55.015905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3482594614.mount: Deactivated successfully. Sep 6 00:26:55.019846 env[1206]: time="2025-09-06T00:26:55.019780455Z" level=info msg="CreateContainer within sandbox \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"afd4ef82149fbbe4fab565345350017f73033c4889e8630a4d5a81e79db94558\"" Sep 6 00:26:55.020583 env[1206]: time="2025-09-06T00:26:55.020550548Z" level=info msg="StartContainer for \"afd4ef82149fbbe4fab565345350017f73033c4889e8630a4d5a81e79db94558\"" Sep 6 00:26:55.038064 systemd[1]: Started cri-containerd-afd4ef82149fbbe4fab565345350017f73033c4889e8630a4d5a81e79db94558.scope. Sep 6 00:26:55.098780 env[1206]: time="2025-09-06T00:26:55.098320321Z" level=info msg="StartContainer for \"afd4ef82149fbbe4fab565345350017f73033c4889e8630a4d5a81e79db94558\" returns successfully" Sep 6 00:26:55.101125 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:26:55.101341 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:26:55.101532 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:26:55.102916 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:26:55.104457 systemd[1]: cri-containerd-afd4ef82149fbbe4fab565345350017f73033c4889e8630a4d5a81e79db94558.scope: Deactivated successfully. Sep 6 00:26:55.115423 systemd[1]: Finished systemd-sysctl.service. 
Sep 6 00:26:55.136226 env[1206]: time="2025-09-06T00:26:55.136150398Z" level=info msg="shim disconnected" id=afd4ef82149fbbe4fab565345350017f73033c4889e8630a4d5a81e79db94558 Sep 6 00:26:55.136226 env[1206]: time="2025-09-06T00:26:55.136216202Z" level=warning msg="cleaning up after shim disconnected" id=afd4ef82149fbbe4fab565345350017f73033c4889e8630a4d5a81e79db94558 namespace=k8s.io Sep 6 00:26:55.136226 env[1206]: time="2025-09-06T00:26:55.136228556Z" level=info msg="cleaning up dead shim" Sep 6 00:26:55.148135 env[1206]: time="2025-09-06T00:26:55.148082475Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:26:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2414 runtime=io.containerd.runc.v2\n" Sep 6 00:26:55.345691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afd4ef82149fbbe4fab565345350017f73033c4889e8630a4d5a81e79db94558-rootfs.mount: Deactivated successfully. Sep 6 00:26:55.848351 kubelet[1908]: E0906 00:26:55.848296 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:55.849839 env[1206]: time="2025-09-06T00:26:55.849791367Z" level=info msg="CreateContainer within sandbox \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:26:56.091762 env[1206]: time="2025-09-06T00:26:56.091700224Z" level=info msg="CreateContainer within sandbox \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207\"" Sep 6 00:26:56.092233 env[1206]: time="2025-09-06T00:26:56.092139091Z" level=info msg="StartContainer for \"736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207\"" Sep 6 00:26:56.105750 env[1206]: time="2025-09-06T00:26:56.105654495Z" level=info 
msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:56.108108 env[1206]: time="2025-09-06T00:26:56.108060311Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:56.110916 env[1206]: time="2025-09-06T00:26:56.110881069Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:26:56.111155 env[1206]: time="2025-09-06T00:26:56.111127975Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 6 00:26:56.112843 systemd[1]: Started cri-containerd-736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207.scope. 
Sep 6 00:26:56.116196 env[1206]: time="2025-09-06T00:26:56.116165514Z" level=info msg="CreateContainer within sandbox \"8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:26:56.130472 env[1206]: time="2025-09-06T00:26:56.130424971Z" level=info msg="CreateContainer within sandbox \"8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\"" Sep 6 00:26:56.132190 env[1206]: time="2025-09-06T00:26:56.131408124Z" level=info msg="StartContainer for \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\"" Sep 6 00:26:56.146279 systemd[1]: cri-containerd-736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207.scope: Deactivated successfully. Sep 6 00:26:56.147117 env[1206]: time="2025-09-06T00:26:56.147078883Z" level=info msg="StartContainer for \"736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207\" returns successfully" Sep 6 00:26:56.151286 systemd[1]: Started cri-containerd-e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad.scope. Sep 6 00:26:56.346433 systemd[1]: run-containerd-runc-k8s.io-736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207-runc.FScpDd.mount: Deactivated successfully. Sep 6 00:26:56.346522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207-rootfs.mount: Deactivated successfully. 
Sep 6 00:26:56.595107 env[1206]: time="2025-09-06T00:26:56.595053392Z" level=info msg="StartContainer for \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\" returns successfully" Sep 6 00:26:56.596298 env[1206]: time="2025-09-06T00:26:56.596265186Z" level=info msg="shim disconnected" id=736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207 Sep 6 00:26:56.596368 env[1206]: time="2025-09-06T00:26:56.596301745Z" level=warning msg="cleaning up after shim disconnected" id=736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207 namespace=k8s.io Sep 6 00:26:56.596368 env[1206]: time="2025-09-06T00:26:56.596312395Z" level=info msg="cleaning up dead shim" Sep 6 00:26:56.607970 env[1206]: time="2025-09-06T00:26:56.607922106Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:26:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2509 runtime=io.containerd.runc.v2\n" Sep 6 00:26:56.853151 kubelet[1908]: E0906 00:26:56.852114 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:56.855617 env[1206]: time="2025-09-06T00:26:56.855576191Z" level=info msg="CreateContainer within sandbox \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:26:56.859080 kubelet[1908]: E0906 00:26:56.858997 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:56.875736 env[1206]: time="2025-09-06T00:26:56.875684626Z" level=info msg="CreateContainer within sandbox \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f81070a93c2ca8fb226da61280ed996aa3cbbb823bc185c8d89e0b3e59b7ef49\"" Sep 
6 00:26:56.876379 env[1206]: time="2025-09-06T00:26:56.876360049Z" level=info msg="StartContainer for \"f81070a93c2ca8fb226da61280ed996aa3cbbb823bc185c8d89e0b3e59b7ef49\"" Sep 6 00:26:56.919956 systemd[1]: Started cri-containerd-f81070a93c2ca8fb226da61280ed996aa3cbbb823bc185c8d89e0b3e59b7ef49.scope. Sep 6 00:26:56.927930 kubelet[1908]: I0906 00:26:56.927841 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-d78n9" podStartSLOduration=1.9237545 podStartE2EDuration="16.927815651s" podCreationTimestamp="2025-09-06 00:26:40 +0000 UTC" firstStartedPulling="2025-09-06 00:26:41.107894775 +0000 UTC m=+7.543519112" lastFinishedPulling="2025-09-06 00:26:56.111955927 +0000 UTC m=+22.547580263" observedRunningTime="2025-09-06 00:26:56.927661622 +0000 UTC m=+23.363285968" watchObservedRunningTime="2025-09-06 00:26:56.927815651 +0000 UTC m=+23.363439977" Sep 6 00:26:56.964987 systemd[1]: cri-containerd-f81070a93c2ca8fb226da61280ed996aa3cbbb823bc185c8d89e0b3e59b7ef49.scope: Deactivated successfully. 
Sep 6 00:26:56.965634 env[1206]: time="2025-09-06T00:26:56.965522720Z" level=info msg="StartContainer for \"f81070a93c2ca8fb226da61280ed996aa3cbbb823bc185c8d89e0b3e59b7ef49\" returns successfully" Sep 6 00:26:56.987921 env[1206]: time="2025-09-06T00:26:56.987854428Z" level=info msg="shim disconnected" id=f81070a93c2ca8fb226da61280ed996aa3cbbb823bc185c8d89e0b3e59b7ef49 Sep 6 00:26:56.987921 env[1206]: time="2025-09-06T00:26:56.987909452Z" level=warning msg="cleaning up after shim disconnected" id=f81070a93c2ca8fb226da61280ed996aa3cbbb823bc185c8d89e0b3e59b7ef49 namespace=k8s.io Sep 6 00:26:56.987921 env[1206]: time="2025-09-06T00:26:56.987917888Z" level=info msg="cleaning up dead shim" Sep 6 00:26:56.995579 env[1206]: time="2025-09-06T00:26:56.995525652Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:26:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2563 runtime=io.containerd.runc.v2\n" Sep 6 00:26:57.346423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f81070a93c2ca8fb226da61280ed996aa3cbbb823bc185c8d89e0b3e59b7ef49-rootfs.mount: Deactivated successfully. Sep 6 00:26:57.861896 kubelet[1908]: E0906 00:26:57.861868 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:57.862471 kubelet[1908]: E0906 00:26:57.861905 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:57.863282 env[1206]: time="2025-09-06T00:26:57.863241886Z" level=info msg="CreateContainer within sandbox \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:26:57.878882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085226862.mount: Deactivated successfully. 
Sep 6 00:26:57.881527 env[1206]: time="2025-09-06T00:26:57.881472669Z" level=info msg="CreateContainer within sandbox \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b\"" Sep 6 00:26:57.882008 env[1206]: time="2025-09-06T00:26:57.881945280Z" level=info msg="StartContainer for \"5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b\"" Sep 6 00:26:57.897688 systemd[1]: Started cri-containerd-5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b.scope. Sep 6 00:26:57.920910 env[1206]: time="2025-09-06T00:26:57.920853618Z" level=info msg="StartContainer for \"5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b\" returns successfully" Sep 6 00:26:58.022834 kubelet[1908]: I0906 00:26:58.022798 1908 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:26:58.066825 systemd[1]: Created slice kubepods-burstable-pod6c090587_7e0c_4ae7_bc9f_d646dcd5faa2.slice. Sep 6 00:26:58.073696 systemd[1]: Created slice kubepods-burstable-pod3b5abf28_2335_4cc8_8bf0_1e4e42a4f41d.slice. 
Sep 6 00:26:58.246416 kubelet[1908]: I0906 00:26:58.246308 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c090587-7e0c-4ae7-bc9f-d646dcd5faa2-config-volume\") pod \"coredns-7c65d6cfc9-twcsx\" (UID: \"6c090587-7e0c-4ae7-bc9f-d646dcd5faa2\") " pod="kube-system/coredns-7c65d6cfc9-twcsx" Sep 6 00:26:58.246416 kubelet[1908]: I0906 00:26:58.246381 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b5abf28-2335-4cc8-8bf0-1e4e42a4f41d-config-volume\") pod \"coredns-7c65d6cfc9-8rmpm\" (UID: \"3b5abf28-2335-4cc8-8bf0-1e4e42a4f41d\") " pod="kube-system/coredns-7c65d6cfc9-8rmpm" Sep 6 00:26:58.246416 kubelet[1908]: I0906 00:26:58.246407 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mmqq\" (UniqueName: \"kubernetes.io/projected/3b5abf28-2335-4cc8-8bf0-1e4e42a4f41d-kube-api-access-9mmqq\") pod \"coredns-7c65d6cfc9-8rmpm\" (UID: \"3b5abf28-2335-4cc8-8bf0-1e4e42a4f41d\") " pod="kube-system/coredns-7c65d6cfc9-8rmpm" Sep 6 00:26:58.246639 kubelet[1908]: I0906 00:26:58.246431 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gnrj\" (UniqueName: \"kubernetes.io/projected/6c090587-7e0c-4ae7-bc9f-d646dcd5faa2-kube-api-access-4gnrj\") pod \"coredns-7c65d6cfc9-twcsx\" (UID: \"6c090587-7e0c-4ae7-bc9f-d646dcd5faa2\") " pod="kube-system/coredns-7c65d6cfc9-twcsx" Sep 6 00:26:58.345905 systemd[1]: run-containerd-runc-k8s.io-5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b-runc.j6qzfy.mount: Deactivated successfully. 
Sep 6 00:26:58.373175 kubelet[1908]: E0906 00:26:58.373143 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:58.374007 env[1206]: time="2025-09-06T00:26:58.373958510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-twcsx,Uid:6c090587-7e0c-4ae7-bc9f-d646dcd5faa2,Namespace:kube-system,Attempt:0,}" Sep 6 00:26:58.376186 kubelet[1908]: E0906 00:26:58.376136 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:58.376510 env[1206]: time="2025-09-06T00:26:58.376472948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8rmpm,Uid:3b5abf28-2335-4cc8-8bf0-1e4e42a4f41d,Namespace:kube-system,Attempt:0,}" Sep 6 00:26:58.865308 kubelet[1908]: E0906 00:26:58.865276 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:59.867002 kubelet[1908]: E0906 00:26:59.866975 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:26:59.964364 systemd-networkd[1030]: cilium_host: Link UP Sep 6 00:26:59.964526 systemd-networkd[1030]: cilium_net: Link UP Sep 6 00:26:59.964530 systemd-networkd[1030]: cilium_net: Gained carrier Sep 6 00:26:59.966836 systemd-networkd[1030]: cilium_host: Gained carrier Sep 6 00:26:59.967365 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:27:00.034398 systemd-networkd[1030]: cilium_vxlan: Link UP Sep 6 00:27:00.034450 systemd-networkd[1030]: cilium_vxlan: Gained carrier Sep 6 00:27:00.214246 systemd[1]: Started 
sshd@5-10.0.0.101:22-10.0.0.1:40494.service. Sep 6 00:27:00.222360 kernel: NET: Registered PF_ALG protocol family Sep 6 00:27:00.250426 sshd[2845]: Accepted publickey for core from 10.0.0.1 port 40494 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:27:00.251947 sshd[2845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:27:00.256032 systemd-logind[1197]: New session 6 of user core. Sep 6 00:27:00.257058 systemd[1]: Started session-6.scope. Sep 6 00:27:00.394540 sshd[2845]: pam_unix(sshd:session): session closed for user core Sep 6 00:27:00.397040 systemd[1]: sshd@5-10.0.0.101:22-10.0.0.1:40494.service: Deactivated successfully. Sep 6 00:27:00.397487 systemd-networkd[1030]: cilium_host: Gained IPv6LL Sep 6 00:27:00.397884 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:27:00.398244 systemd-logind[1197]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:27:00.401773 systemd-logind[1197]: Removed session 6. 
Sep 6 00:27:00.758839 systemd-networkd[1030]: lxc_health: Link UP
Sep 6 00:27:00.766852 systemd-networkd[1030]: lxc_health: Gained carrier
Sep 6 00:27:00.768315 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:27:00.809836 kubelet[1908]: I0906 00:27:00.809777 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2dsnm" podStartSLOduration=8.46375895 podStartE2EDuration="20.809760137s" podCreationTimestamp="2025-09-06 00:26:40 +0000 UTC" firstStartedPulling="2025-09-06 00:26:40.987783587 +0000 UTC m=+7.423407923" lastFinishedPulling="2025-09-06 00:26:53.333784773 +0000 UTC m=+19.769409110" observedRunningTime="2025-09-06 00:26:58.932491008 +0000 UTC m=+25.368115344" watchObservedRunningTime="2025-09-06 00:27:00.809760137 +0000 UTC m=+27.245384473"
Sep 6 00:27:00.837492 systemd-networkd[1030]: cilium_net: Gained IPv6LL
Sep 6 00:27:00.869300 kubelet[1908]: E0906 00:27:00.869272 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:00.913606 systemd-networkd[1030]: lxcf2a1c8d5db79: Link UP
Sep 6 00:27:00.922435 kernel: eth0: renamed from tmp28913
Sep 6 00:27:00.932562 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf2a1c8d5db79: link becomes ready
Sep 6 00:27:00.933432 systemd-networkd[1030]: lxcf2a1c8d5db79: Gained carrier
Sep 6 00:27:00.934382 systemd-networkd[1030]: lxc492329267214: Link UP
Sep 6 00:27:00.942405 kernel: eth0: renamed from tmp99603
Sep 6 00:27:00.947884 systemd-networkd[1030]: lxc492329267214: Gained carrier
Sep 6 00:27:00.948369 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc492329267214: link becomes ready
Sep 6 00:27:01.093612 systemd-networkd[1030]: cilium_vxlan: Gained IPv6LL
Sep 6 00:27:01.870934 kubelet[1908]: I0906 00:27:01.870885 1908 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 6 00:27:01.871506 kubelet[1908]: E0906 00:27:01.871283 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:02.053518 systemd-networkd[1030]: lxc_health: Gained IPv6LL
Sep 6 00:27:02.565794 systemd-networkd[1030]: lxc492329267214: Gained IPv6LL
Sep 6 00:27:02.757852 systemd-networkd[1030]: lxcf2a1c8d5db79: Gained IPv6LL
Sep 6 00:27:02.873421 kubelet[1908]: E0906 00:27:02.873299 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:03.874980 kubelet[1908]: E0906 00:27:03.874951 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:04.326359 env[1206]: time="2025-09-06T00:27:04.326282431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:27:04.326359 env[1206]: time="2025-09-06T00:27:04.326325312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:27:04.326359 env[1206]: time="2025-09-06T00:27:04.326354356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:27:04.326815 env[1206]: time="2025-09-06T00:27:04.326553500Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99603d499ec36deabd4ed061f17171d65f13bafcabc97f613d31296b51e17ea5 pid=3150 runtime=io.containerd.runc.v2
Sep 6 00:27:04.341672 systemd[1]: Started cri-containerd-99603d499ec36deabd4ed061f17171d65f13bafcabc97f613d31296b51e17ea5.scope.
Sep 6 00:27:04.351513 systemd-resolved[1146]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 6 00:27:04.371541 env[1206]: time="2025-09-06T00:27:04.371476104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8rmpm,Uid:3b5abf28-2335-4cc8-8bf0-1e4e42a4f41d,Namespace:kube-system,Attempt:0,} returns sandbox id \"99603d499ec36deabd4ed061f17171d65f13bafcabc97f613d31296b51e17ea5\""
Sep 6 00:27:04.372051 kubelet[1908]: E0906 00:27:04.372019 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:04.382851 env[1206]: time="2025-09-06T00:27:04.382503219Z" level=info msg="CreateContainer within sandbox \"99603d499ec36deabd4ed061f17171d65f13bafcabc97f613d31296b51e17ea5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 6 00:27:04.387365 env[1206]: time="2025-09-06T00:27:04.386937240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:27:04.387365 env[1206]: time="2025-09-06T00:27:04.386991683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:27:04.387365 env[1206]: time="2025-09-06T00:27:04.387001943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:27:04.387365 env[1206]: time="2025-09-06T00:27:04.387150843Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28913de75f64281f5855e6eb8b6b302b2983825364fd155f78abd5b2400065ff pid=3191 runtime=io.containerd.runc.v2
Sep 6 00:27:04.402512 env[1206]: time="2025-09-06T00:27:04.402453451Z" level=info msg="CreateContainer within sandbox \"99603d499ec36deabd4ed061f17171d65f13bafcabc97f613d31296b51e17ea5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ab06845debeac4cfb5e99ee5ad04a109a0437cf0864a62052be17f754c796e3\""
Sep 6 00:27:04.402902 systemd[1]: Started cri-containerd-28913de75f64281f5855e6eb8b6b302b2983825364fd155f78abd5b2400065ff.scope.
Sep 6 00:27:04.404214 env[1206]: time="2025-09-06T00:27:04.403218881Z" level=info msg="StartContainer for \"8ab06845debeac4cfb5e99ee5ad04a109a0437cf0864a62052be17f754c796e3\""
Sep 6 00:27:04.416768 systemd-resolved[1146]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 6 00:27:04.422287 systemd[1]: Started cri-containerd-8ab06845debeac4cfb5e99ee5ad04a109a0437cf0864a62052be17f754c796e3.scope.
Sep 6 00:27:04.445362 env[1206]: time="2025-09-06T00:27:04.445301771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-twcsx,Uid:6c090587-7e0c-4ae7-bc9f-d646dcd5faa2,Namespace:kube-system,Attempt:0,} returns sandbox id \"28913de75f64281f5855e6eb8b6b302b2983825364fd155f78abd5b2400065ff\""
Sep 6 00:27:04.446374 kubelet[1908]: E0906 00:27:04.446128 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:04.448310 env[1206]: time="2025-09-06T00:27:04.448255027Z" level=info msg="CreateContainer within sandbox \"28913de75f64281f5855e6eb8b6b302b2983825364fd155f78abd5b2400065ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 6 00:27:04.652905 env[1206]: time="2025-09-06T00:27:04.652793334Z" level=info msg="StartContainer for \"8ab06845debeac4cfb5e99ee5ad04a109a0437cf0864a62052be17f754c796e3\" returns successfully"
Sep 6 00:27:04.667500 env[1206]: time="2025-09-06T00:27:04.667434770Z" level=info msg="CreateContainer within sandbox \"28913de75f64281f5855e6eb8b6b302b2983825364fd155f78abd5b2400065ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7950dfc34f1af358327411e42b7c1b6ae98e689639022415cdb14c73874dfda2\""
Sep 6 00:27:04.667901 env[1206]: time="2025-09-06T00:27:04.667870789Z" level=info msg="StartContainer for \"7950dfc34f1af358327411e42b7c1b6ae98e689639022415cdb14c73874dfda2\""
Sep 6 00:27:04.689382 systemd[1]: Started cri-containerd-7950dfc34f1af358327411e42b7c1b6ae98e689639022415cdb14c73874dfda2.scope.
Sep 6 00:27:04.715862 env[1206]: time="2025-09-06T00:27:04.715809577Z" level=info msg="StartContainer for \"7950dfc34f1af358327411e42b7c1b6ae98e689639022415cdb14c73874dfda2\" returns successfully"
Sep 6 00:27:04.884383 kubelet[1908]: E0906 00:27:04.884350 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:04.884831 kubelet[1908]: E0906 00:27:04.884410 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:04.897257 kubelet[1908]: I0906 00:27:04.897183 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8rmpm" podStartSLOduration=24.897165661 podStartE2EDuration="24.897165661s" podCreationTimestamp="2025-09-06 00:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:27:04.896720604 +0000 UTC m=+31.332344960" watchObservedRunningTime="2025-09-06 00:27:04.897165661 +0000 UTC m=+31.332789997"
Sep 6 00:27:04.907988 kubelet[1908]: I0906 00:27:04.907845 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-twcsx" podStartSLOduration=24.907821097 podStartE2EDuration="24.907821097s" podCreationTimestamp="2025-09-06 00:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:27:04.907617865 +0000 UTC m=+31.343242201" watchObservedRunningTime="2025-09-06 00:27:04.907821097 +0000 UTC m=+31.343445433"
Sep 6 00:27:05.397683 systemd[1]: Started sshd@6-10.0.0.101:22-10.0.0.1:40510.service.
Sep 6 00:27:05.434596 sshd[3306]: Accepted publickey for core from 10.0.0.1 port 40510 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:05.435829 sshd[3306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:05.439185 systemd-logind[1197]: New session 7 of user core.
Sep 6 00:27:05.440212 systemd[1]: Started session-7.scope.
Sep 6 00:27:05.548237 sshd[3306]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:05.550051 systemd[1]: sshd@6-10.0.0.101:22-10.0.0.1:40510.service: Deactivated successfully.
Sep 6 00:27:05.550775 systemd[1]: session-7.scope: Deactivated successfully.
Sep 6 00:27:05.551210 systemd-logind[1197]: Session 7 logged out. Waiting for processes to exit.
Sep 6 00:27:05.551773 systemd-logind[1197]: Removed session 7.
Sep 6 00:27:08.374593 kubelet[1908]: E0906 00:27:08.374558 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:08.377169 kubelet[1908]: E0906 00:27:08.377137 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:08.890580 kubelet[1908]: E0906 00:27:08.890547 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:08.890760 kubelet[1908]: E0906 00:27:08.890673 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:10.551893 systemd[1]: Started sshd@7-10.0.0.101:22-10.0.0.1:45786.service.
Sep 6 00:27:10.585215 sshd[3327]: Accepted publickey for core from 10.0.0.1 port 45786 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:10.586250 sshd[3327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:10.589872 systemd-logind[1197]: New session 8 of user core.
Sep 6 00:27:10.590918 systemd[1]: Started session-8.scope.
Sep 6 00:27:10.725880 sshd[3327]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:10.728269 systemd[1]: sshd@7-10.0.0.101:22-10.0.0.1:45786.service: Deactivated successfully.
Sep 6 00:27:10.729083 systemd[1]: session-8.scope: Deactivated successfully.
Sep 6 00:27:10.729961 systemd-logind[1197]: Session 8 logged out. Waiting for processes to exit.
Sep 6 00:27:10.730705 systemd-logind[1197]: Removed session 8.
Sep 6 00:27:15.729610 systemd[1]: Started sshd@8-10.0.0.101:22-10.0.0.1:45796.service.
Sep 6 00:27:15.766750 sshd[3343]: Accepted publickey for core from 10.0.0.1 port 45796 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:15.767960 sshd[3343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:15.771552 systemd-logind[1197]: New session 9 of user core.
Sep 6 00:27:15.772501 systemd[1]: Started session-9.scope.
Sep 6 00:27:15.884129 sshd[3343]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:15.886236 systemd[1]: sshd@8-10.0.0.101:22-10.0.0.1:45796.service: Deactivated successfully.
Sep 6 00:27:15.887092 systemd[1]: session-9.scope: Deactivated successfully.
Sep 6 00:27:15.887723 systemd-logind[1197]: Session 9 logged out. Waiting for processes to exit.
Sep 6 00:27:15.888399 systemd-logind[1197]: Removed session 9.
Sep 6 00:27:20.890422 systemd[1]: Started sshd@9-10.0.0.101:22-10.0.0.1:45336.service.
Sep 6 00:27:20.925209 sshd[3357]: Accepted publickey for core from 10.0.0.1 port 45336 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:20.926507 sshd[3357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:20.930046 systemd-logind[1197]: New session 10 of user core.
Sep 6 00:27:20.931012 systemd[1]: Started session-10.scope.
Sep 6 00:27:21.052713 sshd[3357]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:21.056579 systemd[1]: sshd@9-10.0.0.101:22-10.0.0.1:45336.service: Deactivated successfully.
Sep 6 00:27:21.057329 systemd[1]: session-10.scope: Deactivated successfully.
Sep 6 00:27:21.057942 systemd-logind[1197]: Session 10 logged out. Waiting for processes to exit.
Sep 6 00:27:21.059321 systemd[1]: Started sshd@10-10.0.0.101:22-10.0.0.1:45350.service.
Sep 6 00:27:21.060267 systemd-logind[1197]: Removed session 10.
Sep 6 00:27:21.094592 sshd[3371]: Accepted publickey for core from 10.0.0.1 port 45350 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:21.096094 sshd[3371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:21.100623 systemd-logind[1197]: New session 11 of user core.
Sep 6 00:27:21.101678 systemd[1]: Started session-11.scope.
Sep 6 00:27:21.297357 sshd[3371]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:21.300086 systemd[1]: sshd@10-10.0.0.101:22-10.0.0.1:45350.service: Deactivated successfully.
Sep 6 00:27:21.300611 systemd[1]: session-11.scope: Deactivated successfully.
Sep 6 00:27:21.302225 systemd[1]: Started sshd@11-10.0.0.101:22-10.0.0.1:45358.service.
Sep 6 00:27:21.302664 systemd-logind[1197]: Session 11 logged out. Waiting for processes to exit.
Sep 6 00:27:21.303563 systemd-logind[1197]: Removed session 11.
Sep 6 00:27:21.337869 sshd[3382]: Accepted publickey for core from 10.0.0.1 port 45358 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:21.338983 sshd[3382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:21.342464 systemd-logind[1197]: New session 12 of user core.
Sep 6 00:27:21.343224 systemd[1]: Started session-12.scope.
Sep 6 00:27:21.464456 sshd[3382]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:21.466835 systemd[1]: sshd@11-10.0.0.101:22-10.0.0.1:45358.service: Deactivated successfully.
Sep 6 00:27:21.467792 systemd[1]: session-12.scope: Deactivated successfully.
Sep 6 00:27:21.468368 systemd-logind[1197]: Session 12 logged out. Waiting for processes to exit.
Sep 6 00:27:21.469137 systemd-logind[1197]: Removed session 12.
Sep 6 00:27:26.468876 systemd[1]: Started sshd@12-10.0.0.101:22-10.0.0.1:45372.service.
Sep 6 00:27:26.502430 sshd[3397]: Accepted publickey for core from 10.0.0.1 port 45372 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:26.503598 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:26.507189 systemd-logind[1197]: New session 13 of user core.
Sep 6 00:27:26.508132 systemd[1]: Started session-13.scope.
Sep 6 00:27:26.615319 sshd[3397]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:26.617401 systemd[1]: sshd@12-10.0.0.101:22-10.0.0.1:45372.service: Deactivated successfully.
Sep 6 00:27:26.618046 systemd[1]: session-13.scope: Deactivated successfully.
Sep 6 00:27:26.618579 systemd-logind[1197]: Session 13 logged out. Waiting for processes to exit.
Sep 6 00:27:26.619191 systemd-logind[1197]: Removed session 13.
Sep 6 00:27:31.618911 systemd[1]: Started sshd@13-10.0.0.101:22-10.0.0.1:58212.service.
Sep 6 00:27:31.652210 sshd[3410]: Accepted publickey for core from 10.0.0.1 port 58212 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:31.653192 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:31.656245 systemd-logind[1197]: New session 14 of user core.
Sep 6 00:27:31.657179 systemd[1]: Started session-14.scope.
Sep 6 00:27:31.829801 sshd[3410]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:31.832887 systemd[1]: sshd@13-10.0.0.101:22-10.0.0.1:58212.service: Deactivated successfully.
Sep 6 00:27:31.833386 systemd[1]: session-14.scope: Deactivated successfully.
Sep 6 00:27:31.833981 systemd-logind[1197]: Session 14 logged out. Waiting for processes to exit.
Sep 6 00:27:31.834911 systemd[1]: Started sshd@14-10.0.0.101:22-10.0.0.1:58214.service.
Sep 6 00:27:31.835671 systemd-logind[1197]: Removed session 14.
Sep 6 00:27:31.869594 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 58214 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:31.870855 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:31.874100 systemd-logind[1197]: New session 15 of user core.
Sep 6 00:27:31.874962 systemd[1]: Started session-15.scope.
Sep 6 00:27:32.301390 sshd[3423]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:32.303960 systemd[1]: sshd@14-10.0.0.101:22-10.0.0.1:58214.service: Deactivated successfully.
Sep 6 00:27:32.304460 systemd[1]: session-15.scope: Deactivated successfully.
Sep 6 00:27:32.305026 systemd-logind[1197]: Session 15 logged out. Waiting for processes to exit.
Sep 6 00:27:32.306031 systemd[1]: Started sshd@15-10.0.0.101:22-10.0.0.1:58226.service.
Sep 6 00:27:32.306788 systemd-logind[1197]: Removed session 15.
Sep 6 00:27:32.342010 sshd[3434]: Accepted publickey for core from 10.0.0.1 port 58226 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:32.342977 sshd[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:32.345943 systemd-logind[1197]: New session 16 of user core.
Sep 6 00:27:32.346625 systemd[1]: Started session-16.scope.
Sep 6 00:27:33.385141 sshd[3434]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:33.387503 systemd[1]: Started sshd@16-10.0.0.101:22-10.0.0.1:58232.service.
Sep 6 00:27:33.388683 systemd[1]: sshd@15-10.0.0.101:22-10.0.0.1:58226.service: Deactivated successfully.
Sep 6 00:27:33.389238 systemd[1]: session-16.scope: Deactivated successfully.
Sep 6 00:27:33.390047 systemd-logind[1197]: Session 16 logged out. Waiting for processes to exit.
Sep 6 00:27:33.390933 systemd-logind[1197]: Removed session 16.
Sep 6 00:27:33.421625 sshd[3451]: Accepted publickey for core from 10.0.0.1 port 58232 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:33.422957 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:33.426607 systemd-logind[1197]: New session 17 of user core.
Sep 6 00:27:33.427301 systemd[1]: Started session-17.scope.
Sep 6 00:27:33.649889 sshd[3451]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:33.655303 systemd[1]: Started sshd@17-10.0.0.101:22-10.0.0.1:58238.service.
Sep 6 00:27:33.655745 systemd[1]: sshd@16-10.0.0.101:22-10.0.0.1:58232.service: Deactivated successfully.
Sep 6 00:27:33.656303 systemd[1]: session-17.scope: Deactivated successfully.
Sep 6 00:27:33.657260 systemd-logind[1197]: Session 17 logged out. Waiting for processes to exit.
Sep 6 00:27:33.658679 systemd-logind[1197]: Removed session 17.
Sep 6 00:27:33.690622 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 58238 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:33.691958 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:33.695540 systemd-logind[1197]: New session 18 of user core.
Sep 6 00:27:33.696481 systemd[1]: Started session-18.scope.
Sep 6 00:27:33.796684 sshd[3463]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:33.798994 systemd[1]: sshd@17-10.0.0.101:22-10.0.0.1:58238.service: Deactivated successfully.
Sep 6 00:27:33.799663 systemd[1]: session-18.scope: Deactivated successfully.
Sep 6 00:27:33.800154 systemd-logind[1197]: Session 18 logged out. Waiting for processes to exit.
Sep 6 00:27:33.800774 systemd-logind[1197]: Removed session 18.
Sep 6 00:27:38.801787 systemd[1]: Started sshd@18-10.0.0.101:22-10.0.0.1:58254.service.
Sep 6 00:27:38.835644 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 58254 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:38.836861 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:38.840320 systemd-logind[1197]: New session 19 of user core.
Sep 6 00:27:38.841147 systemd[1]: Started session-19.scope.
Sep 6 00:27:38.943641 sshd[3480]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:38.946099 systemd[1]: sshd@18-10.0.0.101:22-10.0.0.1:58254.service: Deactivated successfully.
Sep 6 00:27:38.946782 systemd[1]: session-19.scope: Deactivated successfully.
Sep 6 00:27:38.947327 systemd-logind[1197]: Session 19 logged out. Waiting for processes to exit.
Sep 6 00:27:38.947989 systemd-logind[1197]: Removed session 19.
Sep 6 00:27:43.663127 kubelet[1908]: E0906 00:27:43.663080 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:27:43.948635 systemd[1]: Started sshd@19-10.0.0.101:22-10.0.0.1:41654.service.
Sep 6 00:27:43.982416 sshd[3500]: Accepted publickey for core from 10.0.0.1 port 41654 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:43.983514 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:43.986830 systemd-logind[1197]: New session 20 of user core.
Sep 6 00:27:43.987834 systemd[1]: Started session-20.scope.
Sep 6 00:27:44.087078 sshd[3500]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:44.089019 systemd[1]: sshd@19-10.0.0.101:22-10.0.0.1:41654.service: Deactivated successfully.
Sep 6 00:27:44.089701 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 00:27:44.090253 systemd-logind[1197]: Session 20 logged out. Waiting for processes to exit.
Sep 6 00:27:44.090933 systemd-logind[1197]: Removed session 20.
Sep 6 00:27:49.091827 systemd[1]: Started sshd@20-10.0.0.101:22-10.0.0.1:41670.service.
Sep 6 00:27:49.125870 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 41670 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:49.126933 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:49.130370 systemd-logind[1197]: New session 21 of user core.
Sep 6 00:27:49.131132 systemd[1]: Started session-21.scope.
Sep 6 00:27:49.236889 sshd[3513]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:49.239604 systemd[1]: sshd@20-10.0.0.101:22-10.0.0.1:41670.service: Deactivated successfully.
Sep 6 00:27:49.240563 systemd[1]: session-21.scope: Deactivated successfully.
Sep 6 00:27:49.241269 systemd-logind[1197]: Session 21 logged out. Waiting for processes to exit.
Sep 6 00:27:49.242143 systemd-logind[1197]: Removed session 21.
Sep 6 00:27:54.240718 systemd[1]: Started sshd@21-10.0.0.101:22-10.0.0.1:51794.service.
Sep 6 00:27:54.283559 sshd[3526]: Accepted publickey for core from 10.0.0.1 port 51794 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:54.284635 sshd[3526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:54.288164 systemd-logind[1197]: New session 22 of user core.
Sep 6 00:27:54.288928 systemd[1]: Started session-22.scope.
Sep 6 00:27:54.396281 sshd[3526]: pam_unix(sshd:session): session closed for user core
Sep 6 00:27:54.399412 systemd[1]: sshd@21-10.0.0.101:22-10.0.0.1:51794.service: Deactivated successfully.
Sep 6 00:27:54.399889 systemd[1]: session-22.scope: Deactivated successfully.
Sep 6 00:27:54.400383 systemd-logind[1197]: Session 22 logged out. Waiting for processes to exit.
Sep 6 00:27:54.401415 systemd[1]: Started sshd@22-10.0.0.101:22-10.0.0.1:51800.service.
Sep 6 00:27:54.402154 systemd-logind[1197]: Removed session 22.
Sep 6 00:27:54.434492 sshd[3539]: Accepted publickey for core from 10.0.0.1 port 51800 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA
Sep 6 00:27:54.435462 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:27:54.438344 systemd-logind[1197]: New session 23 of user core.
Sep 6 00:27:54.439026 systemd[1]: Started session-23.scope.
Sep 6 00:27:55.826697 env[1206]: time="2025-09-06T00:27:55.826252372Z" level=info msg="StopContainer for \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\" with timeout 30 (s)"
Sep 6 00:27:55.826697 env[1206]: time="2025-09-06T00:27:55.826582934Z" level=info msg="Stop container \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\" with signal terminated"
Sep 6 00:27:55.838378 systemd[1]: cri-containerd-e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad.scope: Deactivated successfully.
Sep 6 00:27:55.859106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad-rootfs.mount: Deactivated successfully.
Sep 6 00:27:55.863754 env[1206]: time="2025-09-06T00:27:55.863701556Z" level=info msg="shim disconnected" id=e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad
Sep 6 00:27:55.863849 env[1206]: time="2025-09-06T00:27:55.863755128Z" level=warning msg="cleaning up after shim disconnected" id=e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad namespace=k8s.io
Sep 6 00:27:55.863849 env[1206]: time="2025-09-06T00:27:55.863766019Z" level=info msg="cleaning up dead shim"
Sep 6 00:27:55.870721 env[1206]: time="2025-09-06T00:27:55.870671094Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:27:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3585 runtime=io.containerd.runc.v2\n"
Sep 6 00:27:55.873604 env[1206]: time="2025-09-06T00:27:55.873569523Z" level=info msg="StopContainer for \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\" returns successfully"
Sep 6 00:27:55.880585 env[1206]: time="2025-09-06T00:27:55.880549110Z" level=info msg="StopPodSandbox for \"8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f\""
Sep 6 00:27:55.880667 env[1206]: time="2025-09-06T00:27:55.880614666Z" level=info msg="Container to stop \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:27:55.882436 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f-shm.mount: Deactivated successfully.
Sep 6 00:27:55.888538 systemd[1]: cri-containerd-8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f.scope: Deactivated successfully.
Sep 6 00:27:55.891762 env[1206]: time="2025-09-06T00:27:55.891717603Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:27:55.899597 env[1206]: time="2025-09-06T00:27:55.899569938Z" level=info msg="StopContainer for \"5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b\" with timeout 2 (s)"
Sep 6 00:27:55.899891 env[1206]: time="2025-09-06T00:27:55.899858600Z" level=info msg="Stop container \"5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b\" with signal terminated"
Sep 6 00:27:55.905108 systemd-networkd[1030]: lxc_health: Link DOWN
Sep 6 00:27:55.905113 systemd-networkd[1030]: lxc_health: Lost carrier
Sep 6 00:27:55.907667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f-rootfs.mount: Deactivated successfully.
Sep 6 00:27:55.915218 env[1206]: time="2025-09-06T00:27:55.915167123Z" level=info msg="shim disconnected" id=8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f
Sep 6 00:27:55.915218 env[1206]: time="2025-09-06T00:27:55.915214975Z" level=warning msg="cleaning up after shim disconnected" id=8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f namespace=k8s.io
Sep 6 00:27:55.915402 env[1206]: time="2025-09-06T00:27:55.915224723Z" level=info msg="cleaning up dead shim"
Sep 6 00:27:55.921505 env[1206]: time="2025-09-06T00:27:55.921467193Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:27:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3627 runtime=io.containerd.runc.v2\n"
Sep 6 00:27:55.921761 env[1206]: time="2025-09-06T00:27:55.921729103Z" level=info msg="TearDown network for sandbox \"8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f\" successfully"
Sep 6 00:27:55.921761 env[1206]: time="2025-09-06T00:27:55.921751105Z" level=info msg="StopPodSandbox for \"8b39dfb4fdf6492a7561853f19983a229c84c3c76bb59d2eaf8fe86d92cfc31f\" returns successfully"
Sep 6 00:27:55.933809 systemd[1]: cri-containerd-5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b.scope: Deactivated successfully.
Sep 6 00:27:55.934096 systemd[1]: cri-containerd-5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b.scope: Consumed 5.868s CPU time.
Sep 6 00:27:55.948978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b-rootfs.mount: Deactivated successfully.
Sep 6 00:27:55.956361 env[1206]: time="2025-09-06T00:27:55.956291660Z" level=info msg="shim disconnected" id=5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b
Sep 6 00:27:55.956478 env[1206]: time="2025-09-06T00:27:55.956361974Z" level=warning msg="cleaning up after shim disconnected" id=5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b namespace=k8s.io
Sep 6 00:27:55.956478 env[1206]: time="2025-09-06T00:27:55.956376382Z" level=info msg="cleaning up dead shim"
Sep 6 00:27:55.963094 env[1206]: time="2025-09-06T00:27:55.963062579Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:27:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3654 runtime=io.containerd.runc.v2\n"
Sep 6 00:27:55.965582 env[1206]: time="2025-09-06T00:27:55.965549210Z" level=info msg="StopContainer for \"5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b\" returns successfully"
Sep 6 00:27:55.966092 env[1206]: time="2025-09-06T00:27:55.966041391Z" level=info msg="StopPodSandbox for \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\""
Sep 6 00:27:55.966251 env[1206]: time="2025-09-06T00:27:55.966104121Z" level=info msg="Container to stop \"736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:27:55.966251 env[1206]: time="2025-09-06T00:27:55.966118198Z" level=info msg="Container to stop \"f81070a93c2ca8fb226da61280ed996aa3cbbb823bc185c8d89e0b3e59b7ef49\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:27:55.966251 env[1206]: time="2025-09-06T00:27:55.966128888Z" level=info msg="Container to stop \"8fbdbd1da7e3ce14e2f7bdce600a0f6cba311ef61da421714d4cf3b791ba3e3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:27:55.966251 env[1206]: time="2025-09-06T00:27:55.966154106Z" level=info msg="Container to stop \"afd4ef82149fbbe4fab565345350017f73033c4889e8630a4d5a81e79db94558\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:27:55.966251 env[1206]: time="2025-09-06T00:27:55.966163624Z" level=info msg="Container to stop \"5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:27:55.968360 kubelet[1908]: I0906 00:27:55.968305 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fff2499-3f78-4775-b3f1-e9dd8b47c411-cilium-config-path\") pod \"4fff2499-3f78-4775-b3f1-e9dd8b47c411\" (UID: \"4fff2499-3f78-4775-b3f1-e9dd8b47c411\") "
Sep 6 00:27:55.968360 kubelet[1908]: I0906 00:27:55.968354 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-288mc\" (UniqueName: \"kubernetes.io/projected/4fff2499-3f78-4775-b3f1-e9dd8b47c411-kube-api-access-288mc\") pod \"4fff2499-3f78-4775-b3f1-e9dd8b47c411\" (UID: \"4fff2499-3f78-4775-b3f1-e9dd8b47c411\") "
Sep 6 00:27:55.970587 kubelet[1908]: I0906 00:27:55.970555 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fff2499-3f78-4775-b3f1-e9dd8b47c411-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4fff2499-3f78-4775-b3f1-e9dd8b47c411" (UID: "4fff2499-3f78-4775-b3f1-e9dd8b47c411"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:27:55.970858 systemd[1]: cri-containerd-bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a.scope: Deactivated successfully.
Sep 6 00:27:55.971917 kubelet[1908]: I0906 00:27:55.971891 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fff2499-3f78-4775-b3f1-e9dd8b47c411-kube-api-access-288mc" (OuterVolumeSpecName: "kube-api-access-288mc") pod "4fff2499-3f78-4775-b3f1-e9dd8b47c411" (UID: "4fff2499-3f78-4775-b3f1-e9dd8b47c411"). InnerVolumeSpecName "kube-api-access-288mc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:27:55.973173 kubelet[1908]: I0906 00:27:55.973133 1908 scope.go:117] "RemoveContainer" containerID="e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad"
Sep 6 00:27:55.976248 systemd[1]: Removed slice kubepods-besteffort-pod4fff2499_3f78_4775_b3f1_e9dd8b47c411.slice.
Sep 6 00:27:55.980258 env[1206]: time="2025-09-06T00:27:55.980229183Z" level=info msg="RemoveContainer for \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\""
Sep 6 00:27:55.988554 env[1206]: time="2025-09-06T00:27:55.988520878Z" level=info msg="RemoveContainer for \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\" returns successfully"
Sep 6 00:27:55.989565 kubelet[1908]: I0906 00:27:55.989537 1908 scope.go:117] "RemoveContainer" containerID="e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad"
Sep 6 00:27:55.989896 env[1206]: time="2025-09-06T00:27:55.989776939Z" level=error msg="ContainerStatus for \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\": not found"
Sep 6 00:27:55.990364 kubelet[1908]: E0906 00:27:55.990310 1908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\": not found" containerID="e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad"
Sep 6 00:27:55.990562 kubelet[1908]: I0906 00:27:55.990360 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad"} err="failed to get container status \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\": rpc error: code = NotFound desc = an error occurred when try to find container \"e91aa11b1661ba3e4c23e963b0eaa90ba273c51827b5c0ff798d4dcc4b6fcdad\": not found"
Sep 6 00:27:56.003527 env[1206]: time="2025-09-06T00:27:56.003473549Z" level=info msg="shim disconnected" id=bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a
Sep 6 00:27:56.003527 env[1206]: time="2025-09-06T00:27:56.003527753Z" level=warning msg="cleaning up after shim disconnected" id=bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a namespace=k8s.io
Sep 6 00:27:56.003789 env[1206]: time="2025-09-06T00:27:56.003537541Z" level=info msg="cleaning up dead shim"
Sep 6 00:27:56.010192 env[1206]: time="2025-09-06T00:27:56.010155711Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:27:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3685 runtime=io.containerd.runc.v2\n"
Sep 6 00:27:56.010524 env[1206]: time="2025-09-06T00:27:56.010491573Z" level=info msg="TearDown network for sandbox \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\" successfully"
Sep 6 00:27:56.010524 env[1206]: time="2025-09-06T00:27:56.010515309Z" level=info msg="StopPodSandbox for \"bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a\" returns successfully"
Sep 6 00:27:56.069058 kubelet[1908]: I0906 00:27:56.069010 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fff2499-3f78-4775-b3f1-e9dd8b47c411-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 6 00:27:56.069229 kubelet[1908]: I0906 00:27:56.069069 1908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-288mc\" (UniqueName: \"kubernetes.io/projected/4fff2499-3f78-4775-b3f1-e9dd8b47c411-kube-api-access-288mc\") on node \"localhost\" DevicePath \"\""
Sep 6 00:27:56.169794 kubelet[1908]: I0906 00:27:56.169654 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8bcf16aa-a650-4099-a379-262d19b13552-clustermesh-secrets\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") "
Sep 6 00:27:56.169794 kubelet[1908]: I0906 00:27:56.169702 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8bcf16aa-a650-4099-a379-262d19b13552-cilium-config-path\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") "
Sep 6 00:27:56.169794 kubelet[1908]: I0906 00:27:56.169725 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-bpf-maps\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") "
Sep 6 00:27:56.169794 kubelet[1908]: I0906 00:27:56.169741 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-cilium-run\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") "
Sep 6 00:27:56.169794 kubelet[1908]: I0906 00:27:56.169755 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-cni-path\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID:
\"8bcf16aa-a650-4099-a379-262d19b13552\") " Sep 6 00:27:56.169794 kubelet[1908]: I0906 00:27:56.169767 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-lib-modules\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " Sep 6 00:27:56.170124 kubelet[1908]: I0906 00:27:56.169780 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-xtables-lock\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " Sep 6 00:27:56.170124 kubelet[1908]: I0906 00:27:56.169792 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-hostproc\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " Sep 6 00:27:56.170124 kubelet[1908]: I0906 00:27:56.169804 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-host-proc-sys-net\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " Sep 6 00:27:56.170124 kubelet[1908]: I0906 00:27:56.169817 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-host-proc-sys-kernel\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " Sep 6 00:27:56.170124 kubelet[1908]: I0906 00:27:56.169830 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-cilium-cgroup\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " Sep 6 00:27:56.170124 kubelet[1908]: I0906 00:27:56.169844 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-etc-cni-netd\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " Sep 6 00:27:56.170356 kubelet[1908]: I0906 00:27:56.169861 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8bcf16aa-a650-4099-a379-262d19b13552-hubble-tls\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " Sep 6 00:27:56.170356 kubelet[1908]: I0906 00:27:56.169875 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzwnt\" (UniqueName: \"kubernetes.io/projected/8bcf16aa-a650-4099-a379-262d19b13552-kube-api-access-mzwnt\") pod \"8bcf16aa-a650-4099-a379-262d19b13552\" (UID: \"8bcf16aa-a650-4099-a379-262d19b13552\") " Sep 6 00:27:56.172189 kubelet[1908]: I0906 00:27:56.170474 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:56.172189 kubelet[1908]: I0906 00:27:56.171308 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:56.172189 kubelet[1908]: I0906 00:27:56.171350 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:56.172189 kubelet[1908]: I0906 00:27:56.171395 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:56.172189 kubelet[1908]: I0906 00:27:56.171428 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:56.172420 kubelet[1908]: I0906 00:27:56.171456 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-hostproc" (OuterVolumeSpecName: "hostproc") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:56.172420 kubelet[1908]: I0906 00:27:56.171477 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:56.172420 kubelet[1908]: I0906 00:27:56.171507 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-cni-path" (OuterVolumeSpecName: "cni-path") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:56.172420 kubelet[1908]: I0906 00:27:56.171231 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:56.172420 kubelet[1908]: I0906 00:27:56.171739 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:56.172584 kubelet[1908]: I0906 00:27:56.172536 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bcf16aa-a650-4099-a379-262d19b13552-kube-api-access-mzwnt" (OuterVolumeSpecName: "kube-api-access-mzwnt") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "kube-api-access-mzwnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:27:56.173460 kubelet[1908]: I0906 00:27:56.173426 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bcf16aa-a650-4099-a379-262d19b13552-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:27:56.174284 kubelet[1908]: I0906 00:27:56.174235 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bcf16aa-a650-4099-a379-262d19b13552-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:27:56.174392 kubelet[1908]: I0906 00:27:56.174298 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bcf16aa-a650-4099-a379-262d19b13552-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8bcf16aa-a650-4099-a379-262d19b13552" (UID: "8bcf16aa-a650-4099-a379-262d19b13552"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:27:56.270704 kubelet[1908]: I0906 00:27:56.270645 1908 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8bcf16aa-a650-4099-a379-262d19b13552-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.270704 kubelet[1908]: I0906 00:27:56.270680 1908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzwnt\" (UniqueName: \"kubernetes.io/projected/8bcf16aa-a650-4099-a379-262d19b13552-kube-api-access-mzwnt\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.270704 kubelet[1908]: I0906 00:27:56.270696 1908 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8bcf16aa-a650-4099-a379-262d19b13552-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.270704 kubelet[1908]: I0906 00:27:56.270703 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8bcf16aa-a650-4099-a379-262d19b13552-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.270704 kubelet[1908]: I0906 00:27:56.270714 1908 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.270704 kubelet[1908]: I0906 00:27:56.270721 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.270704 kubelet[1908]: I0906 00:27:56.270728 1908 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.271089 kubelet[1908]: I0906 00:27:56.270736 1908 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.271089 kubelet[1908]: I0906 00:27:56.270743 1908 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.271089 kubelet[1908]: I0906 00:27:56.270750 1908 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.271089 kubelet[1908]: I0906 00:27:56.270757 1908 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.271089 kubelet[1908]: I0906 00:27:56.270764 1908 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.271089 kubelet[1908]: I0906 00:27:56.270770 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-cilium-cgroup\") on node \"localhost\" DevicePath 
\"\"" Sep 6 00:27:56.271089 kubelet[1908]: I0906 00:27:56.270778 1908 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bcf16aa-a650-4099-a379-262d19b13552-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:56.836020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a-rootfs.mount: Deactivated successfully. Sep 6 00:27:56.836119 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bca50959e3af063e406e58a91782690802c35f0ab8f7cf6c1214de139510484a-shm.mount: Deactivated successfully. Sep 6 00:27:56.836180 systemd[1]: var-lib-kubelet-pods-4fff2499\x2d3f78\x2d4775\x2db3f1\x2de9dd8b47c411-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d288mc.mount: Deactivated successfully. Sep 6 00:27:56.836232 systemd[1]: var-lib-kubelet-pods-8bcf16aa\x2da650\x2d4099\x2da379\x2d262d19b13552-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmzwnt.mount: Deactivated successfully. Sep 6 00:27:56.836284 systemd[1]: var-lib-kubelet-pods-8bcf16aa\x2da650\x2d4099\x2da379\x2d262d19b13552-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:27:56.836344 systemd[1]: var-lib-kubelet-pods-8bcf16aa\x2da650\x2d4099\x2da379\x2d262d19b13552-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:27:56.978355 kubelet[1908]: I0906 00:27:56.978313 1908 scope.go:117] "RemoveContainer" containerID="5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b" Sep 6 00:27:56.979632 env[1206]: time="2025-09-06T00:27:56.979602076Z" level=info msg="RemoveContainer for \"5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b\"" Sep 6 00:27:56.981882 systemd[1]: Removed slice kubepods-burstable-pod8bcf16aa_a650_4099_a379_262d19b13552.slice. 
Sep 6 00:27:56.981960 systemd[1]: kubepods-burstable-pod8bcf16aa_a650_4099_a379_262d19b13552.slice: Consumed 5.963s CPU time. Sep 6 00:27:56.983450 env[1206]: time="2025-09-06T00:27:56.983412384Z" level=info msg="RemoveContainer for \"5261f30c770a8a1c0ad68a8fa9fca0db5df1bc354fac13d91f5e4c20ed9efc4b\" returns successfully" Sep 6 00:27:56.983637 kubelet[1908]: I0906 00:27:56.983607 1908 scope.go:117] "RemoveContainer" containerID="f81070a93c2ca8fb226da61280ed996aa3cbbb823bc185c8d89e0b3e59b7ef49" Sep 6 00:27:56.984735 env[1206]: time="2025-09-06T00:27:56.984663934Z" level=info msg="RemoveContainer for \"f81070a93c2ca8fb226da61280ed996aa3cbbb823bc185c8d89e0b3e59b7ef49\"" Sep 6 00:27:56.987894 env[1206]: time="2025-09-06T00:27:56.987855931Z" level=info msg="RemoveContainer for \"f81070a93c2ca8fb226da61280ed996aa3cbbb823bc185c8d89e0b3e59b7ef49\" returns successfully" Sep 6 00:27:56.988015 kubelet[1908]: I0906 00:27:56.987994 1908 scope.go:117] "RemoveContainer" containerID="736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207" Sep 6 00:27:56.988946 env[1206]: time="2025-09-06T00:27:56.988911027Z" level=info msg="RemoveContainer for \"736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207\"" Sep 6 00:27:56.993348 env[1206]: time="2025-09-06T00:27:56.993282546Z" level=info msg="RemoveContainer for \"736db3a92b24734678bc0b626fcc69a912686c03a2ce3499f6cebd15c26e5207\" returns successfully" Sep 6 00:27:56.993515 kubelet[1908]: I0906 00:27:56.993486 1908 scope.go:117] "RemoveContainer" containerID="afd4ef82149fbbe4fab565345350017f73033c4889e8630a4d5a81e79db94558" Sep 6 00:27:56.994679 env[1206]: time="2025-09-06T00:27:56.994645329Z" level=info msg="RemoveContainer for \"afd4ef82149fbbe4fab565345350017f73033c4889e8630a4d5a81e79db94558\"" Sep 6 00:27:56.997465 env[1206]: time="2025-09-06T00:27:56.997432223Z" level=info msg="RemoveContainer for \"afd4ef82149fbbe4fab565345350017f73033c4889e8630a4d5a81e79db94558\" returns successfully" Sep 6 00:27:56.997575 
kubelet[1908]: I0906 00:27:56.997555 1908 scope.go:117] "RemoveContainer" containerID="8fbdbd1da7e3ce14e2f7bdce600a0f6cba311ef61da421714d4cf3b791ba3e3b" Sep 6 00:27:56.998520 env[1206]: time="2025-09-06T00:27:56.998494722Z" level=info msg="RemoveContainer for \"8fbdbd1da7e3ce14e2f7bdce600a0f6cba311ef61da421714d4cf3b791ba3e3b\"" Sep 6 00:27:57.001166 env[1206]: time="2025-09-06T00:27:57.001136277Z" level=info msg="RemoveContainer for \"8fbdbd1da7e3ce14e2f7bdce600a0f6cba311ef61da421714d4cf3b791ba3e3b\" returns successfully" Sep 6 00:27:57.664311 kubelet[1908]: I0906 00:27:57.664261 1908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fff2499-3f78-4775-b3f1-e9dd8b47c411" path="/var/lib/kubelet/pods/4fff2499-3f78-4775-b3f1-e9dd8b47c411/volumes" Sep 6 00:27:57.664653 kubelet[1908]: I0906 00:27:57.664627 1908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bcf16aa-a650-4099-a379-262d19b13552" path="/var/lib/kubelet/pods/8bcf16aa-a650-4099-a379-262d19b13552/volumes" Sep 6 00:27:57.828531 systemd[1]: Started sshd@23-10.0.0.101:22-10.0.0.1:51804.service. Sep 6 00:27:57.832398 sshd[3539]: pam_unix(sshd:session): session closed for user core Sep 6 00:27:57.834381 systemd[1]: sshd@22-10.0.0.101:22-10.0.0.1:51800.service: Deactivated successfully. Sep 6 00:27:57.834910 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:27:57.835586 systemd-logind[1197]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:27:57.836174 systemd-logind[1197]: Removed session 23. Sep 6 00:27:57.863139 sshd[3702]: Accepted publickey for core from 10.0.0.1 port 51804 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:27:57.864213 sshd[3702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:27:57.867079 systemd-logind[1197]: New session 24 of user core. Sep 6 00:27:57.867815 systemd[1]: Started session-24.scope. 
Sep 6 00:27:58.311217 sshd[3702]: pam_unix(sshd:session): session closed for user core Sep 6 00:27:58.314674 systemd[1]: Started sshd@24-10.0.0.101:22-10.0.0.1:51820.service. Sep 6 00:27:58.319203 systemd-logind[1197]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:27:58.319311 systemd[1]: sshd@23-10.0.0.101:22-10.0.0.1:51804.service: Deactivated successfully. Sep 6 00:27:58.319897 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:27:58.320466 systemd-logind[1197]: Removed session 24. Sep 6 00:27:58.344213 kubelet[1908]: E0906 00:27:58.344178 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8bcf16aa-a650-4099-a379-262d19b13552" containerName="cilium-agent" Sep 6 00:27:58.344213 kubelet[1908]: E0906 00:27:58.344208 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8bcf16aa-a650-4099-a379-262d19b13552" containerName="mount-cgroup" Sep 6 00:27:58.344213 kubelet[1908]: E0906 00:27:58.344214 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4fff2499-3f78-4775-b3f1-e9dd8b47c411" containerName="cilium-operator" Sep 6 00:27:58.344213 kubelet[1908]: E0906 00:27:58.344219 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8bcf16aa-a650-4099-a379-262d19b13552" containerName="clean-cilium-state" Sep 6 00:27:58.344213 kubelet[1908]: E0906 00:27:58.344225 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8bcf16aa-a650-4099-a379-262d19b13552" containerName="mount-bpf-fs" Sep 6 00:27:58.344213 kubelet[1908]: E0906 00:27:58.344231 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8bcf16aa-a650-4099-a379-262d19b13552" containerName="apply-sysctl-overwrites" Sep 6 00:27:58.344686 kubelet[1908]: I0906 00:27:58.344259 1908 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bcf16aa-a650-4099-a379-262d19b13552" containerName="cilium-agent" Sep 6 00:27:58.344686 kubelet[1908]: I0906 00:27:58.344265 1908 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4fff2499-3f78-4775-b3f1-e9dd8b47c411" containerName="cilium-operator" Sep 6 00:27:58.349193 systemd[1]: Created slice kubepods-burstable-podc748a123_f6be_4536_a82b_3ba3c50265a4.slice. Sep 6 00:27:58.349946 sshd[3714]: Accepted publickey for core from 10.0.0.1 port 51820 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:27:58.351424 sshd[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:27:58.356521 systemd[1]: Started session-25.scope. Sep 6 00:27:58.357826 systemd-logind[1197]: New session 25 of user core. Sep 6 00:27:58.477938 sshd[3714]: pam_unix(sshd:session): session closed for user core Sep 6 00:27:58.480732 systemd[1]: sshd@24-10.0.0.101:22-10.0.0.1:51820.service: Deactivated successfully. Sep 6 00:27:58.481204 kubelet[1908]: I0906 00:27:58.481098 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-bpf-maps\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481204 kubelet[1908]: I0906 00:27:58.481129 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-xtables-lock\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481204 kubelet[1908]: I0906 00:27:58.481150 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c748a123-f6be-4536-a82b-3ba3c50265a4-clustermesh-secrets\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481204 kubelet[1908]: I0906 
00:27:58.481173 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-ipsec-secrets\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481204 kubelet[1908]: I0906 00:27:58.481186 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-host-proc-sys-net\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481437 kubelet[1908]: I0906 00:27:58.481306 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-host-proc-sys-kernel\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481437 kubelet[1908]: I0906 00:27:58.481412 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l529s\" (UniqueName: \"kubernetes.io/projected/c748a123-f6be-4536-a82b-3ba3c50265a4-kube-api-access-l529s\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481225 systemd[1]: session-25.scope: Deactivated successfully. 
Sep 6 00:27:58.481545 kubelet[1908]: I0906 00:27:58.481440 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c748a123-f6be-4536-a82b-3ba3c50265a4-hubble-tls\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481678 kubelet[1908]: I0906 00:27:58.481654 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-etc-cni-netd\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481761 kubelet[1908]: I0906 00:27:58.481682 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-config-path\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481761 kubelet[1908]: I0906 00:27:58.481715 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-run\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481761 kubelet[1908]: I0906 00:27:58.481742 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-cgroup\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481861 kubelet[1908]: I0906 00:27:58.481766 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-cni-path\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481861 kubelet[1908]: I0906 00:27:58.481781 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-lib-modules\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.481861 kubelet[1908]: I0906 00:27:58.481804 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-hostproc\") pod \"cilium-txbvj\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " pod="kube-system/cilium-txbvj" Sep 6 00:27:58.482615 systemd[1]: Started sshd@25-10.0.0.101:22-10.0.0.1:51836.service. Sep 6 00:27:58.483122 systemd-logind[1197]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:27:58.484316 systemd-logind[1197]: Removed session 25. 
Sep 6 00:27:58.491705 kubelet[1908]: E0906 00:27:58.491643 1908 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-l529s lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-txbvj" podUID="c748a123-f6be-4536-a82b-3ba3c50265a4" Sep 6 00:27:58.516243 sshd[3729]: Accepted publickey for core from 10.0.0.1 port 51836 ssh2: RSA SHA256:mcWI8lWnD23EhgVbwJM01vCNerY3y/CLOw6SIUCzfEA Sep 6 00:27:58.517393 sshd[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:27:58.520557 systemd-logind[1197]: New session 26 of user core. Sep 6 00:27:58.521386 systemd[1]: Started session-26.scope. Sep 6 00:27:58.715580 kubelet[1908]: E0906 00:27:58.715473 1908 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:27:59.186351 kubelet[1908]: I0906 00:27:59.186292 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c748a123-f6be-4536-a82b-3ba3c50265a4-hubble-tls\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186517 kubelet[1908]: I0906 00:27:59.186359 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-hostproc\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186517 kubelet[1908]: I0906 00:27:59.186391 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-run\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186517 kubelet[1908]: I0906 00:27:59.186408 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-cni-path\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186517 kubelet[1908]: I0906 00:27:59.186430 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c748a123-f6be-4536-a82b-3ba3c50265a4-clustermesh-secrets\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186517 kubelet[1908]: I0906 00:27:59.186449 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-etc-cni-netd\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186517 kubelet[1908]: I0906 00:27:59.186448 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-hostproc" (OuterVolumeSpecName: "hostproc") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:59.186734 kubelet[1908]: I0906 00:27:59.186468 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-host-proc-sys-net\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186734 kubelet[1908]: I0906 00:27:59.186485 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-cni-path" (OuterVolumeSpecName: "cni-path") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:59.186734 kubelet[1908]: I0906 00:27:59.186489 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l529s\" (UniqueName: \"kubernetes.io/projected/c748a123-f6be-4536-a82b-3ba3c50265a4-kube-api-access-l529s\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186734 kubelet[1908]: I0906 00:27:59.186504 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:59.186734 kubelet[1908]: I0906 00:27:59.186506 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-xtables-lock\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186922 kubelet[1908]: I0906 00:27:59.186527 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:59.186922 kubelet[1908]: I0906 00:27:59.186529 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-bpf-maps\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186922 kubelet[1908]: I0906 00:27:59.186560 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-host-proc-sys-kernel\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186922 kubelet[1908]: I0906 00:27:59.186588 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-ipsec-secrets\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186922 kubelet[1908]: I0906 00:27:59.186607 1908 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-cgroup\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.186922 kubelet[1908]: I0906 00:27:59.186631 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-config-path\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.187117 kubelet[1908]: I0906 00:27:59.186649 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-lib-modules\") pod \"c748a123-f6be-4536-a82b-3ba3c50265a4\" (UID: \"c748a123-f6be-4536-a82b-3ba3c50265a4\") " Sep 6 00:27:59.187117 kubelet[1908]: I0906 00:27:59.186680 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.187117 kubelet[1908]: I0906 00:27:59.186693 1908 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.187117 kubelet[1908]: I0906 00:27:59.186703 1908 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.187117 kubelet[1908]: I0906 00:27:59.186713 1908 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-hostproc\") on node \"localhost\" DevicePath 
\"\"" Sep 6 00:27:59.189096 kubelet[1908]: I0906 00:27:59.186540 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:59.189204 kubelet[1908]: I0906 00:27:59.186550 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:59.189288 kubelet[1908]: I0906 00:27:59.186735 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:59.189411 kubelet[1908]: I0906 00:27:59.186750 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:59.189500 kubelet[1908]: I0906 00:27:59.189060 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c748a123-f6be-4536-a82b-3ba3c50265a4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:27:59.189589 kubelet[1908]: I0906 00:27:59.189073 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:27:59.189673 kubelet[1908]: I0906 00:27:59.189085 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:59.189787 kubelet[1908]: I0906 00:27:59.189765 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:27:59.190254 systemd[1]: var-lib-kubelet-pods-c748a123\x2df6be\x2d4536\x2da82b\x2d3ba3c50265a4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:27:59.190371 systemd[1]: var-lib-kubelet-pods-c748a123\x2df6be\x2d4536\x2da82b\x2d3ba3c50265a4-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 00:27:59.190420 systemd[1]: var-lib-kubelet-pods-c748a123\x2df6be\x2d4536\x2da82b\x2d3ba3c50265a4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:27:59.191040 kubelet[1908]: I0906 00:27:59.191004 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:27:59.191512 kubelet[1908]: I0906 00:27:59.191491 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c748a123-f6be-4536-a82b-3ba3c50265a4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:27:59.192157 kubelet[1908]: I0906 00:27:59.192118 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c748a123-f6be-4536-a82b-3ba3c50265a4-kube-api-access-l529s" (OuterVolumeSpecName: "kube-api-access-l529s") pod "c748a123-f6be-4536-a82b-3ba3c50265a4" (UID: "c748a123-f6be-4536-a82b-3ba3c50265a4"). InnerVolumeSpecName "kube-api-access-l529s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:27:59.192692 systemd[1]: var-lib-kubelet-pods-c748a123\x2df6be\x2d4536\x2da82b\x2d3ba3c50265a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl529s.mount: Deactivated successfully. Sep 6 00:27:59.287258 kubelet[1908]: I0906 00:27:59.287214 1908 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.287258 kubelet[1908]: I0906 00:27:59.287245 1908 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.287258 kubelet[1908]: I0906 00:27:59.287253 1908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l529s\" (UniqueName: \"kubernetes.io/projected/c748a123-f6be-4536-a82b-3ba3c50265a4-kube-api-access-l529s\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.287258 kubelet[1908]: I0906 00:27:59.287262 1908 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.287258 kubelet[1908]: I0906 00:27:59.287269 1908 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.287560 kubelet[1908]: I0906 00:27:59.287276 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.287560 kubelet[1908]: I0906 00:27:59.287283 1908 reconciler_common.go:293] 
"Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.287560 kubelet[1908]: I0906 00:27:59.287290 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c748a123-f6be-4536-a82b-3ba3c50265a4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.287560 kubelet[1908]: I0906 00:27:59.287297 1908 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c748a123-f6be-4536-a82b-3ba3c50265a4-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.287560 kubelet[1908]: I0906 00:27:59.287303 1908 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c748a123-f6be-4536-a82b-3ba3c50265a4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.287560 kubelet[1908]: I0906 00:27:59.287309 1908 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c748a123-f6be-4536-a82b-3ba3c50265a4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:27:59.667099 systemd[1]: Removed slice kubepods-burstable-podc748a123_f6be_4536_a82b_3ba3c50265a4.slice. Sep 6 00:28:00.021042 systemd[1]: Created slice kubepods-burstable-pod7d34e477_8d65_4692_9734_bf918744cbc6.slice. 
Sep 6 00:28:00.191291 kubelet[1908]: I0906 00:28:00.191223 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7d34e477-8d65-4692-9734-bf918744cbc6-host-proc-sys-kernel\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191291 kubelet[1908]: I0906 00:28:00.191273 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7d34e477-8d65-4692-9734-bf918744cbc6-cni-path\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191291 kubelet[1908]: I0906 00:28:00.191294 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7d34e477-8d65-4692-9734-bf918744cbc6-cilium-ipsec-secrets\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191291 kubelet[1908]: I0906 00:28:00.191307 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk2rz\" (UniqueName: \"kubernetes.io/projected/7d34e477-8d65-4692-9734-bf918744cbc6-kube-api-access-lk2rz\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191752 kubelet[1908]: I0906 00:28:00.191378 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d34e477-8d65-4692-9734-bf918744cbc6-etc-cni-netd\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191752 kubelet[1908]: I0906 00:28:00.191412 1908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d34e477-8d65-4692-9734-bf918744cbc6-lib-modules\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191752 kubelet[1908]: I0906 00:28:00.191432 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7d34e477-8d65-4692-9734-bf918744cbc6-cilium-cgroup\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191752 kubelet[1908]: I0906 00:28:00.191477 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7d34e477-8d65-4692-9734-bf918744cbc6-clustermesh-secrets\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191752 kubelet[1908]: I0906 00:28:00.191505 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d34e477-8d65-4692-9734-bf918744cbc6-cilium-config-path\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191752 kubelet[1908]: I0906 00:28:00.191522 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7d34e477-8d65-4692-9734-bf918744cbc6-hubble-tls\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191892 kubelet[1908]: I0906 00:28:00.191548 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/7d34e477-8d65-4692-9734-bf918744cbc6-cilium-run\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191892 kubelet[1908]: I0906 00:28:00.191568 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d34e477-8d65-4692-9734-bf918744cbc6-xtables-lock\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191892 kubelet[1908]: I0906 00:28:00.191585 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7d34e477-8d65-4692-9734-bf918744cbc6-bpf-maps\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191892 kubelet[1908]: I0906 00:28:00.191601 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7d34e477-8d65-4692-9734-bf918744cbc6-hostproc\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.191892 kubelet[1908]: I0906 00:28:00.191624 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7d34e477-8d65-4692-9734-bf918744cbc6-host-proc-sys-net\") pod \"cilium-kfxml\" (UID: \"7d34e477-8d65-4692-9734-bf918744cbc6\") " pod="kube-system/cilium-kfxml" Sep 6 00:28:00.324229 kubelet[1908]: E0906 00:28:00.324193 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:28:00.325206 env[1206]: time="2025-09-06T00:28:00.324811798Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-kfxml,Uid:7d34e477-8d65-4692-9734-bf918744cbc6,Namespace:kube-system,Attempt:0,}" Sep 6 00:28:00.340292 env[1206]: time="2025-09-06T00:28:00.340228251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:28:00.340292 env[1206]: time="2025-09-06T00:28:00.340267476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:28:00.340292 env[1206]: time="2025-09-06T00:28:00.340284018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:28:00.340519 env[1206]: time="2025-09-06T00:28:00.340458250Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0 pid=3758 runtime=io.containerd.runc.v2 Sep 6 00:28:00.349376 systemd[1]: Started cri-containerd-765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0.scope. 
Sep 6 00:28:00.370201 env[1206]: time="2025-09-06T00:28:00.370131255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kfxml,Uid:7d34e477-8d65-4692-9734-bf918744cbc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0\"" Sep 6 00:28:00.370811 kubelet[1908]: E0906 00:28:00.370783 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:28:00.373838 env[1206]: time="2025-09-06T00:28:00.373789710Z" level=info msg="CreateContainer within sandbox \"765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:28:00.388831 env[1206]: time="2025-09-06T00:28:00.388775062Z" level=info msg="CreateContainer within sandbox \"765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a7583934cf89d39372a9dca6c83b68eaafe01ffeaa5f0e33d64181b461872e58\"" Sep 6 00:28:00.389211 env[1206]: time="2025-09-06T00:28:00.389188791Z" level=info msg="StartContainer for \"a7583934cf89d39372a9dca6c83b68eaafe01ffeaa5f0e33d64181b461872e58\"" Sep 6 00:28:00.401163 systemd[1]: Started cri-containerd-a7583934cf89d39372a9dca6c83b68eaafe01ffeaa5f0e33d64181b461872e58.scope. Sep 6 00:28:00.428642 systemd[1]: cri-containerd-a7583934cf89d39372a9dca6c83b68eaafe01ffeaa5f0e33d64181b461872e58.scope: Deactivated successfully. 
Sep 6 00:28:00.468780 env[1206]: time="2025-09-06T00:28:00.468725466Z" level=info msg="StartContainer for \"a7583934cf89d39372a9dca6c83b68eaafe01ffeaa5f0e33d64181b461872e58\" returns successfully" Sep 6 00:28:00.496842 env[1206]: time="2025-09-06T00:28:00.496796917Z" level=info msg="shim disconnected" id=a7583934cf89d39372a9dca6c83b68eaafe01ffeaa5f0e33d64181b461872e58 Sep 6 00:28:00.496842 env[1206]: time="2025-09-06T00:28:00.496839680Z" level=warning msg="cleaning up after shim disconnected" id=a7583934cf89d39372a9dca6c83b68eaafe01ffeaa5f0e33d64181b461872e58 namespace=k8s.io Sep 6 00:28:00.497014 env[1206]: time="2025-09-06T00:28:00.496848266Z" level=info msg="cleaning up dead shim" Sep 6 00:28:00.503299 env[1206]: time="2025-09-06T00:28:00.503280079Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:28:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3843 runtime=io.containerd.runc.v2\n" Sep 6 00:28:00.662493 kubelet[1908]: E0906 00:28:00.662392 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:28:00.662824 kubelet[1908]: E0906 00:28:00.662600 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:28:00.987980 kubelet[1908]: E0906 00:28:00.987864 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:28:00.989795 env[1206]: time="2025-09-06T00:28:00.989728672Z" level=info msg="CreateContainer within sandbox \"765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:28:01.005308 env[1206]: time="2025-09-06T00:28:01.005265642Z" level=info 
msg="CreateContainer within sandbox \"765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fcc8cc9a9b61150c070737e11c09cc9d5a26ee1006ba6797a0f2f0d78bcfe7f3\"" Sep 6 00:28:01.005986 env[1206]: time="2025-09-06T00:28:01.005967600Z" level=info msg="StartContainer for \"fcc8cc9a9b61150c070737e11c09cc9d5a26ee1006ba6797a0f2f0d78bcfe7f3\"" Sep 6 00:28:01.021864 systemd[1]: Started cri-containerd-fcc8cc9a9b61150c070737e11c09cc9d5a26ee1006ba6797a0f2f0d78bcfe7f3.scope. Sep 6 00:28:01.043286 env[1206]: time="2025-09-06T00:28:01.043229986Z" level=info msg="StartContainer for \"fcc8cc9a9b61150c070737e11c09cc9d5a26ee1006ba6797a0f2f0d78bcfe7f3\" returns successfully" Sep 6 00:28:01.047364 systemd[1]: cri-containerd-fcc8cc9a9b61150c070737e11c09cc9d5a26ee1006ba6797a0f2f0d78bcfe7f3.scope: Deactivated successfully. Sep 6 00:28:01.073825 env[1206]: time="2025-09-06T00:28:01.073768858Z" level=info msg="shim disconnected" id=fcc8cc9a9b61150c070737e11c09cc9d5a26ee1006ba6797a0f2f0d78bcfe7f3 Sep 6 00:28:01.073825 env[1206]: time="2025-09-06T00:28:01.073822781Z" level=warning msg="cleaning up after shim disconnected" id=fcc8cc9a9b61150c070737e11c09cc9d5a26ee1006ba6797a0f2f0d78bcfe7f3 namespace=k8s.io Sep 6 00:28:01.073825 env[1206]: time="2025-09-06T00:28:01.073832520Z" level=info msg="cleaning up dead shim" Sep 6 00:28:01.081487 env[1206]: time="2025-09-06T00:28:01.081444597Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:28:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3904 runtime=io.containerd.runc.v2\n" Sep 6 00:28:01.587686 systemd[1]: run-containerd-runc-k8s.io-fcc8cc9a9b61150c070737e11c09cc9d5a26ee1006ba6797a0f2f0d78bcfe7f3-runc.UjFfgc.mount: Deactivated successfully. 
Sep 6 00:28:01.587820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcc8cc9a9b61150c070737e11c09cc9d5a26ee1006ba6797a0f2f0d78bcfe7f3-rootfs.mount: Deactivated successfully. Sep 6 00:28:01.665404 kubelet[1908]: I0906 00:28:01.665319 1908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c748a123-f6be-4536-a82b-3ba3c50265a4" path="/var/lib/kubelet/pods/c748a123-f6be-4536-a82b-3ba3c50265a4/volumes" Sep 6 00:28:01.995534 kubelet[1908]: E0906 00:28:01.995402 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:28:01.997859 env[1206]: time="2025-09-06T00:28:01.997804210Z" level=info msg="CreateContainer within sandbox \"765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:28:02.027217 env[1206]: time="2025-09-06T00:28:02.027134398Z" level=info msg="CreateContainer within sandbox \"765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"689d5dde1f77bb1ad03a7695b3db1e40697c16fddec48e15b04f4d776ccd1742\"" Sep 6 00:28:02.028042 env[1206]: time="2025-09-06T00:28:02.027999496Z" level=info msg="StartContainer for \"689d5dde1f77bb1ad03a7695b3db1e40697c16fddec48e15b04f4d776ccd1742\"" Sep 6 00:28:02.050769 systemd[1]: Started cri-containerd-689d5dde1f77bb1ad03a7695b3db1e40697c16fddec48e15b04f4d776ccd1742.scope. Sep 6 00:28:02.086014 env[1206]: time="2025-09-06T00:28:02.085871808Z" level=info msg="StartContainer for \"689d5dde1f77bb1ad03a7695b3db1e40697c16fddec48e15b04f4d776ccd1742\" returns successfully" Sep 6 00:28:02.088295 systemd[1]: cri-containerd-689d5dde1f77bb1ad03a7695b3db1e40697c16fddec48e15b04f4d776ccd1742.scope: Deactivated successfully. 
Sep 6 00:28:02.119920 env[1206]: time="2025-09-06T00:28:02.119848055Z" level=info msg="shim disconnected" id=689d5dde1f77bb1ad03a7695b3db1e40697c16fddec48e15b04f4d776ccd1742 Sep 6 00:28:02.119920 env[1206]: time="2025-09-06T00:28:02.119920213Z" level=warning msg="cleaning up after shim disconnected" id=689d5dde1f77bb1ad03a7695b3db1e40697c16fddec48e15b04f4d776ccd1742 namespace=k8s.io Sep 6 00:28:02.119920 env[1206]: time="2025-09-06T00:28:02.119937175Z" level=info msg="cleaning up dead shim" Sep 6 00:28:02.128440 env[1206]: time="2025-09-06T00:28:02.128378476Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:28:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3960 runtime=io.containerd.runc.v2\n" Sep 6 00:28:02.588675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-689d5dde1f77bb1ad03a7695b3db1e40697c16fddec48e15b04f4d776ccd1742-rootfs.mount: Deactivated successfully. Sep 6 00:28:02.663103 kubelet[1908]: E0906 00:28:02.662933 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:28:03.012231 kubelet[1908]: E0906 00:28:03.011847 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:28:03.014875 env[1206]: time="2025-09-06T00:28:03.014709648Z" level=info msg="CreateContainer within sandbox \"765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:28:03.051260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842396620.mount: Deactivated successfully. 
Sep 6 00:28:03.058925 env[1206]: time="2025-09-06T00:28:03.058844165Z" level=info msg="CreateContainer within sandbox \"765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"42db6160c53423ced49706a7814cfa2c72c5887135a9d561ef68c7500e25e2ca\""
Sep 6 00:28:03.059653 env[1206]: time="2025-09-06T00:28:03.059610183Z" level=info msg="StartContainer for \"42db6160c53423ced49706a7814cfa2c72c5887135a9d561ef68c7500e25e2ca\""
Sep 6 00:28:03.096111 systemd[1]: Started cri-containerd-42db6160c53423ced49706a7814cfa2c72c5887135a9d561ef68c7500e25e2ca.scope.
Sep 6 00:28:03.130782 systemd[1]: cri-containerd-42db6160c53423ced49706a7814cfa2c72c5887135a9d561ef68c7500e25e2ca.scope: Deactivated successfully.
Sep 6 00:28:03.133018 env[1206]: time="2025-09-06T00:28:03.132969605Z" level=info msg="StartContainer for \"42db6160c53423ced49706a7814cfa2c72c5887135a9d561ef68c7500e25e2ca\" returns successfully"
Sep 6 00:28:03.164124 env[1206]: time="2025-09-06T00:28:03.164031958Z" level=info msg="shim disconnected" id=42db6160c53423ced49706a7814cfa2c72c5887135a9d561ef68c7500e25e2ca
Sep 6 00:28:03.164124 env[1206]: time="2025-09-06T00:28:03.164100358Z" level=warning msg="cleaning up after shim disconnected" id=42db6160c53423ced49706a7814cfa2c72c5887135a9d561ef68c7500e25e2ca namespace=k8s.io
Sep 6 00:28:03.164124 env[1206]: time="2025-09-06T00:28:03.164115167Z" level=info msg="cleaning up dead shim"
Sep 6 00:28:03.174822 env[1206]: time="2025-09-06T00:28:03.174732546Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:28:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4013 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:28:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Sep 6 00:28:03.588881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42db6160c53423ced49706a7814cfa2c72c5887135a9d561ef68c7500e25e2ca-rootfs.mount: Deactivated successfully.
Sep 6 00:28:03.718660 kubelet[1908]: E0906 00:28:03.718535 1908 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:28:04.019910 kubelet[1908]: E0906 00:28:04.017847 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:28:04.020448 env[1206]: time="2025-09-06T00:28:04.020403358Z" level=info msg="CreateContainer within sandbox \"765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:28:04.120371 env[1206]: time="2025-09-06T00:28:04.120233086Z" level=info msg="CreateContainer within sandbox \"765eec9b68525c96328aaf5b798bccfb6a141abe9d39027c4b521d8b14954ac0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"15e01a58be34ff97efa6b149cf8d411030428119cc52f4b913c2ba901dbb6a39\""
Sep 6 00:28:04.120976 env[1206]: time="2025-09-06T00:28:04.120937657Z" level=info msg="StartContainer for \"15e01a58be34ff97efa6b149cf8d411030428119cc52f4b913c2ba901dbb6a39\""
Sep 6 00:28:04.146899 systemd[1]: Started cri-containerd-15e01a58be34ff97efa6b149cf8d411030428119cc52f4b913c2ba901dbb6a39.scope.
Sep 6 00:28:04.192730 env[1206]: time="2025-09-06T00:28:04.192642036Z" level=info msg="StartContainer for \"15e01a58be34ff97efa6b149cf8d411030428119cc52f4b913c2ba901dbb6a39\" returns successfully"
Sep 6 00:28:04.415957 update_engine[1199]: I0906 00:28:04.412684 1199 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 6 00:28:04.415957 update_engine[1199]: I0906 00:28:04.415786 1199 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 6 00:28:04.418366 update_engine[1199]: I0906 00:28:04.417748 1199 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 6 00:28:04.418366 update_engine[1199]: I0906 00:28:04.418232 1199 omaha_request_params.cc:62] Current group set to lts
Sep 6 00:28:04.419238 update_engine[1199]: I0906 00:28:04.419037 1199 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 6 00:28:04.419238 update_engine[1199]: I0906 00:28:04.419049 1199 update_attempter.cc:643] Scheduling an action processor start.
Sep 6 00:28:04.419238 update_engine[1199]: I0906 00:28:04.419070 1199 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 6 00:28:04.419238 update_engine[1199]: I0906 00:28:04.419101 1199 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 6 00:28:04.419238 update_engine[1199]: I0906 00:28:04.419158 1199 omaha_request_action.cc:270] Posting an Omaha request to disabled
Sep 6 00:28:04.419238 update_engine[1199]: I0906 00:28:04.419164 1199 omaha_request_action.cc:271] Request:
Sep 6 00:28:04.419238 update_engine[1199]:
Sep 6 00:28:04.419238 update_engine[1199]:
Sep 6 00:28:04.419238 update_engine[1199]:
Sep 6 00:28:04.419238 update_engine[1199]:
Sep 6 00:28:04.419238 update_engine[1199]:
Sep 6 00:28:04.419238 update_engine[1199]:
Sep 6 00:28:04.419238 update_engine[1199]:
Sep 6 00:28:04.419238 update_engine[1199]:
Sep 6 00:28:04.419238 update_engine[1199]: I0906 00:28:04.419169 1199 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 00:28:04.422850 update_engine[1199]: I0906 00:28:04.422350 1199 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 00:28:04.422850 update_engine[1199]: I0906 00:28:04.422809 1199 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 00:28:04.423387 locksmithd[1233]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 6 00:28:04.433137 update_engine[1199]: E0906 00:28:04.432971 1199 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 00:28:04.433137 update_engine[1199]: I0906 00:28:04.433104 1199 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 6 00:28:04.824398 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 6 00:28:05.027456 kubelet[1908]: E0906 00:28:05.027380 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:28:05.926057 kubelet[1908]: I0906 00:28:05.925967 1908 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:28:05Z","lastTransitionTime":"2025-09-06T00:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 6 00:28:06.327018 kubelet[1908]: E0906 00:28:06.326530 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:28:08.579024 systemd-networkd[1030]: lxc_health: Link UP
Sep 6 00:28:08.589430 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:28:08.589200 systemd-networkd[1030]: lxc_health: Gained carrier
Sep 6 00:28:09.637517 systemd-networkd[1030]: lxc_health: Gained IPv6LL
Sep 6 00:28:10.327498 kubelet[1908]: E0906 00:28:10.327448 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:28:10.351838 kubelet[1908]: I0906 00:28:10.351761 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kfxml" podStartSLOduration=10.35173916 podStartE2EDuration="10.35173916s" podCreationTimestamp="2025-09-06 00:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:28:05.081348507 +0000 UTC m=+91.516972863" watchObservedRunningTime="2025-09-06 00:28:10.35173916 +0000 UTC m=+96.787363496"
Sep 6 00:28:11.055685 kubelet[1908]: E0906 00:28:11.051359 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:28:11.476819 systemd[1]: run-containerd-runc-k8s.io-15e01a58be34ff97efa6b149cf8d411030428119cc52f4b913c2ba901dbb6a39-runc.gJVyoZ.mount: Deactivated successfully.
Sep 6 00:28:12.057127 kubelet[1908]: E0906 00:28:12.057090 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:28:13.684873 systemd[1]: run-containerd-runc-k8s.io-15e01a58be34ff97efa6b149cf8d411030428119cc52f4b913c2ba901dbb6a39-runc.zUlV9Z.mount: Deactivated successfully.
Sep 6 00:28:13.766018 sshd[3729]: pam_unix(sshd:session): session closed for user core
Sep 6 00:28:13.769004 systemd[1]: sshd@25-10.0.0.101:22-10.0.0.1:51836.service: Deactivated successfully.
Sep 6 00:28:13.769979 systemd[1]: session-26.scope: Deactivated successfully.
Sep 6 00:28:13.770682 systemd-logind[1197]: Session 26 logged out. Waiting for processes to exit.
Sep 6 00:28:13.771577 systemd-logind[1197]: Removed session 26.
Sep 6 00:28:14.412587 update_engine[1199]: I0906 00:28:14.412492 1199 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 00:28:14.413080 update_engine[1199]: I0906 00:28:14.412806 1199 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 00:28:14.413080 update_engine[1199]: I0906 00:28:14.413048 1199 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 00:28:14.427943 update_engine[1199]: E0906 00:28:14.427881 1199 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 00:28:14.428139 update_engine[1199]: I0906 00:28:14.428006 1199 libcurl_http_fetcher.cc:283] No HTTP response, retry 2