Sep 13 00:51:16.827358 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025 Sep 13 00:51:16.827375 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:51:16.827384 kernel: BIOS-provided physical RAM map: Sep 13 00:51:16.827390 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 13 00:51:16.827395 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 13 00:51:16.827400 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 13 00:51:16.827407 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 13 00:51:16.827412 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 13 00:51:16.827418 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 13 00:51:16.827425 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 13 00:51:16.827430 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 13 00:51:16.827436 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Sep 13 00:51:16.827441 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 13 00:51:16.827447 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 13 00:51:16.827454 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 13 00:51:16.827461 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 13 00:51:16.827467 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 13 
00:51:16.827472 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 13 00:51:16.827478 kernel: NX (Execute Disable) protection: active Sep 13 00:51:16.827484 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Sep 13 00:51:16.827490 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Sep 13 00:51:16.827496 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Sep 13 00:51:16.827501 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Sep 13 00:51:16.827507 kernel: extended physical RAM map: Sep 13 00:51:16.827513 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 13 00:51:16.827520 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 13 00:51:16.827526 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 13 00:51:16.827532 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 13 00:51:16.827537 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 13 00:51:16.827543 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 13 00:51:16.827549 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 13 00:51:16.827555 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Sep 13 00:51:16.827561 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Sep 13 00:51:16.827566 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable Sep 13 00:51:16.827572 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Sep 13 00:51:16.827578 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Sep 13 00:51:16.827585 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Sep 13 00:51:16.827591 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 13 00:51:16.827597 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 13 00:51:16.827603 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 13 00:51:16.827611 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 13 00:51:16.827617 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 13 00:51:16.827623 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 13 00:51:16.827631 kernel: efi: EFI v2.70 by EDK II Sep 13 00:51:16.827639 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Sep 13 00:51:16.827647 kernel: random: crng init done Sep 13 00:51:16.827655 kernel: SMBIOS 2.8 present. Sep 13 00:51:16.827664 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Sep 13 00:51:16.827672 kernel: Hypervisor detected: KVM Sep 13 00:51:16.827689 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 13 00:51:16.827697 kernel: kvm-clock: cpu 0, msr 5819f001, primary cpu clock Sep 13 00:51:16.827705 kernel: kvm-clock: using sched offset of 3944325863 cycles Sep 13 00:51:16.827716 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 13 00:51:16.827725 kernel: tsc: Detected 2794.748 MHz processor Sep 13 00:51:16.827732 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 00:51:16.827739 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 00:51:16.827745 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 13 00:51:16.827752 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 00:51:16.827759 kernel: Using GB pages for direct mapping Sep 13 00:51:16.827765 kernel: Secure boot disabled Sep 13 00:51:16.827771 kernel: ACPI: Early table checksum verification disabled Sep 13 00:51:16.827779 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 13 00:51:16.827785 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 13 00:51:16.827792 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:51:16.827799 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:51:16.827805 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 13 00:51:16.827811 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:51:16.827818 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:51:16.827824 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:51:16.827831 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:51:16.827838 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 13 00:51:16.827845 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 13 00:51:16.827851 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 13 00:51:16.827857 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 13 00:51:16.827864 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 13 00:51:16.827870 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 13 00:51:16.827876 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 13 00:51:16.827883 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 13 00:51:16.827889 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 13 00:51:16.827897 kernel: No NUMA configuration found Sep 13 00:51:16.827903 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 13 00:51:16.827921 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 13 
00:51:16.827928 kernel: Zone ranges: Sep 13 00:51:16.827934 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:51:16.827941 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 13 00:51:16.827947 kernel: Normal empty Sep 13 00:51:16.827953 kernel: Movable zone start for each node Sep 13 00:51:16.827960 kernel: Early memory node ranges Sep 13 00:51:16.827967 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 13 00:51:16.827973 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 13 00:51:16.827980 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 13 00:51:16.827986 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 13 00:51:16.827992 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 13 00:51:16.827999 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 13 00:51:16.828005 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 13 00:51:16.828011 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:51:16.828018 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 13 00:51:16.828024 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 13 00:51:16.828031 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:51:16.828038 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 13 00:51:16.828044 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 13 00:51:16.828051 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 13 00:51:16.828057 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 13 00:51:16.828063 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 13 00:51:16.828070 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 13 00:51:16.828076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 13 00:51:16.828082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 13 00:51:16.828090 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:51:16.828096 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 13 00:51:16.828103 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 13 00:51:16.828109 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:51:16.828116 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 13 00:51:16.828122 kernel: TSC deadline timer available Sep 13 00:51:16.828128 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 13 00:51:16.828135 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 13 00:51:16.828141 kernel: kvm-guest: setup PV sched yield Sep 13 00:51:16.828149 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 13 00:51:16.828155 kernel: Booting paravirtualized kernel on KVM Sep 13 00:51:16.828167 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:51:16.828175 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Sep 13 00:51:16.828182 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Sep 13 00:51:16.828188 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Sep 13 00:51:16.828195 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 13 00:51:16.828202 kernel: kvm-guest: setup async PF for cpu 0 Sep 13 00:51:16.828208 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Sep 13 00:51:16.828215 kernel: kvm-guest: PV spinlocks enabled Sep 13 00:51:16.828222 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 13 00:51:16.828228 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Sep 13 00:51:16.828236 kernel: Policy zone: DMA32 Sep 13 00:51:16.828244 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:51:16.828251 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:51:16.828258 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:51:16.828266 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:51:16.828273 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:51:16.828280 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 169308K reserved, 0K cma-reserved) Sep 13 00:51:16.828287 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 13 00:51:16.828294 kernel: ftrace: allocating 34614 entries in 136 pages Sep 13 00:51:16.828300 kernel: ftrace: allocated 136 pages with 2 groups Sep 13 00:51:16.828307 kernel: rcu: Hierarchical RCU implementation. Sep 13 00:51:16.828314 kernel: rcu: RCU event tracing is enabled. Sep 13 00:51:16.828321 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 13 00:51:16.828329 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:51:16.828336 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:51:16.828343 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 13 00:51:16.828350 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 13 00:51:16.828357 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 13 00:51:16.828363 kernel: Console: colour dummy device 80x25 Sep 13 00:51:16.828370 kernel: printk: console [ttyS0] enabled Sep 13 00:51:16.828377 kernel: ACPI: Core revision 20210730 Sep 13 00:51:16.828384 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 13 00:51:16.828391 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:51:16.828398 kernel: x2apic enabled Sep 13 00:51:16.828405 kernel: Switched APIC routing to physical x2apic. Sep 13 00:51:16.828412 kernel: kvm-guest: setup PV IPIs Sep 13 00:51:16.828418 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 13 00:51:16.828425 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 13 00:51:16.828432 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 13 00:51:16.828439 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 13 00:51:16.828445 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 13 00:51:16.828453 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 13 00:51:16.828460 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:51:16.828467 kernel: Spectre V2 : Mitigation: Retpolines Sep 13 00:51:16.828473 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 00:51:16.828480 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 13 00:51:16.828487 kernel: active return thunk: retbleed_return_thunk Sep 13 00:51:16.828494 kernel: RETBleed: Mitigation: untrained return thunk Sep 13 00:51:16.828501 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 13 00:51:16.828507 kernel: Speculative Store Bypass: Mitigation: Speculative Store 
Bypass disabled via prctl and seccomp Sep 13 00:51:16.828516 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:51:16.828522 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:51:16.828529 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:51:16.828536 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:51:16.828543 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 13 00:51:16.828549 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:51:16.828556 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:51:16.828563 kernel: LSM: Security Framework initializing Sep 13 00:51:16.828569 kernel: SELinux: Initializing. Sep 13 00:51:16.828577 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:51:16.828584 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:51:16.828591 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 13 00:51:16.828598 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 13 00:51:16.828604 kernel: ... version: 0 Sep 13 00:51:16.828611 kernel: ... bit width: 48 Sep 13 00:51:16.828618 kernel: ... generic registers: 6 Sep 13 00:51:16.828624 kernel: ... value mask: 0000ffffffffffff Sep 13 00:51:16.828631 kernel: ... max period: 00007fffffffffff Sep 13 00:51:16.828639 kernel: ... fixed-purpose events: 0 Sep 13 00:51:16.828646 kernel: ... event mask: 000000000000003f Sep 13 00:51:16.828652 kernel: signal: max sigframe size: 1776 Sep 13 00:51:16.828659 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:51:16.828666 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:51:16.828672 kernel: x86: Booting SMP configuration: Sep 13 00:51:16.828686 kernel: .... 
node #0, CPUs: #1 Sep 13 00:51:16.828693 kernel: kvm-clock: cpu 1, msr 5819f041, secondary cpu clock Sep 13 00:51:16.828699 kernel: kvm-guest: setup async PF for cpu 1 Sep 13 00:51:16.828708 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Sep 13 00:51:16.828715 kernel: #2 Sep 13 00:51:16.828722 kernel: kvm-clock: cpu 2, msr 5819f081, secondary cpu clock Sep 13 00:51:16.828729 kernel: kvm-guest: setup async PF for cpu 2 Sep 13 00:51:16.828735 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Sep 13 00:51:16.828742 kernel: #3 Sep 13 00:51:16.828749 kernel: kvm-clock: cpu 3, msr 5819f0c1, secondary cpu clock Sep 13 00:51:16.828755 kernel: kvm-guest: setup async PF for cpu 3 Sep 13 00:51:16.828762 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Sep 13 00:51:16.828769 kernel: smp: Brought up 1 node, 4 CPUs Sep 13 00:51:16.828777 kernel: smpboot: Max logical packages: 1 Sep 13 00:51:16.828784 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 13 00:51:16.828791 kernel: devtmpfs: initialized Sep 13 00:51:16.828797 kernel: x86/mm: Memory block size: 128MB Sep 13 00:51:16.828804 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 13 00:51:16.828811 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 13 00:51:16.828818 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 13 00:51:16.828825 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 13 00:51:16.828832 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 13 00:51:16.828840 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:51:16.828847 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 13 00:51:16.828853 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:51:16.828860 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family Sep 13 00:51:16.828867 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:51:16.828874 kernel: audit: type=2000 audit(1757724676.165:1): state=initialized audit_enabled=0 res=1 Sep 13 00:51:16.828880 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:51:16.828887 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:51:16.828895 kernel: cpuidle: using governor menu Sep 13 00:51:16.828902 kernel: ACPI: bus type PCI registered Sep 13 00:51:16.828918 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:51:16.828925 kernel: dca service started, version 1.12.1 Sep 13 00:51:16.828932 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 13 00:51:16.828939 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Sep 13 00:51:16.828947 kernel: PCI: Using configuration type 1 for base access Sep 13 00:51:16.828955 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 13 00:51:16.828963 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:51:16.828972 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:51:16.828979 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:51:16.828986 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:51:16.828992 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:51:16.828999 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 13 00:51:16.829006 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 13 00:51:16.829012 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 13 00:51:16.829019 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:51:16.829026 kernel: ACPI: Interpreter enabled Sep 13 00:51:16.829033 kernel: ACPI: PM: (supports S0 S3 S5) Sep 13 00:51:16.829040 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:51:16.829047 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:51:16.829054 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 13 00:51:16.829061 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:51:16.829169 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:51:16.829240 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 13 00:51:16.829305 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 13 00:51:16.829317 kernel: PCI host bridge to bus 0000:00 Sep 13 00:51:16.829387 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:51:16.829448 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 00:51:16.829509 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 00:51:16.829568 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 13 00:51:16.829628 kernel: pci_bus 0000:00: root bus resource [mem 
0xc0000000-0xfebfffff window] Sep 13 00:51:16.829746 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Sep 13 00:51:16.829827 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:51:16.829919 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 13 00:51:16.829997 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 13 00:51:16.830065 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 13 00:51:16.830132 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 13 00:51:16.830199 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 13 00:51:16.830268 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 13 00:51:16.830333 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:51:16.830407 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 13 00:51:16.830478 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 13 00:51:16.830546 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 13 00:51:16.830611 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 13 00:51:16.830693 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 13 00:51:16.830765 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 13 00:51:16.830831 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 13 00:51:16.830898 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 13 00:51:16.830990 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 13 00:51:16.831058 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 13 00:51:16.831125 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 13 00:51:16.831191 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 13 00:51:16.831260 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 13 00:51:16.831330 
kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 13 00:51:16.831397 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 13 00:51:16.831468 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 13 00:51:16.831533 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 13 00:51:16.831599 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 13 00:51:16.831670 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 13 00:51:16.831751 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 13 00:51:16.831760 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 13 00:51:16.831768 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 13 00:51:16.831775 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:51:16.831781 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 13 00:51:16.831788 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 13 00:51:16.831795 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 13 00:51:16.831802 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 13 00:51:16.831810 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 13 00:51:16.831817 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 13 00:51:16.831824 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 13 00:51:16.831831 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 13 00:51:16.831838 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 13 00:51:16.831844 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 13 00:51:16.831851 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 13 00:51:16.831858 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 13 00:51:16.831865 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 13 00:51:16.831873 kernel: iommu: Default domain type: Translated Sep 13 
00:51:16.831880 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:51:16.831963 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 13 00:51:16.832032 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:51:16.832097 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 13 00:51:16.832106 kernel: vgaarb: loaded Sep 13 00:51:16.832113 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 00:51:16.832120 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 00:51:16.832127 kernel: PTP clock support registered Sep 13 00:51:16.832136 kernel: Registered efivars operations Sep 13 00:51:16.832143 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:51:16.832150 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:51:16.832157 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 13 00:51:16.832163 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 13 00:51:16.832170 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Sep 13 00:51:16.832177 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Sep 13 00:51:16.832183 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 13 00:51:16.832190 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 13 00:51:16.832198 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 13 00:51:16.832205 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 13 00:51:16.832212 kernel: clocksource: Switched to clocksource kvm-clock Sep 13 00:51:16.832219 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:51:16.832225 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:51:16.832232 kernel: pnp: PnP ACPI init Sep 13 00:51:16.832304 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 13 00:51:16.832315 kernel: pnp: PnP ACPI: found 6 devices Sep 13 00:51:16.832322 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:51:16.832329 kernel: NET: Registered PF_INET protocol family Sep 13 00:51:16.832336 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:51:16.832343 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 13 00:51:16.832350 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:51:16.832357 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:51:16.832364 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 13 00:51:16.832371 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 13 00:51:16.832379 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:51:16.832386 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:51:16.832392 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:51:16.832399 kernel: NET: Registered PF_XDP protocol family Sep 13 00:51:16.832468 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 13 00:51:16.832536 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 13 00:51:16.832596 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:51:16.832655 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:51:16.832726 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:51:16.832785 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 13 00:51:16.832843 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 13 00:51:16.832902 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Sep 13 00:51:16.832923 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:51:16.832930 kernel: Initialise system trusted keyrings Sep 13 00:51:16.832938 
kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:51:16.832944 kernel: Key type asymmetric registered
Sep 13 00:51:16.832951 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:51:16.832960 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:51:16.832967 kernel: io scheduler mq-deadline registered
Sep 13 00:51:16.832983 kernel: io scheduler kyber registered
Sep 13 00:51:16.832991 kernel: io scheduler bfq registered
Sep 13 00:51:16.832999 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:51:16.833006 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:51:16.833014 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:51:16.833021 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:51:16.833028 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:51:16.833036 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:51:16.833044 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:51:16.833051 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:51:16.833064 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:51:16.833170 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:51:16.833182 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:51:16.833243 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:51:16.833694 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:51:16 UTC (1757724676)
Sep 13 00:51:16.833772 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:51:16.833781 kernel: efifb: probing for efifb
Sep 13 00:51:16.833789 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 13 00:51:16.833796 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 13 00:51:16.833803 kernel: efifb: scrolling: redraw
Sep 13 00:51:16.833810 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:51:16.833817 kernel: Console: switching to colour frame buffer device 160x50
Sep 13 00:51:16.833824 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:51:16.833832 kernel: pstore: Registered efi as persistent store backend
Sep 13 00:51:16.833841 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:51:16.833848 kernel: Segment Routing with IPv6
Sep 13 00:51:16.833855 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:51:16.833863 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:51:16.833870 kernel: Key type dns_resolver registered
Sep 13 00:51:16.833877 kernel: IPI shorthand broadcast: enabled
Sep 13 00:51:16.833886 kernel: sched_clock: Marking stable (406267707, 123122580)->(608378657, -78988370)
Sep 13 00:51:16.833893 kernel: registered taskstats version 1
Sep 13 00:51:16.833900 kernel: Loading compiled-in X.509 certificates
Sep 13 00:51:16.833923 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:51:16.833930 kernel: Key type .fscrypt registered
Sep 13 00:51:16.833941 kernel: Key type fscrypt-provisioning registered
Sep 13 00:51:16.833948 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:51:16.833955 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:51:16.834895 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:51:16.834933 kernel: ima: No architecture policies found
Sep 13 00:51:16.834941 kernel: clk: Disabling unused clocks
Sep 13 00:51:16.834948 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:51:16.834956 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:51:16.834963 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:51:16.834970 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:51:16.834978 kernel: Run /init as init process
Sep 13 00:51:16.834985 kernel: with arguments:
Sep 13 00:51:16.834995 kernel: /init
Sep 13 00:51:16.835002 kernel: with environment:
Sep 13 00:51:16.835010 kernel: HOME=/
Sep 13 00:51:16.835016 kernel: TERM=linux
Sep 13 00:51:16.835023 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:51:16.835033 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:51:16.835043 systemd[1]: Detected virtualization kvm.
Sep 13 00:51:16.835050 systemd[1]: Detected architecture x86-64.
Sep 13 00:51:16.835059 systemd[1]: Running in initrd.
Sep 13 00:51:16.835066 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:51:16.835074 systemd[1]: Hostname set to <localhost>.
Sep 13 00:51:16.835082 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:51:16.835089 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:51:16.835097 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:51:16.835104 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:51:16.835111 systemd[1]: Reached target paths.target.
Sep 13 00:51:16.835119 systemd[1]: Reached target slices.target.
Sep 13 00:51:16.835127 systemd[1]: Reached target swap.target.
Sep 13 00:51:16.835135 systemd[1]: Reached target timers.target.
Sep 13 00:51:16.835143 systemd[1]: Listening on iscsid.socket.
Sep 13 00:51:16.835150 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:51:16.835158 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:51:16.835165 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:51:16.835173 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:51:16.835182 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:51:16.835190 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:51:16.835197 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:51:16.835205 systemd[1]: Reached target sockets.target.
Sep 13 00:51:16.835213 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:51:16.835220 systemd[1]: Finished network-cleanup.service.
Sep 13 00:51:16.835228 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:51:16.835235 systemd[1]: Starting systemd-journald.service...
Sep 13 00:51:16.835242 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:51:16.835251 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:51:16.835259 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:51:16.835266 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:51:16.835274 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:51:16.835281 kernel: audit: type=1130 audit(1757724676.826:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.835289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:51:16.835299 systemd-journald[197]: Journal started
Sep 13 00:51:16.835341 systemd-journald[197]: Runtime Journal (/run/log/journal/39c16d072fa649daafd79f23e073ab5c) is 6.0M, max 48.4M, 42.4M free.
Sep 13 00:51:16.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.833592 systemd-modules-load[198]: Inserted module 'overlay'
Sep 13 00:51:16.838285 systemd[1]: Started systemd-journald.service.
Sep 13 00:51:16.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.838702 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:51:16.846071 kernel: audit: type=1130 audit(1757724676.838:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.846107 kernel: audit: type=1130 audit(1757724676.842:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.845162 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:51:16.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.850929 kernel: audit: type=1130 audit(1757724676.846:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.851224 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:51:16.858332 systemd-resolved[199]: Positive Trust Anchors:
Sep 13 00:51:16.858532 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:51:16.858719 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:51:16.860938 systemd-resolved[199]: Defaulting to hostname 'linux'.
Sep 13 00:51:16.861721 systemd[1]: Started systemd-resolved.service.
Sep 13 00:51:16.862253 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:51:16.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.865931 kernel: audit: type=1130 audit(1757724676.857:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.875929 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:51:16.877492 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:51:16.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.878510 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:51:16.883699 kernel: audit: type=1130 audit(1757724676.876:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.883715 kernel: Bridge firewalling registered
Sep 13 00:51:16.882971 systemd-modules-load[198]: Inserted module 'br_netfilter'
Sep 13 00:51:16.886767 dracut-cmdline[215]: dracut-dracut-053
Sep 13 00:51:16.888745 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:51:16.902930 kernel: SCSI subsystem initialized
Sep 13 00:51:16.914184 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:51:16.914207 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:51:16.915413 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 00:51:16.918288 systemd-modules-load[198]: Inserted module 'dm_multipath'
Sep 13 00:51:16.918953 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:51:16.925084 kernel: audit: type=1130 audit(1757724676.920:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.921444 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:51:16.929405 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:51:16.933937 kernel: audit: type=1130 audit(1757724676.930:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:16.945925 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:51:16.960926 kernel: iscsi: registered transport (tcp)
Sep 13 00:51:16.981927 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:51:16.981949 kernel: QLogic iSCSI HBA Driver
Sep 13 00:51:17.002135 systemd[1]: Finished dracut-cmdline.service.
Sep 13 00:51:17.006991 kernel: audit: type=1130 audit(1757724677.002:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:17.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:17.003874 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 00:51:17.046943 kernel: raid6: avx2x4 gen() 29762 MB/s
Sep 13 00:51:17.063942 kernel: raid6: avx2x4 xor() 7359 MB/s
Sep 13 00:51:17.080936 kernel: raid6: avx2x2 gen() 32015 MB/s
Sep 13 00:51:17.097941 kernel: raid6: avx2x2 xor() 18990 MB/s
Sep 13 00:51:17.114934 kernel: raid6: avx2x1 gen() 26063 MB/s
Sep 13 00:51:17.131934 kernel: raid6: avx2x1 xor() 15314 MB/s
Sep 13 00:51:17.148932 kernel: raid6: sse2x4 gen() 14818 MB/s
Sep 13 00:51:17.165933 kernel: raid6: sse2x4 xor() 6824 MB/s
Sep 13 00:51:17.182932 kernel: raid6: sse2x2 gen() 16237 MB/s
Sep 13 00:51:17.199945 kernel: raid6: sse2x2 xor() 9624 MB/s
Sep 13 00:51:17.271938 kernel: raid6: sse2x1 gen() 11917 MB/s
Sep 13 00:51:17.289257 kernel: raid6: sse2x1 xor() 7814 MB/s
Sep 13 00:51:17.289276 kernel: raid6: using algorithm avx2x2 gen() 32015 MB/s
Sep 13 00:51:17.289286 kernel: raid6: .... xor() 18990 MB/s, rmw enabled
Sep 13 00:51:17.289950 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:51:17.347938 kernel: xor: automatically using best checksumming function avx
Sep 13 00:51:17.436941 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Sep 13 00:51:17.443131 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 00:51:17.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:17.443000 audit: BPF prog-id=7 op=LOAD
Sep 13 00:51:17.443000 audit: BPF prog-id=8 op=LOAD
Sep 13 00:51:17.445114 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:51:17.456236 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Sep 13 00:51:17.459899 systemd[1]: Started systemd-udevd.service.
Sep 13 00:51:17.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:17.461823 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 00:51:17.471160 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Sep 13 00:51:17.493068 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 00:51:17.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:17.495358 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:51:17.525935 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:51:17.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:17.580942 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:51:17.590307 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:51:17.590325 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:51:17.590337 kernel: GPT:9289727 != 19775487
Sep 13 00:51:17.590346 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:51:17.590355 kernel: GPT:9289727 != 19775487
Sep 13 00:51:17.590363 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:51:17.590371 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:51:17.590379 kernel: libata version 3.00 loaded.
Sep 13 00:51:17.599930 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:51:17.599958 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:51:17.604927 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (456)
Sep 13 00:51:17.613423 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 13 00:51:17.615504 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 13 00:51:17.621226 kernel: ahci 0000:00:1f.2: version 3.0
Sep 13 00:51:17.634474 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 13 00:51:17.634488 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 13 00:51:17.634572 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 13 00:51:17.634645 kernel: scsi host0: ahci
Sep 13 00:51:17.634743 kernel: scsi host1: ahci
Sep 13 00:51:17.634822 kernel: scsi host2: ahci
Sep 13 00:51:17.634901 kernel: scsi host3: ahci
Sep 13 00:51:17.635001 kernel: scsi host4: ahci
Sep 13 00:51:17.635083 kernel: scsi host5: ahci
Sep 13 00:51:17.635159 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 13 00:51:17.635169 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 13 00:51:17.635178 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 13 00:51:17.635187 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 13 00:51:17.635195 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 13 00:51:17.635207 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 13 00:51:17.622027 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 13 00:51:17.627551 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 13 00:51:17.633838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:51:17.640677 systemd[1]: Starting disk-uuid.service...
Sep 13 00:51:17.945362 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 13 00:51:17.945408 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 00:51:17.945936 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 13 00:51:17.946930 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 00:51:17.947939 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 13 00:51:17.948949 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 13 00:51:17.948965 kernel: ata3.00: applying bridge limits
Sep 13 00:51:17.950207 kernel: ata3.00: configured for UDMA/100
Sep 13 00:51:17.950941 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 00:51:17.955994 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 00:51:17.983979 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 13 00:51:18.001397 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:51:18.001409 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:51:18.145847 disk-uuid[534]: Primary Header is updated.
Sep 13 00:51:18.145847 disk-uuid[534]: Secondary Entries is updated.
Sep 13 00:51:18.145847 disk-uuid[534]: Secondary Header is updated.
Sep 13 00:51:18.185937 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:51:18.188923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:51:19.202444 disk-uuid[549]: The operation has completed successfully.
Sep 13 00:51:19.204466 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:51:19.221495 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:51:19.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.221589 systemd[1]: Finished disk-uuid.service.
Sep 13 00:51:19.232670 systemd[1]: Starting verity-setup.service...
Sep 13 00:51:19.316947 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 13 00:51:19.336588 systemd[1]: Found device dev-mapper-usr.device.
Sep 13 00:51:19.337394 systemd[1]: Mounting sysusr-usr.mount...
Sep 13 00:51:19.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.338903 systemd[1]: Finished verity-setup.service.
Sep 13 00:51:19.410837 systemd[1]: Mounted sysusr-usr.mount.
Sep 13 00:51:19.412317 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 13 00:51:19.411725 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 13 00:51:19.412323 systemd[1]: Starting ignition-setup.service...
Sep 13 00:51:19.413572 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 13 00:51:19.422726 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:51:19.422763 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:51:19.422776 kernel: BTRFS info (device vda6): has skinny extents
Sep 13 00:51:19.432248 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:51:19.470576 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 13 00:51:19.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.471000 audit: BPF prog-id=9 op=LOAD
Sep 13 00:51:19.473237 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:51:19.493571 systemd-networkd[717]: lo: Link UP
Sep 13 00:51:19.493580 systemd-networkd[717]: lo: Gained carrier
Sep 13 00:51:19.495378 systemd-networkd[717]: Enumeration completed
Sep 13 00:51:19.495455 systemd[1]: Started systemd-networkd.service.
Sep 13 00:51:19.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.496321 systemd[1]: Reached target network.target.
Sep 13 00:51:19.499180 systemd[1]: Starting iscsiuio.service...
Sep 13 00:51:19.500777 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:51:19.503313 systemd-networkd[717]: eth0: Link UP
Sep 13 00:51:19.503320 systemd-networkd[717]: eth0: Gained carrier
Sep 13 00:51:19.520299 systemd[1]: Started iscsiuio.service.
Sep 13 00:51:19.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.521855 systemd[1]: Starting iscsid.service...
Sep 13 00:51:19.525967 iscsid[722]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 00:51:19.525967 iscsid[722]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Sep 13 00:51:19.525967 iscsid[722]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 13 00:51:19.525967 iscsid[722]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 13 00:51:19.525967 iscsid[722]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 00:51:19.580405 iscsid[722]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 13 00:51:19.582782 systemd[1]: Started iscsid.service.
Sep 13 00:51:19.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.584854 systemd[1]: Starting dracut-initqueue.service...
Sep 13 00:51:19.589022 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:51:19.595767 systemd[1]: Finished dracut-initqueue.service.
Sep 13 00:51:19.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.596681 systemd[1]: Reached target remote-fs-pre.target.
Sep 13 00:51:19.598094 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:51:19.598945 systemd[1]: Reached target remote-fs.target.
Sep 13 00:51:19.601121 systemd[1]: Starting dracut-pre-mount.service...
Sep 13 00:51:19.607887 systemd[1]: Finished dracut-pre-mount.service.
Sep 13 00:51:19.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.731739 systemd[1]: Finished ignition-setup.service.
Sep 13 00:51:19.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.734438 systemd[1]: Starting ignition-fetch-offline.service...
Sep 13 00:51:19.853729 ignition[737]: Ignition 2.14.0
Sep 13 00:51:19.853740 ignition[737]: Stage: fetch-offline
Sep 13 00:51:19.853823 ignition[737]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:51:19.853833 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:51:19.854009 ignition[737]: parsed url from cmdline: ""
Sep 13 00:51:19.854013 ignition[737]: no config URL provided
Sep 13 00:51:19.854019 ignition[737]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:51:19.854027 ignition[737]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:51:19.854045 ignition[737]: op(1): [started] loading QEMU firmware config module
Sep 13 00:51:19.854051 ignition[737]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 00:51:19.860761 ignition[737]: op(1): [finished] loading QEMU firmware config module
Sep 13 00:51:19.900159 ignition[737]: parsing config with SHA512: 73caacb44cea4e9be2587b679e0aa90d806f0546d5db5d01ee39e8f9f7d695bca77c9a0924bdd752780280f77e209bc9eb3d013a28670c030409ee9dd6e601c0
Sep 13 00:51:19.908746 unknown[737]: fetched base config from "system"
Sep 13 00:51:19.908757 unknown[737]: fetched user config from "qemu"
Sep 13 00:51:19.909288 ignition[737]: fetch-offline: fetch-offline passed
Sep 13 00:51:19.909365 ignition[737]: Ignition finished successfully
Sep 13 00:51:19.912405 systemd[1]: Finished ignition-fetch-offline.service.
Sep 13 00:51:19.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.914281 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:51:19.916887 systemd[1]: Starting ignition-kargs.service...
Sep 13 00:51:19.927802 ignition[745]: Ignition 2.14.0
Sep 13 00:51:19.927825 ignition[745]: Stage: kargs
Sep 13 00:51:19.927963 ignition[745]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:51:19.927991 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:51:19.929678 ignition[745]: kargs: kargs passed
Sep 13 00:51:19.929723 ignition[745]: Ignition finished successfully
Sep 13 00:51:19.933855 systemd[1]: Finished ignition-kargs.service.
Sep 13 00:51:19.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.936107 systemd[1]: Starting ignition-disks.service...
Sep 13 00:51:19.995377 ignition[751]: Ignition 2.14.0
Sep 13 00:51:19.995388 ignition[751]: Stage: disks
Sep 13 00:51:19.995517 ignition[751]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:51:19.995525 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:51:19.996894 ignition[751]: disks: disks passed
Sep 13 00:51:19.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:19.998072 systemd[1]: Finished ignition-disks.service.
Sep 13 00:51:19.996943 ignition[751]: Ignition finished successfully
Sep 13 00:51:19.998964 systemd[1]: Reached target initrd-root-device.target.
Sep 13 00:51:20.000948 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:51:20.002367 systemd[1]: Reached target local-fs.target.
Sep 13 00:51:20.003875 systemd[1]: Reached target sysinit.target.
Sep 13 00:51:20.005271 systemd[1]: Reached target basic.target.
Sep 13 00:51:20.007360 systemd[1]: Starting systemd-fsck-root.service...
Sep 13 00:51:20.091423 systemd-fsck[759]: ROOT: clean, 629/553520 files, 56028/553472 blocks
Sep 13 00:51:20.259539 systemd[1]: Finished systemd-fsck-root.service.
Sep 13 00:51:20.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.263750 systemd[1]: Mounting sysroot.mount...
Sep 13 00:51:20.270927 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 00:51:20.271229 systemd[1]: Mounted sysroot.mount.
Sep 13 00:51:20.272984 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 00:51:20.276359 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 00:51:20.278495 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 13 00:51:20.278548 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:51:20.279970 systemd[1]: Reached target ignition-diskful.target.
Sep 13 00:51:20.284249 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 00:51:20.286392 systemd[1]: Starting initrd-setup-root.service...
Sep 13 00:51:20.290696 initrd-setup-root[769]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:51:20.296894 initrd-setup-root[777]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:51:20.301357 initrd-setup-root[785]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:51:20.304555 initrd-setup-root[793]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:51:20.328081 systemd[1]: Finished initrd-setup-root.service.
Sep 13 00:51:20.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.409830 systemd[1]: Starting ignition-mount.service...
Sep 13 00:51:20.411147 systemd[1]: Starting sysroot-boot.service...
Sep 13 00:51:20.412160 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 00:51:20.419515 bash[811]: umount: /sysroot/usr/share/oem: not mounted.
Sep 13 00:51:20.421955 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (807)
Sep 13 00:51:20.424195 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:51:20.424234 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:51:20.424247 kernel: BTRFS info (device vda6): has skinny extents
Sep 13 00:51:20.507702 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:51:20.516154 ignition[813]: INFO : Ignition 2.14.0
Sep 13 00:51:20.516154 ignition[813]: INFO : Stage: mount
Sep 13 00:51:20.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.518635 ignition[813]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:51:20.518635 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:51:20.518635 ignition[813]: INFO : mount: mount passed
Sep 13 00:51:20.518635 ignition[813]: INFO : Ignition finished successfully
Sep 13 00:51:20.516761 systemd[1]: Finished sysroot-boot.service.
Sep 13 00:51:20.523690 systemd[1]: Finished ignition-mount.service.
Sep 13 00:51:20.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:20.524401 systemd[1]: Starting ignition-files.service...
Sep 13 00:51:20.538888 ignition[842]: INFO : Ignition 2.14.0
Sep 13 00:51:20.538888 ignition[842]: INFO : Stage: files
Sep 13 00:51:20.540482 ignition[842]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:51:20.540482 ignition[842]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:51:20.543091 ignition[842]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:51:20.543091 ignition[842]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:51:20.543091 ignition[842]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:51:20.554525 ignition[842]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:51:20.554525 ignition[842]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:51:20.554525 ignition[842]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:51:20.554525 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:51:20.554525 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:51:20.554525 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:51:20.554525 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:51:20.544984 unknown[842]: wrote ssh authorized keys file for user: core
Sep 13 00:51:20.597820 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:51:21.016286 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:51:21.018364 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:51:21.018364 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 00:51:21.042012 systemd-networkd[717]: eth0: Gained IPv6LL
Sep 13 00:51:21.267980 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Sep 13 00:51:21.476690 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:51:21.478896 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:51:21.813389 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Sep 13 00:51:22.517150 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:51:22.517150 ignition[842]: INFO : files: op(d): [started] processing unit "containerd.service"
Sep 13 00:51:22.520426 ignition[842]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:51:22.522666 ignition[842]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:51:22.522666 ignition[842]: INFO : files: op(d): [finished] processing unit "containerd.service"
Sep 13 00:51:22.526208 ignition[842]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Sep 13 00:51:22.526208 ignition[842]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:51:22.529827 ignition[842]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:51:22.529827 ignition[842]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Sep 13 00:51:22.529827 ignition[842]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Sep 13 00:51:22.535336 ignition[842]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:51:22.537836 ignition[842]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:51:22.537836 ignition[842]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Sep 13 00:51:22.537836 ignition[842]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:51:22.543201 ignition[842]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:51:22.543201 ignition[842]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:51:22.546483 ignition[842]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:51:22.595185 ignition[842]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:51:22.596993 ignition[842]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:51:22.598680 ignition[842]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:51:22.600532 ignition[842]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:51:22.602116 ignition[842]: INFO : files: files passed
Sep 13 00:51:22.602951 ignition[842]: INFO : Ignition finished successfully
Sep 13 00:51:22.604373 systemd[1]: Finished ignition-files.service.
Sep 13 00:51:22.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.606145 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 00:51:22.612975 kernel: kauditd_printk_skb: 24 callbacks suppressed
Sep 13 00:51:22.612998 kernel: audit: type=1130 audit(1757724682.604:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.613012 kernel: audit: type=1130 audit(1757724682.612:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.609411 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 00:51:22.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.610009 systemd[1]: Starting ignition-quench.service...
Sep 13 00:51:22.625674 kernel: audit: type=1130 audit(1757724682.617:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.625688 kernel: audit: type=1131 audit(1757724682.617:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.611517 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 00:51:22.627400 initrd-setup-root-after-ignition[865]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Sep 13 00:51:22.613114 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:51:22.629819 initrd-setup-root-after-ignition[868]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:51:22.613173 systemd[1]: Finished ignition-quench.service.
Sep 13 00:51:22.618767 systemd[1]: Reached target ignition-complete.target.
Sep 13 00:51:22.626289 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 00:51:22.636034 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:51:22.636100 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 00:51:22.644921 kernel: audit: type=1130 audit(1757724682.636:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.644943 kernel: audit: type=1131 audit(1757724682.636:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.637813 systemd[1]: Reached target initrd-fs.target.
Sep 13 00:51:22.644949 systemd[1]: Reached target initrd.target.
Sep 13 00:51:22.645775 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 00:51:22.646500 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 00:51:22.655785 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 00:51:22.661444 kernel: audit: type=1130 audit(1757724682.655:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.657303 systemd[1]: Starting initrd-cleanup.service...
Sep 13 00:51:22.665681 systemd[1]: Stopped target nss-lookup.target.
Sep 13 00:51:22.666525 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 00:51:22.668074 systemd[1]: Stopped target timers.target.
Sep 13 00:51:22.669563 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:51:22.675844 kernel: audit: type=1131 audit(1757724682.670:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.669687 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 00:51:22.671154 systemd[1]: Stopped target initrd.target.
Sep 13 00:51:22.675948 systemd[1]: Stopped target basic.target.
Sep 13 00:51:22.677580 systemd[1]: Stopped target ignition-complete.target.
Sep 13 00:51:22.679234 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 00:51:22.680868 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 00:51:22.682623 systemd[1]: Stopped target remote-fs.target.
Sep 13 00:51:22.684239 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 00:51:22.685950 systemd[1]: Stopped target sysinit.target.
Sep 13 00:51:22.687441 systemd[1]: Stopped target local-fs.target.
Sep 13 00:51:22.689040 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 00:51:22.690687 systemd[1]: Stopped target swap.target.
Sep 13 00:51:22.698090 kernel: audit: type=1131 audit(1757724682.692:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.692148 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:51:22.692228 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 00:51:22.704580 kernel: audit: type=1131 audit(1757724682.699:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.693875 systemd[1]: Stopped target cryptsetup.target.
Sep 13 00:51:22.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.698125 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:51:22.698204 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 00:51:22.700029 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:51:22.700114 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 00:51:22.704683 systemd[1]: Stopped target paths.target.
Sep 13 00:51:22.706229 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:51:22.708963 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 00:51:22.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.710020 systemd[1]: Stopped target slices.target.
Sep 13 00:51:22.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.711514 systemd[1]: Stopped target sockets.target.
Sep 13 00:51:22.719094 iscsid[722]: iscsid shutting down.
Sep 13 00:51:22.712925 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:51:22.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.713014 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 00:51:22.714881 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:51:22.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.714991 systemd[1]: Stopped ignition-files.service.
Sep 13 00:51:22.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.717138 systemd[1]: Stopping ignition-mount.service...
Sep 13 00:51:22.717985 systemd[1]: Stopping iscsid.service...
Sep 13 00:51:22.719052 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:51:22.729936 ignition[882]: INFO : Ignition 2.14.0
Sep 13 00:51:22.729936 ignition[882]: INFO : Stage: umount
Sep 13 00:51:22.729936 ignition[882]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:51:22.729936 ignition[882]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:51:22.729936 ignition[882]: INFO : umount: umount passed
Sep 13 00:51:22.729936 ignition[882]: INFO : Ignition finished successfully
Sep 13 00:51:22.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.719163 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 00:51:22.720811 systemd[1]: Stopping sysroot-boot.service...
Sep 13 00:51:22.722314 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:51:22.722495 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 00:51:22.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.724237 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:51:22.724388 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 00:51:22.728707 systemd[1]: iscsid.service: Deactivated successfully.
Sep 13 00:51:22.728820 systemd[1]: Stopped iscsid.service.
Sep 13 00:51:22.730618 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:51:22.730700 systemd[1]: Stopped ignition-mount.service.
Sep 13 00:51:22.731806 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:51:22.731867 systemd[1]: Finished initrd-cleanup.service.
Sep 13 00:51:22.733991 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:51:22.734871 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:51:22.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.734915 systemd[1]: Closed iscsid.socket.
Sep 13 00:51:22.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.736084 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:51:22.736125 systemd[1]: Stopped ignition-disks.service.
Sep 13 00:51:22.737720 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:51:22.737754 systemd[1]: Stopped ignition-kargs.service.
Sep 13 00:51:22.763000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 00:51:22.739339 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:51:22.739368 systemd[1]: Stopped ignition-setup.service.
Sep 13 00:51:22.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.740248 systemd[1]: Stopping iscsiuio.service...
Sep 13 00:51:22.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.742986 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 00:51:22.743047 systemd[1]: Stopped iscsiuio.service.
Sep 13 00:51:22.744216 systemd[1]: Stopped target network.target.
Sep 13 00:51:22.745686 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:51:22.745711 systemd[1]: Closed iscsiuio.socket.
Sep 13 00:51:22.746439 systemd[1]: Stopping systemd-networkd.service...
Sep 13 00:51:22.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.748079 systemd[1]: Stopping systemd-resolved.service...
Sep 13 00:51:22.753956 systemd-networkd[717]: eth0: DHCPv6 lease lost
Sep 13 00:51:22.776000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 00:51:22.754844 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:51:22.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.754924 systemd[1]: Stopped systemd-networkd.service.
Sep 13 00:51:22.757979 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:51:22.758044 systemd[1]: Stopped systemd-resolved.service.
Sep 13 00:51:22.761305 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:51:22.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.761329 systemd[1]: Closed systemd-networkd.socket.
Sep 13 00:51:22.763575 systemd[1]: Stopping network-cleanup.service...
Sep 13 00:51:22.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.764445 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:51:22.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.764486 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 00:51:22.766309 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:51:22.766342 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:51:22.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.767955 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:51:22.767990 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 00:51:22.769675 systemd[1]: Stopping systemd-udevd.service...
Sep 13 00:51:22.772968 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:51:22.775557 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:51:22.775634 systemd[1]: Stopped network-cleanup.service.
Sep 13 00:51:22.778057 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:51:22.778174 systemd[1]: Stopped systemd-udevd.service.
Sep 13 00:51:22.780671 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:51:22.780698 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 00:51:22.782188 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:51:22.782220 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 00:51:22.784051 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:51:22.784092 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 00:51:22.785556 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:51:22.785592 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 00:51:22.787269 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:51:22.787300 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 00:51:22.789536 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 00:51:22.790568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:51:22.790605 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 00:51:22.794251 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:51:22.794313 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 00:51:22.843576 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:51:22.843652 systemd[1]: Stopped sysroot-boot.service.
Sep 13 00:51:22.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.845454 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 00:51:22.847028 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:51:22.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:22.847076 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 00:51:22.849331 systemd[1]: Starting initrd-switch-root.service...
Sep 13 00:51:22.854148 systemd[1]: Switching root.
Sep 13 00:51:22.854000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 00:51:22.854000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 00:51:22.856000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 00:51:22.856000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 00:51:22.856000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 00:51:22.874703 systemd-journald[197]: Journal stopped
Sep 13 00:51:26.946825 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:51:26.946871 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 00:51:26.946883 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 00:51:26.946893 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 13 00:51:26.946902 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:51:26.946933 kernel: SELinux: policy capability open_perms=1
Sep 13 00:51:26.946944 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:51:26.946954 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:51:26.946966 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:51:26.946976 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:51:26.946985 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:51:26.946994 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:51:26.947005 systemd[1]: Successfully loaded SELinux policy in 41.227ms.
Sep 13 00:51:26.947031 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.925ms.
Sep 13 00:51:26.947046 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:51:26.947057 systemd[1]: Detected virtualization kvm.
Sep 13 00:51:26.947067 systemd[1]: Detected architecture x86-64.
Sep 13 00:51:26.947077 systemd[1]: Detected first boot.
Sep 13 00:51:26.947091 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:51:26.947102 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 00:51:26.947112 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:51:26.947122 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:51:26.947136 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:51:26.947147 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:51:26.947160 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:51:26.947170 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 13 00:51:26.947182 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 00:51:26.947192 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 00:51:26.947202 systemd[1]: Created slice system-getty.slice.
Sep 13 00:51:26.947212 systemd[1]: Created slice system-modprobe.slice.
Sep 13 00:51:26.947222 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:51:26.947233 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:51:26.947244 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:51:26.947254 systemd[1]: Created slice user.slice. Sep 13 00:51:26.947267 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:51:26.947280 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:51:26.947290 systemd[1]: Set up automount boot.automount. Sep 13 00:51:26.947301 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:51:26.947314 systemd[1]: Reached target integritysetup.target. Sep 13 00:51:26.947324 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:51:26.947334 systemd[1]: Reached target remote-fs.target. Sep 13 00:51:26.947345 systemd[1]: Reached target slices.target. Sep 13 00:51:26.947356 systemd[1]: Reached target swap.target. Sep 13 00:51:26.947368 systemd[1]: Reached target torcx.target. Sep 13 00:51:26.947378 systemd[1]: Reached target veritysetup.target. Sep 13 00:51:26.947398 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:51:26.947408 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:51:26.947418 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:51:26.947428 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:51:26.947438 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:51:26.947448 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:51:26.947458 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:51:26.947469 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:51:26.947481 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:51:26.947492 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:51:26.947502 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:51:26.947513 systemd[1]: Mounting media.mount... 
Sep 13 00:51:26.947524 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:51:26.947534 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:51:26.947544 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:51:26.947554 systemd[1]: Mounting tmp.mount... Sep 13 00:51:26.947565 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:51:26.947576 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:51:26.947587 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:51:26.947597 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:51:26.947607 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:51:26.947617 systemd[1]: Starting modprobe@drm.service... Sep 13 00:51:26.947627 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:51:26.947637 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:51:26.947647 systemd[1]: Starting modprobe@loop.service... Sep 13 00:51:26.947658 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:51:26.947670 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 13 00:51:26.947679 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 13 00:51:26.947689 systemd[1]: Starting systemd-journald.service... Sep 13 00:51:26.947699 kernel: fuse: init (API version 7.34) Sep 13 00:51:26.947709 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:51:26.947719 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:51:26.947729 kernel: loop: module loaded Sep 13 00:51:26.947739 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:51:26.947749 systemd[1]: Starting systemd-udev-trigger.service... 
Sep 13 00:51:26.947761 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:51:26.947771 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:51:26.947781 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:51:26.947791 systemd[1]: Mounted media.mount. Sep 13 00:51:26.947802 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:51:26.947815 systemd-journald[1023]: Journal started Sep 13 00:51:26.947850 systemd-journald[1023]: Runtime Journal (/run/log/journal/39c16d072fa649daafd79f23e073ab5c) is 6.0M, max 48.4M, 42.4M free. Sep 13 00:51:26.947879 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:51:26.853000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:51:26.853000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 00:51:26.944000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:51:26.944000 audit[1023]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff0c405a40 a2=4000 a3=7fff0c405adc items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:26.944000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:51:26.951000 systemd[1]: Started systemd-journald.service. Sep 13 00:51:26.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:26.951828 systemd[1]: Mounted tmp.mount. Sep 13 00:51:26.952887 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:51:26.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.954017 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:51:26.954238 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:51:26.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.955465 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:51:26.955647 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:51:26.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.956854 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:51:26.957118 systemd[1]: Finished modprobe@drm.service. 
Sep 13 00:51:26.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.958539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:51:26.958838 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:51:26.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.960241 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:51:26.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.961395 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:51:26.961623 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:51:26.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:26.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.962620 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:51:26.962834 systemd[1]: Finished modprobe@loop.service. Sep 13 00:51:26.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.964215 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:51:26.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.965795 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:51:26.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.967119 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:51:26.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.968453 systemd[1]: Reached target network-pre.target. Sep 13 00:51:26.970343 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Sep 13 00:51:26.972142 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:51:26.972897 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:51:26.974363 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:51:26.976584 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:51:26.977683 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:51:26.978602 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:51:26.979608 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:51:26.980526 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:51:26.982408 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:51:26.983003 systemd-journald[1023]: Time spent on flushing to /var/log/journal/39c16d072fa649daafd79f23e073ab5c is 13.827ms for 1105 entries. Sep 13 00:51:26.983003 systemd-journald[1023]: System Journal (/var/log/journal/39c16d072fa649daafd79f23e073ab5c) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:51:27.023111 systemd-journald[1023]: Received client request to flush runtime journal. Sep 13 00:51:26.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:27.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:27.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:27.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:26.987234 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:51:26.988449 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:51:26.989480 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:51:27.024203 udevadm[1064]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:51:26.991374 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:51:27.004208 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:51:27.005399 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:51:27.008931 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:51:27.010066 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:51:27.012255 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:51:27.025046 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:51:27.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:27.031837 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:51:27.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:27.618564 systemd[1]: Finished systemd-hwdb-update.service. 
Sep 13 00:51:27.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:27.620546 kernel: kauditd_printk_skb: 76 callbacks suppressed Sep 13 00:51:27.620597 kernel: audit: type=1130 audit(1757724687.618:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:27.621412 systemd[1]: Starting systemd-udevd.service... Sep 13 00:51:27.638255 systemd-udevd[1075]: Using default interface naming scheme 'v252'. Sep 13 00:51:27.651676 systemd[1]: Started systemd-udevd.service. Sep 13 00:51:27.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:27.653929 systemd[1]: Starting systemd-networkd.service... Sep 13 00:51:27.656926 kernel: audit: type=1130 audit(1757724687.651:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:27.658803 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:51:27.688424 systemd[1]: Found device dev-ttyS0.device. Sep 13 00:51:27.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:27.689638 systemd[1]: Started systemd-userdbd.service. 
Sep 13 00:51:27.697840 kernel: audit: type=1130 audit(1757724687.690:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:27.732939 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:51:27.733279 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:51:27.737968 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:51:27.744263 systemd-networkd[1076]: lo: Link UP Sep 13 00:51:27.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:27.744271 systemd-networkd[1076]: lo: Gained carrier Sep 13 00:51:27.744615 systemd-networkd[1076]: Enumeration completed Sep 13 00:51:27.744737 systemd[1]: Started systemd-networkd.service. Sep 13 00:51:27.746620 systemd-networkd[1076]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:51:27.747568 systemd-networkd[1076]: eth0: Link UP Sep 13 00:51:27.747576 systemd-networkd[1076]: eth0: Gained carrier Sep 13 00:51:27.749984 kernel: audit: type=1130 audit(1757724687.744:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:27.748000 audit[1087]: AVC avc: denied { confidentiality } for pid=1087 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:51:27.748000 audit[1087]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e2178b95a0 a1=338ec a2=7f4d29a24bc5 a3=5 items=110 ppid=1075 pid=1087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:27.763208 kernel: audit: type=1400 audit(1757724687.748:116): avc: denied { confidentiality } for pid=1087 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:51:27.763258 kernel: audit: type=1300 audit(1757724687.748:116): arch=c000003e syscall=175 success=yes exit=0 a0=55e2178b95a0 a1=338ec a2=7f4d29a24bc5 a3=5 items=110 ppid=1075 pid=1087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:27.763285 kernel: audit: type=1307 audit(1757724687.748:116): cwd="/" Sep 13 00:51:27.748000 audit: CWD cwd="/" Sep 13 00:51:27.748000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.767095 kernel: audit: type=1302 audit(1757724687.748:116): item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.767139 kernel: audit: type=1302 audit(1757724687.748:116): item=1 name=(null) inode=13288 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=1 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.769964 kernel: audit: type=1302 audit(1757724687.748:116): item=2 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=2 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=3 name=(null) inode=13289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=4 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=5 name=(null) inode=13290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=6 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=7 name=(null) inode=13291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=8 name=(null) inode=13291 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=9 name=(null) inode=13292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=10 name=(null) inode=13291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=11 name=(null) inode=13293 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=12 name=(null) inode=13291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=13 name=(null) inode=13294 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=14 name=(null) inode=13291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=15 name=(null) inode=13295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=16 name=(null) inode=13291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=17 name=(null) inode=13296 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=18 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=19 name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=20 name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=21 name=(null) inode=13298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=22 name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=23 name=(null) inode=13299 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=24 name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=25 name=(null) inode=13300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=26 name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=27 name=(null) inode=13301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=28 name=(null) inode=13297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=29 name=(null) inode=13302 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=30 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=31 name=(null) inode=13303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=32 name=(null) inode=13303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=33 name=(null) inode=13304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=34 name=(null) inode=13303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=35 name=(null) inode=13305 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=36 name=(null) inode=13303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=37 name=(null) inode=13306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=38 name=(null) inode=13303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=39 name=(null) inode=13307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=40 name=(null) inode=13303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=41 name=(null) inode=13308 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=42 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=43 name=(null) inode=13309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=44 name=(null) inode=13309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:51:27.748000 audit: PATH item=45 name=(null) inode=13310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=46 name=(null) inode=13309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=47 name=(null) inode=13311 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=48 name=(null) inode=13309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=49 name=(null) inode=13312 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=50 name=(null) inode=13309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=51 name=(null) inode=16385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=52 name=(null) inode=13309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=53 name=(null) inode=16386 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=54 
name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=55 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=56 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=57 name=(null) inode=16388 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=58 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=59 name=(null) inode=16389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=60 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=61 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=62 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=63 name=(null) inode=16391 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=64 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=65 name=(null) inode=16392 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=66 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=67 name=(null) inode=16393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=68 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=69 name=(null) inode=16394 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=70 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=71 name=(null) inode=16395 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=72 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=73 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=74 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=75 name=(null) inode=16397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=76 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=77 name=(null) inode=16398 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=78 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=79 name=(null) inode=16399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=80 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=81 name=(null) inode=16400 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=82 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=83 name=(null) inode=16401 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=84 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=85 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=86 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=87 name=(null) inode=16403 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=88 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=89 name=(null) inode=16404 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=90 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=91 name=(null) inode=16405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=92 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=93 name=(null) inode=16406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=94 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=95 name=(null) inode=16407 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=96 name=(null) inode=16387 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=97 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=98 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=99 name=(null) inode=16409 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:51:27.748000 audit: PATH item=100 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=101 name=(null) inode=16410 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=102 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=103 name=(null) inode=16411 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=104 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=105 name=(null) inode=16412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=106 name=(null) inode=16408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=107 name=(null) inode=16413 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PATH item=109 
name=(null) inode=16414 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:51:27.748000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:51:27.776210 systemd-networkd[1076]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:51:27.786938 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:51:27.791925 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 13 00:51:27.803105 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:51:27.803123 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:51:27.803232 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:51:27.803331 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:51:27.894408 kernel: kvm: Nested Virtualization enabled Sep 13 00:51:27.894554 kernel: SVM: kvm: Nested Paging enabled Sep 13 00:51:27.894570 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 13 00:51:27.894583 kernel: SVM: Virtual GIF supported Sep 13 00:51:27.910932 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:51:27.940328 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:51:27.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:27.942418 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:51:27.949275 lvm[1112]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:51:27.970673 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:51:27.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:51:27.971670 systemd[1]: Reached target cryptsetup.target. Sep 13 00:51:27.973843 systemd[1]: Starting lvm2-activation.service... Sep 13 00:51:27.978029 lvm[1114]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:51:28.005027 systemd[1]: Finished lvm2-activation.service. Sep 13 00:51:28.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.005990 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:51:28.006815 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:51:28.006843 systemd[1]: Reached target local-fs.target. Sep 13 00:51:28.007617 systemd[1]: Reached target machines.target. Sep 13 00:51:28.009540 systemd[1]: Starting ldconfig.service... Sep 13 00:51:28.010494 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:51:28.010586 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:51:28.012070 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:51:28.013932 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:51:28.016178 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:51:28.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.018278 systemd[1]: Starting systemd-sysext.service... 
Sep 13 00:51:28.019298 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1117 (bootctl) Sep 13 00:51:28.020258 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:51:28.023108 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:51:28.029481 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:51:28.034087 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:51:28.034326 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:51:28.051945 kernel: loop0: detected capacity change from 0 to 221472 Sep 13 00:51:28.058580 systemd-fsck[1127]: fsck.fat 4.2 (2021-01-31) Sep 13 00:51:28.058580 systemd-fsck[1127]: /dev/vda1: 791 files, 120781/258078 clusters Sep 13 00:51:28.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.059783 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:51:28.062766 systemd[1]: Mounting boot.mount... Sep 13 00:51:28.076100 systemd[1]: Mounted boot.mount. Sep 13 00:51:28.292940 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:51:28.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.294825 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:51:28.303102 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:51:28.303833 systemd[1]: Finished systemd-machine-id-commit.service. 
Sep 13 00:51:28.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.310927 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:51:28.317389 (sd-sysext)[1139]: Using extensions 'kubernetes'. Sep 13 00:51:28.317728 (sd-sysext)[1139]: Merged extensions into '/usr'. Sep 13 00:51:28.333626 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:51:28.335028 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:51:28.336077 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:51:28.337141 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:51:28.338967 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:51:28.340676 systemd[1]: Starting modprobe@loop.service... Sep 13 00:51:28.341656 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:51:28.341778 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:51:28.341882 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:51:28.342679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:51:28.342811 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:51:28.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:51:28.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.346373 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:51:28.347829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:51:28.348156 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:51:28.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.349777 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:51:28.350004 systemd[1]: Finished modprobe@loop.service. Sep 13 00:51:28.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.351328 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:51:28.351496 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:51:28.352382 systemd[1]: Finished systemd-sysext.service. 
Sep 13 00:51:28.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.354662 systemd[1]: Starting ensure-sysext.service... Sep 13 00:51:28.356617 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:51:28.359638 systemd[1]: Reloading. Sep 13 00:51:28.423701 /usr/lib/systemd/system-generators/torcx-generator[1172]: time="2025-09-13T00:51:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:51:28.424064 ldconfig[1116]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:51:28.424277 /usr/lib/systemd/system-generators/torcx-generator[1172]: time="2025-09-13T00:51:28Z" level=info msg="torcx already run" Sep 13 00:51:28.424361 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:51:28.425548 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:51:28.427781 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:51:28.493809 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:51:28.493827 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Sep 13 00:51:28.511578 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:51:28.569137 systemd[1]: Finished ldconfig.service. Sep 13 00:51:28.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.571159 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:51:28.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.574372 systemd[1]: Starting audit-rules.service... Sep 13 00:51:28.576175 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:51:28.578223 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:51:28.581584 systemd[1]: Starting systemd-resolved.service... Sep 13 00:51:28.584592 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:51:28.586897 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:51:28.588862 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:51:28.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.591000 audit[1236]: SYSTEM_BOOT pid=1236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:51:28.597204 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Sep 13 00:51:28.598871 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:51:28.601362 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:51:28.603068 systemd[1]: Starting modprobe@loop.service... Sep 13 00:51:28.606304 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:51:28.606519 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:51:28.606683 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:51:28.606000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:51:28.606000 audit[1252]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3d1899b0 a2=420 a3=0 items=0 ppid=1224 pid=1252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:51:28.606000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:51:28.607568 augenrules[1252]: No rules Sep 13 00:51:28.608786 systemd[1]: Finished audit-rules.service. Sep 13 00:51:28.610286 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:51:28.612423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:51:28.612721 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:51:28.614693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:51:28.614862 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:51:28.616653 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 13 00:51:28.616847 systemd[1]: Finished modprobe@loop.service. Sep 13 00:51:28.620627 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:51:28.623519 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:51:28.625186 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:51:28.627284 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:51:28.629234 systemd[1]: Starting modprobe@loop.service... Sep 13 00:51:28.630142 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:51:28.630270 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:51:28.632463 systemd[1]: Starting systemd-update-done.service... Sep 13 00:51:28.633368 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:51:28.634643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:51:28.634836 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:51:28.636064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:51:28.636230 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:51:28.637626 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:51:28.637854 systemd[1]: Finished modprobe@loop.service. Sep 13 00:51:28.639198 systemd[1]: Finished systemd-update-done.service. Sep 13 00:51:28.644521 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:51:28.646599 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:51:28.648965 systemd[1]: Starting modprobe@drm.service... Sep 13 00:51:28.650999 systemd[1]: Starting modprobe@efi_pstore.service... 
Sep 13 00:51:28.652953 systemd[1]: Starting modprobe@loop.service... Sep 13 00:51:28.653760 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:51:28.653906 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:51:28.655637 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:51:28.656812 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:51:28.658255 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:51:28.658431 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:51:28.659814 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:51:28.660038 systemd[1]: Finished modprobe@drm.service. Sep 13 00:51:28.661462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:51:28.661641 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:51:28.663120 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:51:28.663396 systemd[1]: Finished modprobe@loop.service. Sep 13 00:51:28.666702 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:51:28.666848 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:51:28.669146 systemd-resolved[1231]: Positive Trust Anchors: Sep 13 00:51:28.669163 systemd-resolved[1231]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:51:28.669199 systemd-resolved[1231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:51:28.669378 systemd[1]: Finished ensure-sysext.service. Sep 13 00:51:28.677244 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:51:28.678325 systemd-timesyncd[1235]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:51:28.678647 systemd[1]: Reached target time-set.target. Sep 13 00:51:28.678691 systemd-timesyncd[1235]: Initial clock synchronization to Sat 2025-09-13 00:51:28.836625 UTC. Sep 13 00:51:28.682051 systemd-resolved[1231]: Defaulting to hostname 'linux'. Sep 13 00:51:28.683438 systemd[1]: Started systemd-resolved.service. Sep 13 00:51:28.684351 systemd[1]: Reached target network.target. Sep 13 00:51:28.685151 systemd[1]: Reached target nss-lookup.target. Sep 13 00:51:28.685994 systemd[1]: Reached target sysinit.target. Sep 13 00:51:28.686847 systemd[1]: Started motdgen.path. Sep 13 00:51:28.687595 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:51:28.688821 systemd[1]: Started logrotate.timer. Sep 13 00:51:28.689634 systemd[1]: Started mdadm.timer. Sep 13 00:51:28.690352 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:51:28.691225 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:51:28.691303 systemd[1]: Reached target paths.target. Sep 13 00:51:28.692086 systemd[1]: Reached target timers.target. 
Sep 13 00:51:28.693115 systemd[1]: Listening on dbus.socket.
Sep 13 00:51:28.694992 systemd[1]: Starting docker.socket...
Sep 13 00:51:28.696570 systemd[1]: Listening on sshd.socket.
Sep 13 00:51:28.697415 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:51:28.697661 systemd[1]: Listening on docker.socket.
Sep 13 00:51:28.698453 systemd[1]: Reached target sockets.target.
Sep 13 00:51:28.699288 systemd[1]: Reached target basic.target.
Sep 13 00:51:28.700559 systemd[1]: System is tainted: cgroupsv1
Sep 13 00:51:28.700604 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:51:28.700636 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:51:28.701927 systemd[1]: Starting containerd.service...
Sep 13 00:51:28.703590 systemd[1]: Starting dbus.service...
Sep 13 00:51:28.705225 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 00:51:28.707045 systemd[1]: Starting extend-filesystems.service...
Sep 13 00:51:28.708101 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 00:51:28.709958 jq[1286]: false
Sep 13 00:51:28.709067 systemd[1]: Starting motdgen.service...
Sep 13 00:51:28.711283 systemd[1]: Starting prepare-helm.service...
Sep 13 00:51:28.713205 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 00:51:28.715285 systemd[1]: Starting sshd-keygen.service...
Sep 13 00:51:28.717783 systemd[1]: Starting systemd-logind.service...
Sep 13 00:51:28.718709 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:51:28.718770 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:51:28.719713 systemd[1]: Starting update-engine.service...
Sep 13 00:51:28.720342 dbus-daemon[1285]: [system] SELinux support is enabled
Sep 13 00:51:28.721513 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 00:51:28.722997 systemd[1]: Started dbus.service.
Sep 13 00:51:28.726929 jq[1300]: true
Sep 13 00:51:28.726927 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:51:28.727140 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 00:51:28.727837 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:51:28.728062 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 00:51:28.729728 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:51:28.729756 systemd[1]: Reached target system-config.target.
Sep 13 00:51:28.730825 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:51:28.730844 systemd[1]: Reached target user-config.target.
Sep 13 00:51:28.736678 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:51:28.737310 extend-filesystems[1287]: Found loop1
Sep 13 00:51:28.737310 extend-filesystems[1287]: Found sr0
Sep 13 00:51:28.737310 extend-filesystems[1287]: Found vda
Sep 13 00:51:28.737310 extend-filesystems[1287]: Found vda1
Sep 13 00:51:28.737310 extend-filesystems[1287]: Found vda2
Sep 13 00:51:28.737310 extend-filesystems[1287]: Found vda3
Sep 13 00:51:28.737310 extend-filesystems[1287]: Found usr
Sep 13 00:51:28.737310 extend-filesystems[1287]: Found vda4
Sep 13 00:51:28.737310 extend-filesystems[1287]: Found vda6
Sep 13 00:51:28.737310 extend-filesystems[1287]: Found vda7
Sep 13 00:51:28.737310 extend-filesystems[1287]: Found vda9
Sep 13 00:51:28.737310 extend-filesystems[1287]: Checking size of /dev/vda9
Sep 13 00:51:28.736897 systemd[1]: Finished motdgen.service.
Sep 13 00:51:28.837689 jq[1312]: true
Sep 13 00:51:28.837836 extend-filesystems[1287]: Resized partition /dev/vda9
Sep 13 00:51:28.842003 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:51:28.842029 tar[1307]: linux-amd64/helm
Sep 13 00:51:28.842203 extend-filesystems[1321]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:51:28.846411 systemd-logind[1298]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:51:28.846428 systemd-logind[1298]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:51:28.846803 systemd-logind[1298]: New seat seat0.
Sep 13 00:51:28.850154 systemd-networkd[1076]: eth0: Gained IPv6LL
Sep 13 00:51:28.851007 systemd[1]: Started systemd-logind.service.
Sep 13 00:51:28.864700 update_engine[1299]: I0913 00:51:28.859728 1299 main.cc:92] Flatcar Update Engine starting
Sep 13 00:51:28.853888 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:51:28.855363 systemd[1]: Reached target network-online.target.
Sep 13 00:51:28.858258 systemd[1]: Starting kubelet.service...
Sep 13 00:51:28.866676 systemd[1]: Started update-engine.service.
Sep 13 00:51:28.871506 update_engine[1299]: I0913 00:51:28.871129 1299 update_check_scheduler.cc:74] Next update check in 5m25s
Sep 13 00:51:28.869197 systemd[1]: Started locksmithd.service.
Sep 13 00:51:28.881399 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:51:28.963053 extend-filesystems[1321]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:51:28.963053 extend-filesystems[1321]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:51:28.963053 extend-filesystems[1321]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:51:28.970416 extend-filesystems[1287]: Resized filesystem in /dev/vda9
Sep 13 00:51:28.971585 env[1315]: time="2025-09-13T00:51:28.967136070Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:51:28.965742 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:51:28.965982 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:51:28.973758 bash[1347]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:51:28.974048 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:51:28.993148 env[1315]: time="2025-09-13T00:51:28.993106414Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:51:28.993850 env[1315]: time="2025-09-13T00:51:28.993832275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:51:28.995243 env[1315]: time="2025-09-13T00:51:28.995215700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:51:28.995243 env[1315]: time="2025-09-13T00:51:28.995240636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:51:28.995497 env[1315]: time="2025-09-13T00:51:28.995469936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:51:28.995497 env[1315]: time="2025-09-13T00:51:28.995487399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:51:28.995557 env[1315]: time="2025-09-13T00:51:28.995498149Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:51:28.995557 env[1315]: time="2025-09-13T00:51:28.995507006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:51:28.995593 env[1315]: time="2025-09-13T00:51:28.995567279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:51:28.995799 env[1315]: time="2025-09-13T00:51:28.995786139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:51:28.995944 env[1315]: time="2025-09-13T00:51:28.995929128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:51:28.995970 env[1315]: time="2025-09-13T00:51:28.995945057Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:51:28.995998 env[1315]: time="2025-09-13T00:51:28.995986375Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:51:28.996037 env[1315]: time="2025-09-13T00:51:28.995999349Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004329005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004354170Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004367860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004417230Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004432596Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004444949Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004456504Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004470645Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004483416Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004496198Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004507978Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004522977Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004611406Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:51:29.006321 env[1315]: time="2025-09-13T00:51:29.004830624Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005122221Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005149704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005161770Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005209086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005219906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005233117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005245623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005256238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005267028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005277143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005287176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005298435Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005790715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005807339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.006672 env[1315]: time="2025-09-13T00:51:29.005827415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.007001 env[1315]: time="2025-09-13T00:51:29.005838265Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:51:29.007001 env[1315]: time="2025-09-13T00:51:29.005851436Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 00:51:29.007001 env[1315]: time="2025-09-13T00:51:29.005860569Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:51:29.007001 env[1315]: time="2025-09-13T00:51:29.005883620Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 00:51:29.007001 env[1315]: time="2025-09-13T00:51:29.005921035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:51:29.007109 env[1315]: time="2025-09-13T00:51:29.006128238Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:51:29.007109 env[1315]: time="2025-09-13T00:51:29.006186270Z" level=info msg="Connect containerd service"
Sep 13 00:51:29.007109 env[1315]: time="2025-09-13T00:51:29.006232309Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:51:29.008673 env[1315]: time="2025-09-13T00:51:29.007819475Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:51:29.008673 env[1315]: time="2025-09-13T00:51:29.008187740Z" level=info msg="Start subscribing containerd event"
Sep 13 00:51:29.008673 env[1315]: time="2025-09-13T00:51:29.008235607Z" level=info msg="Start recovering state"
Sep 13 00:51:29.008673 env[1315]: time="2025-09-13T00:51:29.008308425Z" level=info msg="Start event monitor"
Sep 13 00:51:29.008673 env[1315]: time="2025-09-13T00:51:29.008329452Z" level=info msg="Start snapshots syncer"
Sep 13 00:51:29.008673 env[1315]: time="2025-09-13T00:51:29.008343031Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:51:29.008673 env[1315]: time="2025-09-13T00:51:29.008351030Z" level=info msg="Start streaming server"
Sep 13 00:51:29.009118 env[1315]: time="2025-09-13T00:51:29.008698197Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:51:29.009118 env[1315]: time="2025-09-13T00:51:29.008739658Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:51:29.009118 env[1315]: time="2025-09-13T00:51:29.008795137Z" level=info msg="containerd successfully booted in 0.053500s"
Sep 13 00:51:29.008911 systemd[1]: Started containerd.service.
Sep 13 00:51:29.034800 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:51:29.034858 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:51:29.105740 locksmithd[1346]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:51:29.323314 sshd_keygen[1309]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:51:29.323517 tar[1307]: linux-amd64/LICENSE
Sep 13 00:51:29.323517 tar[1307]: linux-amd64/README.md
Sep 13 00:51:29.328009 systemd[1]: Finished prepare-helm.service.
Sep 13 00:51:29.341201 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:51:29.343451 systemd[1]: Starting issuegen.service...
Sep 13 00:51:29.348435 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:51:29.348675 systemd[1]: Finished issuegen.service.
Sep 13 00:51:29.350983 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:51:29.357331 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:51:29.359632 systemd[1]: Started getty@tty1.service.
Sep 13 00:51:29.361386 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 00:51:29.362398 systemd[1]: Reached target getty.target.
Sep 13 00:51:29.656054 systemd[1]: Started kubelet.service.
Sep 13 00:51:29.657346 systemd[1]: Reached target multi-user.target.
Sep 13 00:51:29.659409 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:51:29.664860 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:51:29.665059 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:51:29.667039 systemd[1]: Startup finished in 6.803s (kernel) + 6.751s (userspace) = 13.555s.
Sep 13 00:51:30.057280 kubelet[1386]: E0913 00:51:30.057151 1386 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:51:30.059105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:51:30.059253 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:51:30.175378 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:51:30.176468 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:60042.service.
Sep 13 00:51:30.209466 sshd[1396]: Accepted publickey for core from 10.0.0.1 port 60042 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:51:30.210757 sshd[1396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:30.218883 systemd-logind[1298]: New session 1 of user core.
Sep 13 00:51:30.219595 systemd[1]: Created slice user-500.slice.
Sep 13 00:51:30.220468 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:51:30.228548 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:51:30.229886 systemd[1]: Starting user@500.service...
Sep 13 00:51:30.232396 (systemd)[1400]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:30.299085 systemd[1400]: Queued start job for default target default.target.
Sep 13 00:51:30.299258 systemd[1400]: Reached target paths.target.
Sep 13 00:51:30.299274 systemd[1400]: Reached target sockets.target.
Sep 13 00:51:30.299286 systemd[1400]: Reached target timers.target.
Sep 13 00:51:30.299296 systemd[1400]: Reached target basic.target.
Sep 13 00:51:30.299332 systemd[1400]: Reached target default.target.
Sep 13 00:51:30.299351 systemd[1400]: Startup finished in 62ms.
Sep 13 00:51:30.299432 systemd[1]: Started user@500.service.
Sep 13 00:51:30.300248 systemd[1]: Started session-1.scope.
Sep 13 00:51:30.350881 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:60044.service.
Sep 13 00:51:30.381614 sshd[1410]: Accepted publickey for core from 10.0.0.1 port 60044 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:51:30.382606 sshd[1410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:30.385851 systemd-logind[1298]: New session 2 of user core.
Sep 13 00:51:30.386528 systemd[1]: Started session-2.scope.
Sep 13 00:51:30.440597 sshd[1410]: pam_unix(sshd:session): session closed for user core
Sep 13 00:51:30.443508 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:60050.service.
Sep 13 00:51:30.443901 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:60044.service: Deactivated successfully.
Sep 13 00:51:30.444839 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:51:30.445294 systemd-logind[1298]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:51:30.446142 systemd-logind[1298]: Removed session 2.
Sep 13 00:51:30.473733 sshd[1416]: Accepted publickey for core from 10.0.0.1 port 60050 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:51:30.474715 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:30.477872 systemd-logind[1298]: New session 3 of user core.
Sep 13 00:51:30.478544 systemd[1]: Started session-3.scope.
Sep 13 00:51:30.527836 sshd[1416]: pam_unix(sshd:session): session closed for user core
Sep 13 00:51:30.530081 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:60064.service.
Sep 13 00:51:30.530451 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:60050.service: Deactivated successfully.
Sep 13 00:51:30.531268 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:51:30.531306 systemd-logind[1298]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:51:30.532086 systemd-logind[1298]: Removed session 3.
Sep 13 00:51:30.560150 sshd[1422]: Accepted publickey for core from 10.0.0.1 port 60064 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:51:30.561117 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:30.564026 systemd-logind[1298]: New session 4 of user core.
Sep 13 00:51:30.564720 systemd[1]: Started session-4.scope.
Sep 13 00:51:30.617733 sshd[1422]: pam_unix(sshd:session): session closed for user core
Sep 13 00:51:30.619896 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:60066.service.
Sep 13 00:51:30.620731 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:60064.service: Deactivated successfully.
Sep 13 00:51:30.621604 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:51:30.621632 systemd-logind[1298]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:51:30.622412 systemd-logind[1298]: Removed session 4.
Sep 13 00:51:30.650166 sshd[1429]: Accepted publickey for core from 10.0.0.1 port 60066 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:51:30.651091 sshd[1429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:30.653780 systemd-logind[1298]: New session 5 of user core.
Sep 13 00:51:30.654391 systemd[1]: Started session-5.scope.
Sep 13 00:51:30.709444 sudo[1435]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:51:30.709641 sudo[1435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:51:30.732183 systemd[1]: Starting docker.service...
Sep 13 00:51:30.771455 env[1448]: time="2025-09-13T00:51:30.771386922Z" level=info msg="Starting up"
Sep 13 00:51:30.772895 env[1448]: time="2025-09-13T00:51:30.772857958Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:51:30.772895 env[1448]: time="2025-09-13T00:51:30.772892082Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:51:30.772985 env[1448]: time="2025-09-13T00:51:30.772915626Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:51:30.772985 env[1448]: time="2025-09-13T00:51:30.772942015Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:51:30.774832 env[1448]: time="2025-09-13T00:51:30.774807250Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:51:30.774916 env[1448]: time="2025-09-13T00:51:30.774898124Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:51:30.775011 env[1448]: time="2025-09-13T00:51:30.774990620Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:51:30.775089 env[1448]: time="2025-09-13T00:51:30.775071475Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:51:31.603048 env[1448]: time="2025-09-13T00:51:31.603003019Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 13 00:51:31.603048 env[1448]: time="2025-09-13T00:51:31.603028660Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 13 00:51:31.603277 env[1448]: time="2025-09-13T00:51:31.603192052Z" level=info msg="Loading containers: start."
Sep 13 00:51:31.849935 kernel: Initializing XFRM netlink socket
Sep 13 00:51:31.877762 env[1448]: time="2025-09-13T00:51:31.877667032Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:51:31.926131 systemd-networkd[1076]: docker0: Link UP
Sep 13 00:51:31.941175 env[1448]: time="2025-09-13T00:51:31.941148868Z" level=info msg="Loading containers: done."
Sep 13 00:51:31.951137 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1171656739-merged.mount: Deactivated successfully.
Sep 13 00:51:31.953749 env[1448]: time="2025-09-13T00:51:31.953720625Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:51:31.953868 env[1448]: time="2025-09-13T00:51:31.953848877Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:51:31.954016 env[1448]: time="2025-09-13T00:51:31.953999086Z" level=info msg="Daemon has completed initialization"
Sep 13 00:51:31.971574 systemd[1]: Started docker.service.
Sep 13 00:51:31.978566 env[1448]: time="2025-09-13T00:51:31.978506428Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:51:32.707176 env[1315]: time="2025-09-13T00:51:32.707127002Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:51:33.386427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3351735727.mount: Deactivated successfully.
Sep 13 00:51:35.051751 env[1315]: time="2025-09-13T00:51:35.051677325Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:35.053497 env[1315]: time="2025-09-13T00:51:35.053460532Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:35.055321 env[1315]: time="2025-09-13T00:51:35.055276144Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:35.059000 env[1315]: time="2025-09-13T00:51:35.058942058Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:35.059766 env[1315]: time="2025-09-13T00:51:35.059733852Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 13 00:51:35.060450 env[1315]: time="2025-09-13T00:51:35.060414379Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:51:36.634676 env[1315]: time="2025-09-13T00:51:36.634612997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:36.636750 env[1315]: time="2025-09-13T00:51:36.636680491Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:36.638807 env[1315]: time="2025-09-13T00:51:36.638782517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:36.641020 env[1315]: time="2025-09-13T00:51:36.640974594Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:36.641729 env[1315]: time="2025-09-13T00:51:36.641705674Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 13 00:51:36.642343 env[1315]: time="2025-09-13T00:51:36.642315755Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:51:38.724605 env[1315]: time="2025-09-13T00:51:38.724521976Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:38.743884 env[1315]: time="2025-09-13T00:51:38.743843857Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:38.756127 env[1315]: time="2025-09-13T00:51:38.756058966Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:38.936305 env[1315]: time="2025-09-13T00:51:38.936238185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:38.937162 env[1315]: time="2025-09-13T00:51:38.937125076Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 13 00:51:38.937663 env[1315]: time="2025-09-13T00:51:38.937612740Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:51:40.073227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:51:40.073406 systemd[1]: Stopped kubelet.service.
Sep 13 00:51:40.074892 systemd[1]: Starting kubelet.service...
Sep 13 00:51:40.168050 systemd[1]: Started kubelet.service.
Sep 13 00:51:40.246554 kubelet[1587]: E0913 00:51:40.246478 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:51:40.250082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:51:40.250231 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:51:41.059045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1048162979.mount: Deactivated successfully.
Sep 13 00:51:42.729013 env[1315]: time="2025-09-13T00:51:42.728929838Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:42.730952 env[1315]: time="2025-09-13T00:51:42.730929497Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:42.732721 env[1315]: time="2025-09-13T00:51:42.732678305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:42.734291 env[1315]: time="2025-09-13T00:51:42.734232060Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:42.734765 env[1315]: time="2025-09-13T00:51:42.734713091Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:51:42.735520 env[1315]: time="2025-09-13T00:51:42.735497805Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:51:43.407148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1039267569.mount: Deactivated successfully. 
Sep 13 00:51:45.055568 env[1315]: time="2025-09-13T00:51:45.055496874Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.058203 env[1315]: time="2025-09-13T00:51:45.058152624Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.060392 env[1315]: time="2025-09-13T00:51:45.060344486Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.062392 env[1315]: time="2025-09-13T00:51:45.062340906Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.063234 env[1315]: time="2025-09-13T00:51:45.063177994Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:51:45.063764 env[1315]: time="2025-09-13T00:51:45.063731852Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:51:45.671963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2656540454.mount: Deactivated successfully. 
Sep 13 00:51:45.678310 env[1315]: time="2025-09-13T00:51:45.678275770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.680143 env[1315]: time="2025-09-13T00:51:45.680119920Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.681899 env[1315]: time="2025-09-13T00:51:45.681866149Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.683254 env[1315]: time="2025-09-13T00:51:45.683219015Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:45.683767 env[1315]: time="2025-09-13T00:51:45.683736901Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:51:45.684244 env[1315]: time="2025-09-13T00:51:45.684215171Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:51:46.292645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396476241.mount: Deactivated successfully. Sep 13 00:51:50.323098 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:51:50.323285 systemd[1]: Stopped kubelet.service. Sep 13 00:51:50.324905 systemd[1]: Starting kubelet.service... Sep 13 00:51:50.407017 systemd[1]: Started kubelet.service. 
Sep 13 00:51:50.443346 kubelet[1602]: E0913 00:51:50.443270 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:51:50.445273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:51:50.445467 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:51:51.561699 env[1315]: time="2025-09-13T00:51:51.561601886Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:51.563547 env[1315]: time="2025-09-13T00:51:51.563511043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:51.565867 env[1315]: time="2025-09-13T00:51:51.565798806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:51.567622 env[1315]: time="2025-09-13T00:51:51.567583821Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:51.568515 env[1315]: time="2025-09-13T00:51:51.568475776Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:51:54.202893 systemd[1]: Stopped kubelet.service. Sep 13 00:51:54.205091 systemd[1]: Starting kubelet.service... 
Sep 13 00:51:54.225115 systemd[1]: Reloading. Sep 13 00:51:54.290349 /usr/lib/systemd/system-generators/torcx-generator[1659]: time="2025-09-13T00:51:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:51:54.290730 /usr/lib/systemd/system-generators/torcx-generator[1659]: time="2025-09-13T00:51:54Z" level=info msg="torcx already run" Sep 13 00:51:54.636226 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:51:54.636244 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:51:54.653516 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:51:54.723395 systemd[1]: Started kubelet.service. Sep 13 00:51:54.726361 systemd[1]: Stopping kubelet.service... Sep 13 00:51:54.726575 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:51:54.726779 systemd[1]: Stopped kubelet.service. Sep 13 00:51:54.728128 systemd[1]: Starting kubelet.service... Sep 13 00:51:54.811561 systemd[1]: Started kubelet.service. Sep 13 00:51:54.850106 kubelet[1719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:51:54.850106 kubelet[1719]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Sep 13 00:51:54.850106 kubelet[1719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:51:54.850484 kubelet[1719]: I0913 00:51:54.850149 1719 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:51:55.226728 kubelet[1719]: I0913 00:51:55.226664 1719 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:51:55.226728 kubelet[1719]: I0913 00:51:55.226709 1719 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:51:55.227359 kubelet[1719]: I0913 00:51:55.227330 1719 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:51:55.247389 kubelet[1719]: E0913 00:51:55.247337 1719 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:51:55.247500 kubelet[1719]: I0913 00:51:55.247413 1719 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:51:55.254120 kubelet[1719]: E0913 00:51:55.254088 1719 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:51:55.254120 kubelet[1719]: I0913 00:51:55.254113 1719 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been 
enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:51:55.258619 kubelet[1719]: I0913 00:51:55.258593 1719 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:51:55.259373 kubelet[1719]: I0913 00:51:55.259354 1719 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:51:55.259493 kubelet[1719]: I0913 00:51:55.259465 1719 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:51:55.259654 kubelet[1719]: I0913 00:51:55.259489 1719 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"E
xperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:51:55.259779 kubelet[1719]: I0913 00:51:55.259655 1719 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:51:55.259779 kubelet[1719]: I0913 00:51:55.259663 1719 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:51:55.259779 kubelet[1719]: I0913 00:51:55.259750 1719 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:51:55.265090 kubelet[1719]: I0913 00:51:55.265064 1719 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:51:55.265090 kubelet[1719]: I0913 00:51:55.265083 1719 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:51:55.265270 kubelet[1719]: I0913 00:51:55.265130 1719 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:51:55.265270 kubelet[1719]: I0913 00:51:55.265143 1719 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:51:55.277052 kubelet[1719]: I0913 00:51:55.277025 1719 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:51:55.277314 kubelet[1719]: W0913 00:51:55.277267 1719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 00:51:55.277362 kubelet[1719]: E0913 00:51:55.277323 1719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: 
connect: connection refused" logger="UnhandledError" Sep 13 00:51:55.277440 kubelet[1719]: W0913 00:51:55.277385 1719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 00:51:55.277492 kubelet[1719]: E0913 00:51:55.277462 1719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:51:55.277665 kubelet[1719]: I0913 00:51:55.277636 1719 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:51:55.278392 kubelet[1719]: W0913 00:51:55.278370 1719 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:51:55.279888 kubelet[1719]: I0913 00:51:55.279862 1719 server.go:1274] "Started kubelet" Sep 13 00:51:55.279957 kubelet[1719]: I0913 00:51:55.279930 1719 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:51:55.280055 kubelet[1719]: I0913 00:51:55.280021 1719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:51:55.280382 kubelet[1719]: I0913 00:51:55.280368 1719 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:51:55.280876 kubelet[1719]: I0913 00:51:55.280860 1719 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:51:55.283178 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 13 00:51:55.283295 kubelet[1719]: I0913 00:51:55.283268 1719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:51:55.285814 kubelet[1719]: I0913 00:51:55.285791 1719 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:51:55.287699 kubelet[1719]: I0913 00:51:55.287682 1719 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:51:55.287866 kubelet[1719]: E0913 00:51:55.287849 1719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:51:55.288611 kubelet[1719]: E0913 00:51:55.288504 1719 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms" Sep 13 00:51:55.288841 kubelet[1719]: E0913 00:51:55.288827 1719 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:51:55.289030 kubelet[1719]: I0913 00:51:55.288998 1719 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:51:55.289030 kubelet[1719]: I0913 00:51:55.289036 1719 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:51:55.289457 kubelet[1719]: W0913 00:51:55.289320 1719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 00:51:55.289457 kubelet[1719]: E0913 00:51:55.289364 1719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:51:55.290078 kubelet[1719]: I0913 00:51:55.290058 1719 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:51:55.290132 kubelet[1719]: I0913 00:51:55.290115 1719 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:51:55.290860 kubelet[1719]: E0913 00:51:55.289924 1719 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864b150d32de802 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:51:55.279837186 +0000 UTC m=+0.464898184,LastTimestamp:2025-09-13 00:51:55.279837186 +0000 UTC m=+0.464898184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:51:55.291268 kubelet[1719]: I0913 00:51:55.291251 1719 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:51:55.296035 kubelet[1719]: I0913 00:51:55.295993 1719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:51:55.297228 kubelet[1719]: I0913 00:51:55.296702 1719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:51:55.297228 kubelet[1719]: I0913 00:51:55.296718 1719 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:51:55.297228 kubelet[1719]: I0913 00:51:55.296738 1719 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:51:55.297228 kubelet[1719]: E0913 00:51:55.296774 1719 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:51:55.302638 kubelet[1719]: W0913 00:51:55.302519 1719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 00:51:55.302638 kubelet[1719]: E0913 00:51:55.302558 1719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:51:55.306039 kubelet[1719]: I0913 
00:51:55.306012 1719 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:51:55.306039 kubelet[1719]: I0913 00:51:55.306031 1719 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:51:55.306039 kubelet[1719]: I0913 00:51:55.306042 1719 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:51:55.388875 kubelet[1719]: E0913 00:51:55.388821 1719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:51:55.397133 kubelet[1719]: E0913 00:51:55.397111 1719 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:51:55.489574 kubelet[1719]: E0913 00:51:55.489483 1719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:51:55.489769 kubelet[1719]: E0913 00:51:55.489663 1719 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" Sep 13 00:51:55.589899 kubelet[1719]: E0913 00:51:55.589862 1719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:51:55.598182 kubelet[1719]: E0913 00:51:55.598150 1719 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:51:55.690491 kubelet[1719]: E0913 00:51:55.690454 1719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:51:55.791572 kubelet[1719]: E0913 00:51:55.791480 1719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:51:55.890352 kubelet[1719]: E0913 00:51:55.890291 1719 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" Sep 13 00:51:55.892364 kubelet[1719]: E0913 00:51:55.892319 1719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:51:55.992888 kubelet[1719]: E0913 00:51:55.992825 1719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:51:55.999121 kubelet[1719]: E0913 00:51:55.999092 1719 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:51:56.046096 kubelet[1719]: I0913 00:51:56.045994 1719 policy_none.go:49] "None policy: Start" Sep 13 00:51:56.047236 kubelet[1719]: I0913 00:51:56.047208 1719 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:51:56.047236 kubelet[1719]: I0913 00:51:56.047241 1719 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:51:56.053135 kubelet[1719]: I0913 00:51:56.053103 1719 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:51:56.053300 kubelet[1719]: I0913 00:51:56.053264 1719 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:51:56.053300 kubelet[1719]: I0913 00:51:56.053280 1719 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:51:56.054158 kubelet[1719]: I0913 00:51:56.054091 1719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:51:56.054823 kubelet[1719]: E0913 00:51:56.054802 1719 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:51:56.154734 kubelet[1719]: I0913 00:51:56.154664 1719 kubelet_node_status.go:72] "Attempting to register node" 
node="localhost" Sep 13 00:51:56.155055 kubelet[1719]: E0913 00:51:56.155031 1719 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 13 00:51:56.187857 kubelet[1719]: W0913 00:51:56.187790 1719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 00:51:56.187857 kubelet[1719]: E0913 00:51:56.187853 1719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:51:56.356289 kubelet[1719]: I0913 00:51:56.356237 1719 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:51:56.356523 kubelet[1719]: E0913 00:51:56.356500 1719 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 13 00:51:56.596421 kubelet[1719]: W0913 00:51:56.596351 1719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 13 00:51:56.596421 kubelet[1719]: E0913 00:51:56.596426 1719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:51:56.645028 kubelet[1719]: W0913 00:51:56.644880 1719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Sep 13 00:51:56.645028 kubelet[1719]: E0913 00:51:56.644944 1719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:51:56.691643 kubelet[1719]: E0913 00:51:56.691594 1719 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="1.6s"
Sep 13 00:51:56.758137 kubelet[1719]: I0913 00:51:56.758109 1719 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 00:51:56.758356 kubelet[1719]: E0913 00:51:56.758334 1719 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Sep 13 00:51:56.862072 kubelet[1719]: W0913 00:51:56.862014 1719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Sep 13 00:51:56.862194 kubelet[1719]: E0913 00:51:56.862073 1719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:51:56.898735 kubelet[1719]: I0913 00:51:56.898618 1719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:51:56.898735 kubelet[1719]: I0913 00:51:56.898644 1719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:51:56.898735 kubelet[1719]: I0913 00:51:56.898676 1719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:51:56.898735 kubelet[1719]: I0913 00:51:56.898692 1719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 00:51:56.898735 kubelet[1719]: I0913 00:51:56.898705 1719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cc341e99a4836fe05115dfab3fb431d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6cc341e99a4836fe05115dfab3fb431d\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:51:56.899282 kubelet[1719]: I0913 00:51:56.898720 1719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cc341e99a4836fe05115dfab3fb431d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6cc341e99a4836fe05115dfab3fb431d\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:51:56.899282 kubelet[1719]: I0913 00:51:56.898732 1719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:51:56.899282 kubelet[1719]: I0913 00:51:56.898745 1719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cc341e99a4836fe05115dfab3fb431d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6cc341e99a4836fe05115dfab3fb431d\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:51:56.899282 kubelet[1719]: I0913 00:51:56.898757 1719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:51:57.105924 kubelet[1719]: E0913 00:51:57.105866 1719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:51:57.106097 kubelet[1719]: E0913 00:51:57.105991 1719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:51:57.106648 kubelet[1719]: E0913 00:51:57.106290 1719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:51:57.106719 env[1315]: time="2025-09-13T00:51:57.106521297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6cc341e99a4836fe05115dfab3fb431d,Namespace:kube-system,Attempt:0,}"
Sep 13 00:51:57.106719 env[1315]: time="2025-09-13T00:51:57.106591903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}"
Sep 13 00:51:57.106719 env[1315]: time="2025-09-13T00:51:57.106530058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}"
Sep 13 00:51:57.382985 kubelet[1719]: E0913 00:51:57.382935 1719 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:51:57.560203 kubelet[1719]: I0913 00:51:57.560175 1719 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 00:51:57.560478 kubelet[1719]: E0913 00:51:57.560434 1719 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Sep 13 00:51:57.573503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3869374235.mount: Deactivated successfully.
Sep 13 00:51:57.580483 env[1315]: time="2025-09-13T00:51:57.580436389Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:57.582222 env[1315]: time="2025-09-13T00:51:57.582182814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:57.583684 env[1315]: time="2025-09-13T00:51:57.583660039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:57.584674 env[1315]: time="2025-09-13T00:51:57.584636248Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:57.586094 env[1315]: time="2025-09-13T00:51:57.586072395Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:57.587206 env[1315]: time="2025-09-13T00:51:57.587171222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:57.588340 env[1315]: time="2025-09-13T00:51:57.588312357Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:57.589493 env[1315]: time="2025-09-13T00:51:57.589463818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:57.591300 env[1315]: time="2025-09-13T00:51:57.591274564Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:57.593140 env[1315]: time="2025-09-13T00:51:57.593113396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:57.594373 env[1315]: time="2025-09-13T00:51:57.594339130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:57.594897 env[1315]: time="2025-09-13T00:51:57.594868792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:57.615489 env[1315]: time="2025-09-13T00:51:57.615429647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:51:57.615604 env[1315]: time="2025-09-13T00:51:57.615491111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:51:57.615604 env[1315]: time="2025-09-13T00:51:57.615513144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:51:57.615665 env[1315]: time="2025-09-13T00:51:57.615633325Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63655546006e73630a5a2bf327aaf4b1b65cd9c1d5de3586ece8f46ad54ce239 pid=1762 runtime=io.containerd.runc.v2
Sep 13 00:51:57.621667 env[1315]: time="2025-09-13T00:51:57.621512241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:51:57.621667 env[1315]: time="2025-09-13T00:51:57.621550019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:51:57.621667 env[1315]: time="2025-09-13T00:51:57.621560003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:51:57.621856 env[1315]: time="2025-09-13T00:51:57.621689427Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/343ccd51536922e52cb5b5482a2ff9b87082a3fd019abbc40fc3d583ae7d0724 pid=1779 runtime=io.containerd.runc.v2
Sep 13 00:51:57.625211 env[1315]: time="2025-09-13T00:51:57.625149901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:51:57.625367 env[1315]: time="2025-09-13T00:51:57.625185053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:51:57.625367 env[1315]: time="2025-09-13T00:51:57.625195468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:51:57.625367 env[1315]: time="2025-09-13T00:51:57.625306588Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d58eeede818f5f6f0901bc320902bee24d55874699c511931830aa6a8f6fb547 pid=1798 runtime=io.containerd.runc.v2
Sep 13 00:51:57.665225 env[1315]: time="2025-09-13T00:51:57.665057453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"63655546006e73630a5a2bf327aaf4b1b65cd9c1d5de3586ece8f46ad54ce239\""
Sep 13 00:51:57.666891 kubelet[1719]: E0913 00:51:57.666865 1719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:51:57.668867 env[1315]: time="2025-09-13T00:51:57.668832937Z" level=info msg="CreateContainer within sandbox \"63655546006e73630a5a2bf327aaf4b1b65cd9c1d5de3586ece8f46ad54ce239\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:51:57.681602 env[1315]: time="2025-09-13T00:51:57.681561766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6cc341e99a4836fe05115dfab3fb431d,Namespace:kube-system,Attempt:0,} returns sandbox id \"343ccd51536922e52cb5b5482a2ff9b87082a3fd019abbc40fc3d583ae7d0724\""
Sep 13 00:51:57.682482 kubelet[1719]: E0913 00:51:57.682369 1719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:51:57.684024 env[1315]: time="2025-09-13T00:51:57.683993769Z" level=info msg="CreateContainer within sandbox \"343ccd51536922e52cb5b5482a2ff9b87082a3fd019abbc40fc3d583ae7d0724\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:51:57.685302 env[1315]: time="2025-09-13T00:51:57.685254775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d58eeede818f5f6f0901bc320902bee24d55874699c511931830aa6a8f6fb547\""
Sep 13 00:51:57.685860 env[1315]: time="2025-09-13T00:51:57.685833303Z" level=info msg="CreateContainer within sandbox \"63655546006e73630a5a2bf327aaf4b1b65cd9c1d5de3586ece8f46ad54ce239\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d7c3ce6c55755bbece757c18d1d2571c7400c4a362d77b829593a026bbf3942e\""
Sep 13 00:51:57.685923 kubelet[1719]: E0913 00:51:57.685870 1719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:51:57.687001 env[1315]: time="2025-09-13T00:51:57.686643903Z" level=info msg="StartContainer for \"d7c3ce6c55755bbece757c18d1d2571c7400c4a362d77b829593a026bbf3942e\""
Sep 13 00:51:57.687167 env[1315]: time="2025-09-13T00:51:57.687129943Z" level=info msg="CreateContainer within sandbox \"d58eeede818f5f6f0901bc320902bee24d55874699c511931830aa6a8f6fb547\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:51:57.697747 env[1315]: time="2025-09-13T00:51:57.697704410Z" level=info msg="CreateContainer within sandbox \"343ccd51536922e52cb5b5482a2ff9b87082a3fd019abbc40fc3d583ae7d0724\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"131576f3e93067d5d428a501c179d6e6472ba8f84fd10a95ae91b963e1e7c67c\""
Sep 13 00:51:57.698321 env[1315]: time="2025-09-13T00:51:57.698290404Z" level=info msg="StartContainer for \"131576f3e93067d5d428a501c179d6e6472ba8f84fd10a95ae91b963e1e7c67c\""
Sep 13 00:51:57.706327 env[1315]: time="2025-09-13T00:51:57.706290955Z" level=info msg="CreateContainer within sandbox \"d58eeede818f5f6f0901bc320902bee24d55874699c511931830aa6a8f6fb547\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d5cef41b2b37f7ab833f6a4dc37fec963650d138840e1ee5b0651af5e7eff0f8\""
Sep 13 00:51:57.706788 env[1315]: time="2025-09-13T00:51:57.706769998Z" level=info msg="StartContainer for \"d5cef41b2b37f7ab833f6a4dc37fec963650d138840e1ee5b0651af5e7eff0f8\""
Sep 13 00:51:57.748195 env[1315]: time="2025-09-13T00:51:57.748151167Z" level=info msg="StartContainer for \"d7c3ce6c55755bbece757c18d1d2571c7400c4a362d77b829593a026bbf3942e\" returns successfully"
Sep 13 00:51:57.748793 env[1315]: time="2025-09-13T00:51:57.748748458Z" level=info msg="StartContainer for \"131576f3e93067d5d428a501c179d6e6472ba8f84fd10a95ae91b963e1e7c67c\" returns successfully"
Sep 13 00:51:57.768389 env[1315]: time="2025-09-13T00:51:57.768102051Z" level=info msg="StartContainer for \"d5cef41b2b37f7ab833f6a4dc37fec963650d138840e1ee5b0651af5e7eff0f8\" returns successfully"
Sep 13 00:51:58.310375 kubelet[1719]: E0913 00:51:58.310281 1719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:51:58.312706 kubelet[1719]: E0913 00:51:58.312616 1719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:51:58.326937 kubelet[1719]: E0913 00:51:58.326878 1719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:51:58.868795 kubelet[1719]: E0913 00:51:58.868754 1719 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 13 00:51:59.201484 kubelet[1719]: I0913 00:51:59.201359 1719 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 00:51:59.210977 kubelet[1719]: I0913 00:51:59.210944 1719 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 13 00:51:59.210977 kubelet[1719]: E0913 00:51:59.210974 1719 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 13 00:51:59.219620 kubelet[1719]: E0913 00:51:59.219590 1719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:51:59.319792 kubelet[1719]: E0913 00:51:59.319746 1719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:51:59.329209 kubelet[1719]: E0913 00:51:59.329189 1719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:51:59.420948 kubelet[1719]: E0913 00:51:59.420863 1719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:52:00.274201 kubelet[1719]: I0913 00:52:00.274165 1719 apiserver.go:52] "Watching apiserver"
Sep 13 00:52:00.290085 kubelet[1719]: I0913 00:52:00.290065 1719 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:52:00.752026 systemd[1]: Reloading.
Sep 13 00:52:00.814862 /usr/lib/systemd/system-generators/torcx-generator[2012]: time="2025-09-13T00:52:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:52:00.845706 /usr/lib/systemd/system-generators/torcx-generator[2012]: time="2025-09-13T00:52:00Z" level=info msg="torcx already run"
Sep 13 00:52:00.875529 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:52:00.875543 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:52:00.892331 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:52:00.968073 systemd[1]: Stopping kubelet.service...
Sep 13 00:52:00.991157 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:52:00.991372 systemd[1]: Stopped kubelet.service.
Sep 13 00:52:00.992739 systemd[1]: Starting kubelet.service...
Sep 13 00:52:01.073092 systemd[1]: Started kubelet.service.
Sep 13 00:52:01.104695 kubelet[2070]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:52:01.104695 kubelet[2070]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:52:01.104695 kubelet[2070]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:52:01.105184 kubelet[2070]: I0913 00:52:01.104736 2070 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:52:01.109641 kubelet[2070]: I0913 00:52:01.109613 2070 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:52:01.109641 kubelet[2070]: I0913 00:52:01.109631 2070 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:52:01.109817 kubelet[2070]: I0913 00:52:01.109801 2070 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:52:01.203467 kubelet[2070]: I0913 00:52:01.203424 2070 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 13 00:52:01.205960 kubelet[2070]: I0913 00:52:01.205943 2070 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:52:01.208404 kubelet[2070]: E0913 00:52:01.208382 2070 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:52:01.208404 kubelet[2070]: I0913 00:52:01.208402 2070 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:52:01.211374 kubelet[2070]: I0913 00:52:01.211357 2070 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:52:01.211656 kubelet[2070]: I0913 00:52:01.211636 2070 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:52:01.211736 kubelet[2070]: I0913 00:52:01.211716 2070 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:52:01.211868 kubelet[2070]: I0913 00:52:01.211735 2070 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 00:52:01.211971 kubelet[2070]: I0913 00:52:01.211868 2070 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:52:01.211971 kubelet[2070]: I0913 00:52:01.211875 2070 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:52:01.211971 kubelet[2070]: I0913 00:52:01.211895 2070 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:52:01.212066 kubelet[2070]: I0913 00:52:01.211974 2070 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:52:01.212066 kubelet[2070]: I0913 00:52:01.211983 2070 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:52:01.212066 kubelet[2070]: I0913 00:52:01.212001 2070 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:52:01.212066 kubelet[2070]: I0913 00:52:01.212018 2070 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:52:01.215455 kubelet[2070]: I0913 00:52:01.212840 2070 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:52:01.215455 kubelet[2070]: I0913 00:52:01.213329 2070 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:52:01.216184 kubelet[2070]: I0913 00:52:01.216157 2070 server.go:1274] "Started kubelet"
Sep 13 00:52:01.217892 kubelet[2070]: I0913 00:52:01.217701 2070 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:52:01.217892 kubelet[2070]: I0913 00:52:01.219139 2070 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:52:01.219688 kubelet[2070]: E0913 00:52:01.219610 2070 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:52:01.221259 kubelet[2070]: I0913 00:52:01.221236 2070 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:52:01.221490 kubelet[2070]: I0913 00:52:01.221466 2070 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:52:01.222447 kubelet[2070]: I0913 00:52:01.221858 2070 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:52:01.222447 kubelet[2070]: I0913 00:52:01.222182 2070 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:52:01.222935 kubelet[2070]: I0913 00:52:01.222858 2070 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:52:01.223600 kubelet[2070]: I0913 00:52:01.223577 2070 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:52:01.225573 kubelet[2070]: I0913 00:52:01.225546 2070 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:52:01.226704 kubelet[2070]: I0913 00:52:01.226682 2070 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:52:01.226797 kubelet[2070]: I0913 00:52:01.226776 2070 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:52:01.228633 kubelet[2070]: I0913 00:52:01.228612 2070 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:52:01.238499 kubelet[2070]: I0913 00:52:01.238453 2070 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:52:01.239259 kubelet[2070]: I0913 00:52:01.239233 2070 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:52:01.239259 kubelet[2070]: I0913 00:52:01.239249 2070 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:52:01.239259 kubelet[2070]: I0913 00:52:01.239263 2070 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:52:01.239601 kubelet[2070]: E0913 00:52:01.239296 2070 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:52:01.269003 kubelet[2070]: I0913 00:52:01.268969 2070 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:52:01.269003 kubelet[2070]: I0913 00:52:01.268987 2070 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:52:01.269003 kubelet[2070]: I0913 00:52:01.269002 2070 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:52:01.269197 kubelet[2070]: I0913 00:52:01.269150 2070 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:52:01.269197 kubelet[2070]: I0913 00:52:01.269159 2070 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:52:01.269197 kubelet[2070]: I0913 00:52:01.269179 2070 policy_none.go:49] "None policy: Start"
Sep 13 00:52:01.269927 kubelet[2070]: I0913 00:52:01.269896 2070 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:52:01.269980 kubelet[2070]: I0913 00:52:01.269931 2070 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:52:01.270078 kubelet[2070]: I0913 00:52:01.270066 2070 state_mem.go:75] "Updated machine memory state"
Sep 13 00:52:01.271014 kubelet[2070]: I0913 00:52:01.270995 2070 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:52:01.271168 kubelet[2070]: I0913 00:52:01.271150 2070 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:52:01.271220 kubelet[2070]: I0913 00:52:01.271162 2070 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:52:01.271333 kubelet[2070]: I0913 00:52:01.271313 2070 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:52:01.378254 kubelet[2070]: I0913 00:52:01.378144 2070 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 00:52:01.384329 kubelet[2070]: I0913 00:52:01.384308 2070 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 13 00:52:01.384408 kubelet[2070]: I0913 00:52:01.384360 2070 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 13 00:52:01.427125 kubelet[2070]: I0913 00:52:01.427070 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:52:01.427216 kubelet[2070]: I0913 00:52:01.427136 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:52:01.427216 kubelet[2070]: I0913 00:52:01.427162 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:52:01.427310 kubelet[2070]: I0913 00:52:01.427264 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 00:52:01.427337 kubelet[2070]: I0913 00:52:01.427318 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cc341e99a4836fe05115dfab3fb431d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6cc341e99a4836fe05115dfab3fb431d\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:52:01.427361 kubelet[2070]: I0913 00:52:01.427340 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cc341e99a4836fe05115dfab3fb431d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6cc341e99a4836fe05115dfab3fb431d\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:52:01.427361 kubelet[2070]: I0913 00:52:01.427356 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:52:01.427413 kubelet[2070]: I0913 00:52:01.427368 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:52:01.427413 kubelet[2070]: I0913 00:52:01.427390 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cc341e99a4836fe05115dfab3fb431d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6cc341e99a4836fe05115dfab3fb431d\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:52:01.647321 kubelet[2070]: E0913 00:52:01.646814 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:01.647321 kubelet[2070]: E0913 00:52:01.646845 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:01.647321 kubelet[2070]: E0913 00:52:01.646823 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:01.744142 sudo[2105]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 13 00:52:01.744347 sudo[2105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 13 00:52:02.208667 sudo[2105]: pam_unix(sudo:session): session closed for user root
Sep 13 00:52:02.212621 kubelet[2070]: I0913 00:52:02.212598 2070 apiserver.go:52] "Watching apiserver"
Sep 13 00:52:02.222562 kubelet[2070]: I0913 00:52:02.222523 2070 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:52:02.252228 kubelet[2070]: E0913 00:52:02.250404 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:02.252228 kubelet[2070]: E0913 00:52:02.250404 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13
00:52:02.252228 kubelet[2070]: E0913 00:52:02.250634 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:02.273513 kubelet[2070]: I0913 00:52:02.273437 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.273408061 podStartE2EDuration="1.273408061s" podCreationTimestamp="2025-09-13 00:52:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:02.27313331 +0000 UTC m=+1.195523780" watchObservedRunningTime="2025-09-13 00:52:02.273408061 +0000 UTC m=+1.195798541" Sep 13 00:52:02.282571 kubelet[2070]: I0913 00:52:02.282511 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.282489711 podStartE2EDuration="1.282489711s" podCreationTimestamp="2025-09-13 00:52:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:02.282203625 +0000 UTC m=+1.204594105" watchObservedRunningTime="2025-09-13 00:52:02.282489711 +0000 UTC m=+1.204880191" Sep 13 00:52:02.390414 kubelet[2070]: I0913 00:52:02.390314 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.390292735 podStartE2EDuration="1.390292735s" podCreationTimestamp="2025-09-13 00:52:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:02.382594994 +0000 UTC m=+1.304985474" watchObservedRunningTime="2025-09-13 00:52:02.390292735 +0000 UTC m=+1.312683255" Sep 13 00:52:03.252590 kubelet[2070]: E0913 00:52:03.252541 2070 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:03.815051 sudo[1435]: pam_unix(sudo:session): session closed for user root Sep 13 00:52:03.816095 sshd[1429]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:03.818770 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:60066.service: Deactivated successfully. Sep 13 00:52:03.819531 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:52:03.820792 systemd-logind[1298]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:52:03.823793 systemd-logind[1298]: Removed session 5. Sep 13 00:52:04.520657 kubelet[2070]: E0913 00:52:04.520603 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:07.810812 kubelet[2070]: I0913 00:52:07.810781 2070 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:52:07.811269 env[1315]: time="2025-09-13T00:52:07.811229704Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:52:07.811465 kubelet[2070]: I0913 00:52:07.811398 2070 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:52:08.226783 kubelet[2070]: E0913 00:52:08.226742 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:08.257608 kubelet[2070]: E0913 00:52:08.257573 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:08.375361 kubelet[2070]: I0913 00:52:08.375316 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74mq5\" (UniqueName: \"kubernetes.io/projected/4350dfdb-a495-4913-9b60-a23517c49928-kube-api-access-74mq5\") pod \"kube-proxy-8mqfv\" (UID: \"4350dfdb-a495-4913-9b60-a23517c49928\") " pod="kube-system/kube-proxy-8mqfv" Sep 13 00:52:08.375361 kubelet[2070]: I0913 00:52:08.375361 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-cni-path\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375583 kubelet[2070]: I0913 00:52:08.375391 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-etc-cni-netd\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375583 kubelet[2070]: I0913 00:52:08.375410 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-host-proc-sys-kernel\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375583 kubelet[2070]: I0913 00:52:08.375426 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2b53021-2597-475b-ac6f-62830e202f36-clustermesh-secrets\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375583 kubelet[2070]: I0913 00:52:08.375442 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-cilium-run\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375583 kubelet[2070]: I0913 00:52:08.375458 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4350dfdb-a495-4913-9b60-a23517c49928-xtables-lock\") pod \"kube-proxy-8mqfv\" (UID: \"4350dfdb-a495-4913-9b60-a23517c49928\") " pod="kube-system/kube-proxy-8mqfv" Sep 13 00:52:08.375583 kubelet[2070]: I0913 00:52:08.375511 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-xtables-lock\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375734 kubelet[2070]: I0913 00:52:08.375562 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-lib-modules\") pod \"cilium-8px8l\" (UID: 
\"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375734 kubelet[2070]: I0913 00:52:08.375583 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2b53021-2597-475b-ac6f-62830e202f36-hubble-tls\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375734 kubelet[2070]: I0913 00:52:08.375600 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-hostproc\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375734 kubelet[2070]: I0913 00:52:08.375614 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-cilium-cgroup\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375734 kubelet[2070]: I0913 00:52:08.375634 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2b53021-2597-475b-ac6f-62830e202f36-cilium-config-path\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375734 kubelet[2070]: I0913 00:52:08.375670 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-host-proc-sys-net\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375966 kubelet[2070]: I0913 
00:52:08.375702 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7xgr\" (UniqueName: \"kubernetes.io/projected/e2b53021-2597-475b-ac6f-62830e202f36-kube-api-access-j7xgr\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.375966 kubelet[2070]: I0913 00:52:08.375722 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4350dfdb-a495-4913-9b60-a23517c49928-kube-proxy\") pod \"kube-proxy-8mqfv\" (UID: \"4350dfdb-a495-4913-9b60-a23517c49928\") " pod="kube-system/kube-proxy-8mqfv" Sep 13 00:52:08.375966 kubelet[2070]: I0913 00:52:08.375741 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4350dfdb-a495-4913-9b60-a23517c49928-lib-modules\") pod \"kube-proxy-8mqfv\" (UID: \"4350dfdb-a495-4913-9b60-a23517c49928\") " pod="kube-system/kube-proxy-8mqfv" Sep 13 00:52:08.375966 kubelet[2070]: I0913 00:52:08.375771 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-bpf-maps\") pod \"cilium-8px8l\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") " pod="kube-system/cilium-8px8l" Sep 13 00:52:08.476503 kubelet[2070]: I0913 00:52:08.476475 2070 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:52:08.522653 kubelet[2070]: E0913 00:52:08.522560 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:08.583781 kubelet[2070]: E0913 00:52:08.583747 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:08.584305 env[1315]: time="2025-09-13T00:52:08.584257713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mqfv,Uid:4350dfdb-a495-4913-9b60-a23517c49928,Namespace:kube-system,Attempt:0,}" Sep 13 00:52:08.590489 kubelet[2070]: E0913 00:52:08.590463 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:08.590738 env[1315]: time="2025-09-13T00:52:08.590715325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8px8l,Uid:e2b53021-2597-475b-ac6f-62830e202f36,Namespace:kube-system,Attempt:0,}" Sep 13 00:52:09.002176 env[1315]: time="2025-09-13T00:52:09.002110026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:52:09.002176 env[1315]: time="2025-09-13T00:52:09.002151583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:52:09.002176 env[1315]: time="2025-09-13T00:52:09.002164049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:52:09.002644 env[1315]: time="2025-09-13T00:52:09.002298750Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c921764440b6f34630353e89b7cbdc067b2b20d366a27bbd6e06ab337e6f5b4 pid=2168 runtime=io.containerd.runc.v2 Sep 13 00:52:09.010566 env[1315]: time="2025-09-13T00:52:09.009698053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:52:09.010566 env[1315]: time="2025-09-13T00:52:09.009732304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:52:09.010566 env[1315]: time="2025-09-13T00:52:09.009741644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:52:09.010566 env[1315]: time="2025-09-13T00:52:09.009846634Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc pid=2193 runtime=io.containerd.runc.v2 Sep 13 00:52:09.047956 env[1315]: time="2025-09-13T00:52:09.047622906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mqfv,Uid:4350dfdb-a495-4913-9b60-a23517c49928,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c921764440b6f34630353e89b7cbdc067b2b20d366a27bbd6e06ab337e6f5b4\"" Sep 13 00:52:09.048371 kubelet[2070]: E0913 00:52:09.048353 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:09.051334 env[1315]: time="2025-09-13T00:52:09.051303944Z" level=info msg="CreateContainer within sandbox \"1c921764440b6f34630353e89b7cbdc067b2b20d366a27bbd6e06ab337e6f5b4\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:52:09.051799 env[1315]: time="2025-09-13T00:52:09.051744474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8px8l,Uid:e2b53021-2597-475b-ac6f-62830e202f36,Namespace:kube-system,Attempt:0,} returns sandbox id \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\"" Sep 13 00:52:09.052575 kubelet[2070]: E0913 00:52:09.052552 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:09.053348 env[1315]: time="2025-09-13T00:52:09.053325366Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:52:09.069117 env[1315]: time="2025-09-13T00:52:09.069071380Z" level=info msg="CreateContainer within sandbox \"1c921764440b6f34630353e89b7cbdc067b2b20d366a27bbd6e06ab337e6f5b4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"34d3122c4192c9e764cfcd040a37e54d27a022814c9bac15370a04be3b11db07\"" Sep 13 00:52:09.069593 env[1315]: time="2025-09-13T00:52:09.069567056Z" level=info msg="StartContainer for \"34d3122c4192c9e764cfcd040a37e54d27a022814c9bac15370a04be3b11db07\"" Sep 13 00:52:09.080632 kubelet[2070]: I0913 00:52:09.080571 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltctn\" (UniqueName: \"kubernetes.io/projected/0dce787e-94cc-4ec8-8328-562c862576bf-kube-api-access-ltctn\") pod \"cilium-operator-5d85765b45-grdw7\" (UID: \"0dce787e-94cc-4ec8-8328-562c862576bf\") " pod="kube-system/cilium-operator-5d85765b45-grdw7" Sep 13 00:52:09.080632 kubelet[2070]: I0913 00:52:09.080606 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0dce787e-94cc-4ec8-8328-562c862576bf-cilium-config-path\") pod \"cilium-operator-5d85765b45-grdw7\" (UID: \"0dce787e-94cc-4ec8-8328-562c862576bf\") " pod="kube-system/cilium-operator-5d85765b45-grdw7" Sep 13 00:52:09.110240 env[1315]: time="2025-09-13T00:52:09.110182699Z" level=info msg="StartContainer for \"34d3122c4192c9e764cfcd040a37e54d27a022814c9bac15370a04be3b11db07\" returns successfully" Sep 13 00:52:09.261201 kubelet[2070]: E0913 00:52:09.260926 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:09.261201 kubelet[2070]: E0913 00:52:09.261071 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:09.279423 kubelet[2070]: I0913 00:52:09.279304 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8mqfv" podStartSLOduration=1.279265017 podStartE2EDuration="1.279265017s" podCreationTimestamp="2025-09-13 00:52:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:09.268938592 +0000 UTC m=+8.191329072" watchObservedRunningTime="2025-09-13 00:52:09.279265017 +0000 UTC m=+8.201655497" Sep 13 00:52:09.322586 kubelet[2070]: E0913 00:52:09.322551 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:09.323207 env[1315]: time="2025-09-13T00:52:09.323178278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-grdw7,Uid:0dce787e-94cc-4ec8-8328-562c862576bf,Namespace:kube-system,Attempt:0,}" Sep 13 00:52:09.338822 env[1315]: time="2025-09-13T00:52:09.338768256Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:52:09.339179 env[1315]: time="2025-09-13T00:52:09.339062040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:52:09.339179 env[1315]: time="2025-09-13T00:52:09.339111925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:52:09.339681 env[1315]: time="2025-09-13T00:52:09.339616608Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3decca68c9379a4b918cce6ec75dc7730dbb54b2c646e6250842ad3f84514f5d pid=2338 runtime=io.containerd.runc.v2 Sep 13 00:52:09.389233 env[1315]: time="2025-09-13T00:52:09.388088469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-grdw7,Uid:0dce787e-94cc-4ec8-8328-562c862576bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"3decca68c9379a4b918cce6ec75dc7730dbb54b2c646e6250842ad3f84514f5d\"" Sep 13 00:52:09.389414 kubelet[2070]: E0913 00:52:09.388548 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:10.263213 kubelet[2070]: E0913 00:52:10.263173 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:14.123133 update_engine[1299]: I0913 00:52:14.123045 1299 update_attempter.cc:509] Updating boot flags... Sep 13 00:52:14.173722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3770501690.mount: Deactivated successfully. 
Sep 13 00:52:14.524469 kubelet[2070]: E0913 00:52:14.524224 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:18.733198 env[1315]: time="2025-09-13T00:52:18.733147086Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:18.735386 env[1315]: time="2025-09-13T00:52:18.735341798Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:18.737294 env[1315]: time="2025-09-13T00:52:18.737265528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:18.737833 env[1315]: time="2025-09-13T00:52:18.737800473Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:52:18.740791 env[1315]: time="2025-09-13T00:52:18.739899143Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:52:18.743854 env[1315]: time="2025-09-13T00:52:18.743823515Z" level=info msg="CreateContainer within sandbox \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:52:18.759643 env[1315]: time="2025-09-13T00:52:18.759561200Z" level=info msg="CreateContainer 
within sandbox \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381\"" Sep 13 00:52:18.760025 env[1315]: time="2025-09-13T00:52:18.760000443Z" level=info msg="StartContainer for \"95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381\"" Sep 13 00:52:18.804263 env[1315]: time="2025-09-13T00:52:18.804200453Z" level=info msg="StartContainer for \"95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381\" returns successfully" Sep 13 00:52:19.241562 env[1315]: time="2025-09-13T00:52:19.241498625Z" level=info msg="shim disconnected" id=95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381 Sep 13 00:52:19.241562 env[1315]: time="2025-09-13T00:52:19.241558535Z" level=warning msg="cleaning up after shim disconnected" id=95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381 namespace=k8s.io Sep 13 00:52:19.241562 env[1315]: time="2025-09-13T00:52:19.241568365Z" level=info msg="cleaning up dead shim" Sep 13 00:52:19.250897 env[1315]: time="2025-09-13T00:52:19.250823979Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2513 runtime=io.containerd.runc.v2\n" Sep 13 00:52:19.280263 kubelet[2070]: E0913 00:52:19.280223 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:52:19.282680 env[1315]: time="2025-09-13T00:52:19.282113204Z" level=info msg="CreateContainer within sandbox \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:52:19.306968 env[1315]: time="2025-09-13T00:52:19.306897549Z" level=info msg="CreateContainer within sandbox 
\"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0\"" Sep 13 00:52:19.307555 env[1315]: time="2025-09-13T00:52:19.307520196Z" level=info msg="StartContainer for \"05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0\"" Sep 13 00:52:19.349137 env[1315]: time="2025-09-13T00:52:19.349086183Z" level=info msg="StartContainer for \"05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0\" returns successfully" Sep 13 00:52:19.358865 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:52:19.359282 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:52:19.359530 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:52:19.361068 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:52:19.370046 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:52:19.383141 env[1315]: time="2025-09-13T00:52:19.383090710Z" level=info msg="shim disconnected" id=05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0 Sep 13 00:52:19.383141 env[1315]: time="2025-09-13T00:52:19.383135811Z" level=warning msg="cleaning up after shim disconnected" id=05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0 namespace=k8s.io Sep 13 00:52:19.383141 env[1315]: time="2025-09-13T00:52:19.383145660Z" level=info msg="cleaning up dead shim" Sep 13 00:52:19.389895 env[1315]: time="2025-09-13T00:52:19.389856123Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2576 runtime=io.containerd.runc.v2\n" Sep 13 00:52:19.757444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381-rootfs.mount: Deactivated successfully. 
Sep 13 00:52:20.283719 kubelet[2070]: E0913 00:52:20.283294 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:20.285968 env[1315]: time="2025-09-13T00:52:20.285881374Z" level=info msg="CreateContainer within sandbox \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:52:20.310194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4024140594.mount: Deactivated successfully.
Sep 13 00:52:20.323034 env[1315]: time="2025-09-13T00:52:20.322993428Z" level=info msg="CreateContainer within sandbox \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398\""
Sep 13 00:52:20.324090 env[1315]: time="2025-09-13T00:52:20.324068507Z" level=info msg="StartContainer for \"fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398\""
Sep 13 00:52:20.378936 env[1315]: time="2025-09-13T00:52:20.378878045Z" level=info msg="StartContainer for \"fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398\" returns successfully"
Sep 13 00:52:20.434061 env[1315]: time="2025-09-13T00:52:20.433991441Z" level=info msg="shim disconnected" id=fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398
Sep 13 00:52:20.434061 env[1315]: time="2025-09-13T00:52:20.434059227Z" level=warning msg="cleaning up after shim disconnected" id=fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398 namespace=k8s.io
Sep 13 00:52:20.434345 env[1315]: time="2025-09-13T00:52:20.434073866Z" level=info msg="cleaning up dead shim"
Sep 13 00:52:20.441875 env[1315]: time="2025-09-13T00:52:20.441816332Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2633 runtime=io.containerd.runc.v2\n"
Sep 13 00:52:20.757446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398-rootfs.mount: Deactivated successfully.
Sep 13 00:52:21.064308 env[1315]: time="2025-09-13T00:52:21.064153358Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:52:21.066110 env[1315]: time="2025-09-13T00:52:21.066064567Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:52:21.067897 env[1315]: time="2025-09-13T00:52:21.067873694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:52:21.068416 env[1315]: time="2025-09-13T00:52:21.068392529Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 00:52:21.070624 env[1315]: time="2025-09-13T00:52:21.070585089Z" level=info msg="CreateContainer within sandbox \"3decca68c9379a4b918cce6ec75dc7730dbb54b2c646e6250842ad3f84514f5d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 00:52:21.086599 env[1315]: time="2025-09-13T00:52:21.086527300Z" level=info msg="CreateContainer within sandbox \"3decca68c9379a4b918cce6ec75dc7730dbb54b2c646e6250842ad3f84514f5d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39\""
Sep 13 00:52:21.087344 env[1315]: time="2025-09-13T00:52:21.087277556Z" level=info msg="StartContainer for \"fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39\""
Sep 13 00:52:21.129129 env[1315]: time="2025-09-13T00:52:21.129062824Z" level=info msg="StartContainer for \"fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39\" returns successfully"
Sep 13 00:52:21.285527 kubelet[2070]: E0913 00:52:21.285477 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:21.287295 kubelet[2070]: E0913 00:52:21.287276 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:21.288847 env[1315]: time="2025-09-13T00:52:21.288810077Z" level=info msg="CreateContainer within sandbox \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:52:21.517242 env[1315]: time="2025-09-13T00:52:21.517190109Z" level=info msg="CreateContainer within sandbox \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d\""
Sep 13 00:52:21.518639 kubelet[2070]: I0913 00:52:21.518561 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-grdw7" podStartSLOduration=1.8383554119999999 podStartE2EDuration="13.518541843s" podCreationTimestamp="2025-09-13 00:52:08 +0000 UTC" firstStartedPulling="2025-09-13 00:52:09.38929101 +0000 UTC m=+8.311681490" lastFinishedPulling="2025-09-13 00:52:21.069477441 +0000 UTC m=+19.991867921" observedRunningTime="2025-09-13 00:52:21.499790684 +0000 UTC m=+20.422181164" watchObservedRunningTime="2025-09-13 00:52:21.518541843 +0000 UTC m=+20.440932324"
Sep 13 00:52:21.523274 env[1315]: time="2025-09-13T00:52:21.523226332Z" level=info msg="StartContainer for \"5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d\""
Sep 13 00:52:21.568412 env[1315]: time="2025-09-13T00:52:21.568370847Z" level=info msg="StartContainer for \"5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d\" returns successfully"
Sep 13 00:52:21.609275 env[1315]: time="2025-09-13T00:52:21.609229147Z" level=info msg="shim disconnected" id=5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d
Sep 13 00:52:21.609497 env[1315]: time="2025-09-13T00:52:21.609475819Z" level=warning msg="cleaning up after shim disconnected" id=5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d namespace=k8s.io
Sep 13 00:52:21.609581 env[1315]: time="2025-09-13T00:52:21.609563223Z" level=info msg="cleaning up dead shim"
Sep 13 00:52:21.617556 env[1315]: time="2025-09-13T00:52:21.617504153Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2730 runtime=io.containerd.runc.v2\n"
Sep 13 00:52:22.298226 kubelet[2070]: E0913 00:52:22.298177 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:22.298670 kubelet[2070]: E0913 00:52:22.298184 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:22.299932 env[1315]: time="2025-09-13T00:52:22.299873226Z" level=info msg="CreateContainer within sandbox \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:52:22.315047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4219867193.mount: Deactivated successfully.
Sep 13 00:52:22.323647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414781417.mount: Deactivated successfully.
Sep 13 00:52:22.328639 env[1315]: time="2025-09-13T00:52:22.328589977Z" level=info msg="CreateContainer within sandbox \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\""
Sep 13 00:52:22.330057 env[1315]: time="2025-09-13T00:52:22.329023038Z" level=info msg="StartContainer for \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\""
Sep 13 00:52:22.369106 env[1315]: time="2025-09-13T00:52:22.369035259Z" level=info msg="StartContainer for \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\" returns successfully"
Sep 13 00:52:22.433512 kubelet[2070]: I0913 00:52:22.433461 2070 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 00:52:22.478323 kubelet[2070]: I0913 00:52:22.476784 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j28b\" (UniqueName: \"kubernetes.io/projected/dc5130bc-4b31-4b0a-88a6-453a4424895a-kube-api-access-5j28b\") pod \"coredns-7c65d6cfc9-84xtm\" (UID: \"dc5130bc-4b31-4b0a-88a6-453a4424895a\") " pod="kube-system/coredns-7c65d6cfc9-84xtm"
Sep 13 00:52:22.478323 kubelet[2070]: I0913 00:52:22.476817 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0642c652-4405-4a5d-a771-067dafea82a6-config-volume\") pod \"coredns-7c65d6cfc9-xsltc\" (UID: \"0642c652-4405-4a5d-a771-067dafea82a6\") " pod="kube-system/coredns-7c65d6cfc9-xsltc"
Sep 13 00:52:22.478323 kubelet[2070]: I0913 00:52:22.476831 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krbj\" (UniqueName: \"kubernetes.io/projected/0642c652-4405-4a5d-a771-067dafea82a6-kube-api-access-7krbj\") pod \"coredns-7c65d6cfc9-xsltc\" (UID: \"0642c652-4405-4a5d-a771-067dafea82a6\") " pod="kube-system/coredns-7c65d6cfc9-xsltc"
Sep 13 00:52:22.478323 kubelet[2070]: I0913 00:52:22.476845 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc5130bc-4b31-4b0a-88a6-453a4424895a-config-volume\") pod \"coredns-7c65d6cfc9-84xtm\" (UID: \"dc5130bc-4b31-4b0a-88a6-453a4424895a\") " pod="kube-system/coredns-7c65d6cfc9-84xtm"
Sep 13 00:52:22.761203 kubelet[2070]: E0913 00:52:22.761160 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:22.761820 env[1315]: time="2025-09-13T00:52:22.761779674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xsltc,Uid:0642c652-4405-4a5d-a771-067dafea82a6,Namespace:kube-system,Attempt:0,}"
Sep 13 00:52:22.763430 kubelet[2070]: E0913 00:52:22.763405 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:22.764059 env[1315]: time="2025-09-13T00:52:22.764021562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-84xtm,Uid:dc5130bc-4b31-4b0a-88a6-453a4424895a,Namespace:kube-system,Attempt:0,}"
Sep 13 00:52:23.300977 kubelet[2070]: E0913 00:52:23.300934 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:24.302738 kubelet[2070]: E0913 00:52:24.302695 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:25.085445 systemd-networkd[1076]: cilium_host: Link UP
Sep 13 00:52:25.085558 systemd-networkd[1076]: cilium_net: Link UP
Sep 13 00:52:25.090068 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Sep 13 00:52:25.090178 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 13 00:52:25.088944 systemd-networkd[1076]: cilium_net: Gained carrier
Sep 13 00:52:25.089184 systemd-networkd[1076]: cilium_host: Gained carrier
Sep 13 00:52:25.089301 systemd-networkd[1076]: cilium_net: Gained IPv6LL
Sep 13 00:52:25.089459 systemd-networkd[1076]: cilium_host: Gained IPv6LL
Sep 13 00:52:25.156592 systemd-networkd[1076]: cilium_vxlan: Link UP
Sep 13 00:52:25.156599 systemd-networkd[1076]: cilium_vxlan: Gained carrier
Sep 13 00:52:25.304615 kubelet[2070]: E0913 00:52:25.304576 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:25.343944 kernel: NET: Registered PF_ALG protocol family
Sep 13 00:52:25.884086 systemd-networkd[1076]: lxc_health: Link UP
Sep 13 00:52:25.903068 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:52:25.903341 systemd-networkd[1076]: lxc_health: Gained carrier
Sep 13 00:52:26.362786 systemd-networkd[1076]: lxcdb404c976db4: Link UP
Sep 13 00:52:26.363002 systemd-networkd[1076]: lxc06397a03571c: Link UP
Sep 13 00:52:26.370593 kernel: eth0: renamed from tmpb4579
Sep 13 00:52:26.374937 kernel: eth0: renamed from tmp10dfc
Sep 13 00:52:26.380553 systemd-networkd[1076]: lxcdb404c976db4: Gained carrier
Sep 13 00:52:26.381373 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdb404c976db4: link becomes ready
Sep 13 00:52:26.381422 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 13 00:52:26.381443 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc06397a03571c: link becomes ready
Sep 13 00:52:26.382517 systemd-networkd[1076]: lxc06397a03571c: Gained carrier
Sep 13 00:52:26.593091 kubelet[2070]: E0913 00:52:26.592632 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:26.620846 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:59220.service.
Sep 13 00:52:26.624667 kubelet[2070]: I0913 00:52:26.624627 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8px8l" podStartSLOduration=8.937875525 podStartE2EDuration="18.624609588s" podCreationTimestamp="2025-09-13 00:52:08 +0000 UTC" firstStartedPulling="2025-09-13 00:52:09.053023836 +0000 UTC m=+7.975414316" lastFinishedPulling="2025-09-13 00:52:18.739757899 +0000 UTC m=+17.662148379" observedRunningTime="2025-09-13 00:52:23.318046806 +0000 UTC m=+22.240437276" watchObservedRunningTime="2025-09-13 00:52:26.624609588 +0000 UTC m=+25.547000068"
Sep 13 00:52:26.661260 sshd[3283]: Accepted publickey for core from 10.0.0.1 port 59220 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:52:26.664152 sshd[3283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:26.671836 systemd[1]: Started session-6.scope.
Sep 13 00:52:26.673099 systemd-logind[1298]: New session 6 of user core.
Sep 13 00:52:26.810088 sshd[3283]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:26.812636 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:59220.service: Deactivated successfully.
Sep 13 00:52:26.813305 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:52:26.813591 systemd-logind[1298]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:52:26.814183 systemd-logind[1298]: Removed session 6.
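An aside on the kubelet pod_startup_latency_tracker entries above (for cilium-operator-5d85765b45-grdw7 and cilium-8px8l): each reports both a podStartE2EDuration and a smaller podStartSLOduration. A minimal arithmetic sketch, using the m=+ monotonic offsets copied verbatim from the cilium-8px8l entry, shows the reported numbers are consistent with the SLO duration being the end-to-end duration minus the image-pull window. This is an observation about the values in this log, not a claim about kubelet internals:

```python
# Offsets (seconds since kubelet start) copied verbatim from the
# "Observed pod startup duration" entry for kube-system/cilium-8px8l above.
first_started_pulling = 7.975414316   # m=+7.975414316
last_finished_pulling = 17.662148379  # m=+17.662148379
e2e_duration = 18.624609588           # podStartE2EDuration="18.624609588s"

# Assumption (from the numbers alone): podStartSLOduration excludes the
# image-pull window from the end-to-end duration.
pull_window = last_finished_pulling - first_started_pulling
slo_duration = e2e_duration - pull_window
print(round(slo_duration, 9))  # the log reports podStartSLOduration=8.937875525
```

The same subtraction reproduces the cilium-operator figure as well, which is why the SLO duration is so much smaller than the wall-clock pod startup time on this node: most of the time went into pulling images.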
Sep 13 00:52:27.090075 systemd-networkd[1076]: cilium_vxlan: Gained IPv6LL
Sep 13 00:52:27.154203 systemd-networkd[1076]: lxc_health: Gained IPv6LL
Sep 13 00:52:27.308150 kubelet[2070]: E0913 00:52:27.308103 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:27.795514 systemd-networkd[1076]: lxcdb404c976db4: Gained IPv6LL
Sep 13 00:52:27.858090 systemd-networkd[1076]: lxc06397a03571c: Gained IPv6LL
Sep 13 00:52:28.310246 kubelet[2070]: E0913 00:52:28.310197 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:29.697786 env[1315]: time="2025-09-13T00:52:29.697722294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:52:29.697786 env[1315]: time="2025-09-13T00:52:29.697761221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:52:29.697786 env[1315]: time="2025-09-13T00:52:29.697774888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:52:29.698245 env[1315]: time="2025-09-13T00:52:29.697924140Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10dfcfc67e89594d96137670051f0d8f8c03be369d39fcb056589a2c45f42d0e pid=3326 runtime=io.containerd.runc.v2
Sep 13 00:52:29.702001 env[1315]: time="2025-09-13T00:52:29.701840730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:52:29.702001 env[1315]: time="2025-09-13T00:52:29.701873784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:52:29.702001 env[1315]: time="2025-09-13T00:52:29.701882872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:52:29.702127 env[1315]: time="2025-09-13T00:52:29.702058406Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4579f0e111e5ad7ba31f6720b5e83346d0d10fb85fdbf9a324dd1118140f4da pid=3335 runtime=io.containerd.runc.v2
Sep 13 00:52:29.725472 systemd-resolved[1231]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:52:29.725895 systemd-resolved[1231]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:52:29.747460 env[1315]: time="2025-09-13T00:52:29.747408045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-84xtm,Uid:dc5130bc-4b31-4b0a-88a6-453a4424895a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4579f0e111e5ad7ba31f6720b5e83346d0d10fb85fdbf9a324dd1118140f4da\""
Sep 13 00:52:29.750545 kubelet[2070]: E0913 00:52:29.750512 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:29.753517 env[1315]: time="2025-09-13T00:52:29.752694810Z" level=info msg="CreateContainer within sandbox \"b4579f0e111e5ad7ba31f6720b5e83346d0d10fb85fdbf9a324dd1118140f4da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:52:29.753616 env[1315]: time="2025-09-13T00:52:29.753588142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xsltc,Uid:0642c652-4405-4a5d-a771-067dafea82a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"10dfcfc67e89594d96137670051f0d8f8c03be369d39fcb056589a2c45f42d0e\""
Sep 13 00:52:29.754069 kubelet[2070]: E0913 00:52:29.754044 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:29.755176 env[1315]: time="2025-09-13T00:52:29.755143993Z" level=info msg="CreateContainer within sandbox \"10dfcfc67e89594d96137670051f0d8f8c03be369d39fcb056589a2c45f42d0e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:52:29.772319 env[1315]: time="2025-09-13T00:52:29.772268289Z" level=info msg="CreateContainer within sandbox \"b4579f0e111e5ad7ba31f6720b5e83346d0d10fb85fdbf9a324dd1118140f4da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90d57a7e927f70bfd4e6c8e88315ccb506fbdb186fe1f3b508332b12651b97eb\""
Sep 13 00:52:29.773365 env[1315]: time="2025-09-13T00:52:29.772593808Z" level=info msg="StartContainer for \"90d57a7e927f70bfd4e6c8e88315ccb506fbdb186fe1f3b508332b12651b97eb\""
Sep 13 00:52:29.776265 env[1315]: time="2025-09-13T00:52:29.776224857Z" level=info msg="CreateContainer within sandbox \"10dfcfc67e89594d96137670051f0d8f8c03be369d39fcb056589a2c45f42d0e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"923415d7b31c40e7c16747ecbc66b4e4a084b3c0918df91602ddd76884513b62\""
Sep 13 00:52:29.777138 env[1315]: time="2025-09-13T00:52:29.776666673Z" level=info msg="StartContainer for \"923415d7b31c40e7c16747ecbc66b4e4a084b3c0918df91602ddd76884513b62\""
Sep 13 00:52:29.816686 env[1315]: time="2025-09-13T00:52:29.816619429Z" level=info msg="StartContainer for \"90d57a7e927f70bfd4e6c8e88315ccb506fbdb186fe1f3b508332b12651b97eb\" returns successfully"
Sep 13 00:52:29.821594 env[1315]: time="2025-09-13T00:52:29.821549526Z" level=info msg="StartContainer for \"923415d7b31c40e7c16747ecbc66b4e4a084b3c0918df91602ddd76884513b62\" returns successfully"
Sep 13 00:52:30.314505 kubelet[2070]: E0913 00:52:30.314359 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:30.315995 kubelet[2070]: E0913 00:52:30.315948 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:30.324997 kubelet[2070]: I0913 00:52:30.324931 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-84xtm" podStartSLOduration=22.324898776 podStartE2EDuration="22.324898776s" podCreationTimestamp="2025-09-13 00:52:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:30.324473303 +0000 UTC m=+29.246863783" watchObservedRunningTime="2025-09-13 00:52:30.324898776 +0000 UTC m=+29.247289256"
Sep 13 00:52:31.318071 kubelet[2070]: E0913 00:52:31.318042 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:31.318558 kubelet[2070]: E0913 00:52:31.318162 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:31.814266 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:38734.service.
Sep 13 00:52:31.846642 sshd[3487]: Accepted publickey for core from 10.0.0.1 port 38734 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:52:31.847639 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:31.850956 systemd-logind[1298]: New session 7 of user core.
Sep 13 00:52:31.851679 systemd[1]: Started session-7.scope.
Sep 13 00:52:31.978414 sshd[3487]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:31.980553 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:38734.service: Deactivated successfully.
Sep 13 00:52:31.981627 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:52:31.981670 systemd-logind[1298]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:52:31.982388 systemd-logind[1298]: Removed session 7.
Sep 13 00:52:32.320016 kubelet[2070]: E0913 00:52:32.319960 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:32.320579 kubelet[2070]: E0913 00:52:32.320517 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:52:36.982354 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:38742.service.
Sep 13 00:52:37.013428 sshd[3502]: Accepted publickey for core from 10.0.0.1 port 38742 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:52:37.014637 sshd[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:37.018235 systemd-logind[1298]: New session 8 of user core.
Sep 13 00:52:37.019156 systemd[1]: Started session-8.scope.
Sep 13 00:52:37.131287 sshd[3502]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:37.133500 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:38742.service: Deactivated successfully.
Sep 13 00:52:37.134583 systemd-logind[1298]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:52:37.134659 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:52:37.135352 systemd-logind[1298]: Removed session 8.
Sep 13 00:52:42.134275 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:34854.service.
Sep 13 00:52:42.164086 sshd[3519]: Accepted publickey for core from 10.0.0.1 port 34854 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:52:42.165081 sshd[3519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:42.168226 systemd-logind[1298]: New session 9 of user core.
Sep 13 00:52:42.168877 systemd[1]: Started session-9.scope.
Sep 13 00:52:42.363746 sshd[3519]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:42.366036 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:34854.service: Deactivated successfully.
Sep 13 00:52:42.366889 systemd-logind[1298]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:52:42.366940 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:52:42.367783 systemd-logind[1298]: Removed session 9.
Sep 13 00:52:47.367869 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:34864.service.
Sep 13 00:52:47.406365 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 34864 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:52:47.407614 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:47.411877 systemd-logind[1298]: New session 10 of user core.
Sep 13 00:52:47.412950 systemd[1]: Started session-10.scope.
Sep 13 00:52:47.528017 sshd[3535]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:47.531268 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:34872.service.
Sep 13 00:52:47.531882 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:34864.service: Deactivated successfully.
Sep 13 00:52:47.533020 systemd-logind[1298]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:52:47.533037 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:52:47.533871 systemd-logind[1298]: Removed session 10.
Sep 13 00:52:47.564416 sshd[3550]: Accepted publickey for core from 10.0.0.1 port 34872 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:52:47.565681 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:47.569548 systemd-logind[1298]: New session 11 of user core.
Sep 13 00:52:47.570565 systemd[1]: Started session-11.scope.
Sep 13 00:52:47.736885 sshd[3550]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:47.741461 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:34886.service.
Sep 13 00:52:47.744752 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:34872.service: Deactivated successfully.
Sep 13 00:52:47.746001 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:52:47.746051 systemd-logind[1298]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:52:47.748150 systemd-logind[1298]: Removed session 11.
Sep 13 00:52:47.780152 sshd[3561]: Accepted publickey for core from 10.0.0.1 port 34886 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:52:47.781274 sshd[3561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:47.784803 systemd-logind[1298]: New session 12 of user core.
Sep 13 00:52:47.785557 systemd[1]: Started session-12.scope.
Sep 13 00:52:47.908157 sshd[3561]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:47.910246 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:34886.service: Deactivated successfully.
Sep 13 00:52:47.911418 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:52:47.911894 systemd-logind[1298]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:52:47.912629 systemd-logind[1298]: Removed session 12.
Sep 13 00:52:52.911970 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:36392.service.
Sep 13 00:52:52.943150 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 36392 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:52:52.944088 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:52.947534 systemd-logind[1298]: New session 13 of user core.
Sep 13 00:52:52.948473 systemd[1]: Started session-13.scope.
Sep 13 00:52:53.060945 sshd[3577]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:53.063429 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:36392.service: Deactivated successfully.
Sep 13 00:52:53.064447 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:52:53.065512 systemd-logind[1298]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:52:53.066423 systemd-logind[1298]: Removed session 13.
Sep 13 00:52:58.065130 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:36396.service.
Sep 13 00:52:58.175275 sshd[3591]: Accepted publickey for core from 10.0.0.1 port 36396 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:52:58.176248 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:52:58.180170 systemd-logind[1298]: New session 14 of user core.
Sep 13 00:52:58.180824 systemd[1]: Started session-14.scope.
Sep 13 00:52:58.292456 sshd[3591]: pam_unix(sshd:session): session closed for user core
Sep 13 00:52:58.294595 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:36396.service: Deactivated successfully.
Sep 13 00:52:58.295974 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:52:58.296496 systemd-logind[1298]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:52:58.297407 systemd-logind[1298]: Removed session 14.
Sep 13 00:53:03.296526 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:49576.service.
Sep 13 00:53:03.374176 sshd[3607]: Accepted publickey for core from 10.0.0.1 port 49576 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:53:03.375253 sshd[3607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:03.378188 systemd-logind[1298]: New session 15 of user core.
Sep 13 00:53:03.379123 systemd[1]: Started session-15.scope.
Sep 13 00:53:03.490306 sshd[3607]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:03.492540 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:49578.service.
Sep 13 00:53:03.494668 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:49576.service: Deactivated successfully.
Sep 13 00:53:03.495464 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:53:03.496358 systemd-logind[1298]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:53:03.497142 systemd-logind[1298]: Removed session 15.
Sep 13 00:53:03.524830 sshd[3619]: Accepted publickey for core from 10.0.0.1 port 49578 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:53:03.525870 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:03.529321 systemd-logind[1298]: New session 16 of user core.
Sep 13 00:53:03.530247 systemd[1]: Started session-16.scope.
Sep 13 00:53:03.755010 sshd[3619]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:03.757368 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:49582.service.
Sep 13 00:53:03.758140 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:49578.service: Deactivated successfully.
Sep 13 00:53:03.758686 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:53:03.759553 systemd-logind[1298]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:53:03.760468 systemd-logind[1298]: Removed session 16.
Sep 13 00:53:03.789652 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 49582 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:53:03.790822 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:03.794075 systemd-logind[1298]: New session 17 of user core.
Sep 13 00:53:03.794718 systemd[1]: Started session-17.scope.
Sep 13 00:53:04.846973 sshd[3631]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:04.848700 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:49586.service.
Sep 13 00:53:04.855223 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:49582.service: Deactivated successfully.
Sep 13 00:53:04.856305 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:53:04.856817 systemd-logind[1298]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:53:04.857582 systemd-logind[1298]: Removed session 17.
Sep 13 00:53:04.888779 sshd[3649]: Accepted publickey for core from 10.0.0.1 port 49586 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:53:04.889869 sshd[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:04.892967 systemd-logind[1298]: New session 18 of user core.
Sep 13 00:53:04.893717 systemd[1]: Started session-18.scope.
Sep 13 00:53:05.117440 sshd[3649]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:05.119313 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:49588.service.
Sep 13 00:53:05.121888 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:49586.service: Deactivated successfully.
Sep 13 00:53:05.122485 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:53:05.123314 systemd-logind[1298]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:53:05.125177 systemd-logind[1298]: Removed session 18.
Sep 13 00:53:05.150480 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 49588 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:53:05.151584 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:05.154888 systemd-logind[1298]: New session 19 of user core.
Sep 13 00:53:05.155625 systemd[1]: Started session-19.scope.
Sep 13 00:53:05.258584 sshd[3663]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:05.260739 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:49588.service: Deactivated successfully.
Sep 13 00:53:05.261560 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:53:05.262467 systemd-logind[1298]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:53:05.263183 systemd-logind[1298]: Removed session 19.
Sep 13 00:53:10.262010 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:52860.service.
Sep 13 00:53:10.293043 sshd[3681]: Accepted publickey for core from 10.0.0.1 port 52860 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:53:10.294363 sshd[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:10.297977 systemd-logind[1298]: New session 20 of user core.
Sep 13 00:53:10.298889 systemd[1]: Started session-20.scope.
Sep 13 00:53:10.429020 sshd[3681]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:10.431453 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:52860.service: Deactivated successfully.
Sep 13 00:53:10.432578 systemd-logind[1298]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:53:10.432617 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:53:10.433489 systemd-logind[1298]: Removed session 20.
Sep 13 00:53:11.240725 kubelet[2070]: E0913 00:53:11.240681 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:15.432637 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:52872.service.
Sep 13 00:53:15.462412 sshd[3699]: Accepted publickey for core from 10.0.0.1 port 52872 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:53:15.463298 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:15.466893 systemd-logind[1298]: New session 21 of user core.
Sep 13 00:53:15.467785 systemd[1]: Started session-21.scope.
Sep 13 00:53:15.566978 sshd[3699]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:15.569123 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:52872.service: Deactivated successfully.
Sep 13 00:53:15.570227 systemd-logind[1298]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:53:15.570307 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:53:15.571218 systemd-logind[1298]: Removed session 21.
Sep 13 00:53:16.240843 kubelet[2070]: E0913 00:53:16.240795 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:20.569680 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:58888.service.
Sep 13 00:53:20.601112 sshd[3713]: Accepted publickey for core from 10.0.0.1 port 58888 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:53:20.602374 sshd[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:20.606231 systemd-logind[1298]: New session 22 of user core.
Sep 13 00:53:20.607258 systemd[1]: Started session-22.scope.
Sep 13 00:53:20.713247 sshd[3713]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:20.715654 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:58888.service: Deactivated successfully.
Sep 13 00:53:20.716499 systemd-logind[1298]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:53:20.716584 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:53:20.717691 systemd-logind[1298]: Removed session 22.
Sep 13 00:53:22.240317 kubelet[2070]: E0913 00:53:22.240272 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:25.240434 kubelet[2070]: E0913 00:53:25.240367 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:53:25.717154 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:58898.service.
Sep 13 00:53:25.749743 sshd[3727]: Accepted publickey for core from 10.0.0.1 port 58898 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:53:25.751170 sshd[3727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:25.755021 systemd-logind[1298]: New session 23 of user core.
Sep 13 00:53:25.756073 systemd[1]: Started session-23.scope.
Sep 13 00:53:25.911689 sshd[3727]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:25.914364 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:58900.service.
Sep 13 00:53:25.916803 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:58898.service: Deactivated successfully.
Sep 13 00:53:25.918070 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:53:25.918681 systemd-logind[1298]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:53:25.919576 systemd-logind[1298]: Removed session 23.
Sep 13 00:53:25.949145 sshd[3740]: Accepted publickey for core from 10.0.0.1 port 58900 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA
Sep 13 00:53:25.950128 sshd[3740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:25.953676 systemd-logind[1298]: New session 24 of user core.
Sep 13 00:53:25.954417 systemd[1]: Started session-24.scope.
Sep 13 00:53:27.353883 kubelet[2070]: I0913 00:53:27.353806 2070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xsltc" podStartSLOduration=79.353768336 podStartE2EDuration="1m19.353768336s" podCreationTimestamp="2025-09-13 00:52:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:30.349795967 +0000 UTC m=+29.272186457" watchObservedRunningTime="2025-09-13 00:53:27.353768336 +0000 UTC m=+86.276158826"
Sep 13 00:53:27.359973 env[1315]: time="2025-09-13T00:53:27.359923948Z" level=info msg="StopContainer for \"fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39\" with timeout 30 (s)"
Sep 13 00:53:27.361518 env[1315]: time="2025-09-13T00:53:27.361468987Z" level=info msg="Stop container \"fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39\" with signal terminated"
Sep 13 00:53:27.373321 systemd[1]: run-containerd-runc-k8s.io-6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f-runc.u1tNKb.mount: Deactivated successfully.
Sep 13 00:53:27.391328 env[1315]: time="2025-09-13T00:53:27.391254108Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:53:27.393608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39-rootfs.mount: Deactivated successfully.
Sep 13 00:53:27.397630 env[1315]: time="2025-09-13T00:53:27.397580044Z" level=info msg="StopContainer for \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\" with timeout 2 (s)"
Sep 13 00:53:27.397823 env[1315]: time="2025-09-13T00:53:27.397800914Z" level=info msg="Stop container \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\" with signal terminated"
Sep 13 00:53:27.403094 systemd-networkd[1076]: lxc_health: Link DOWN
Sep 13 00:53:27.403101 systemd-networkd[1076]: lxc_health: Lost carrier
Sep 13 00:53:27.406486 env[1315]: time="2025-09-13T00:53:27.406418220Z" level=info msg="shim disconnected" id=fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39
Sep 13 00:53:27.406561 env[1315]: time="2025-09-13T00:53:27.406493352Z" level=warning msg="cleaning up after shim disconnected" id=fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39 namespace=k8s.io
Sep 13 00:53:27.406561 env[1315]: time="2025-09-13T00:53:27.406517669Z" level=info msg="cleaning up dead shim"
Sep 13 00:53:27.412256 env[1315]: time="2025-09-13T00:53:27.412209850Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3802 runtime=io.containerd.runc.v2\n"
Sep 13 00:53:27.415172 env[1315]: time="2025-09-13T00:53:27.415119144Z" level=info msg="StopContainer for \"fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39\" returns successfully"
Sep 13 00:53:27.415895 env[1315]: time="2025-09-13T00:53:27.415864503Z" level=info msg="StopPodSandbox for \"3decca68c9379a4b918cce6ec75dc7730dbb54b2c646e6250842ad3f84514f5d\""
Sep 13 00:53:27.416038 env[1315]: time="2025-09-13T00:53:27.415995422Z" level=info msg="Container to stop \"fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:53:27.418760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3decca68c9379a4b918cce6ec75dc7730dbb54b2c646e6250842ad3f84514f5d-shm.mount: Deactivated successfully.
Sep 13 00:53:27.444576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3decca68c9379a4b918cce6ec75dc7730dbb54b2c646e6250842ad3f84514f5d-rootfs.mount: Deactivated successfully.
Sep 13 00:53:27.449261 env[1315]: time="2025-09-13T00:53:27.449216150Z" level=info msg="shim disconnected" id=6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f
Sep 13 00:53:27.449868 env[1315]: time="2025-09-13T00:53:27.449838704Z" level=warning msg="cleaning up after shim disconnected" id=6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f namespace=k8s.io
Sep 13 00:53:27.449868 env[1315]: time="2025-09-13T00:53:27.449857389Z" level=info msg="cleaning up dead shim"
Sep 13 00:53:27.450096 env[1315]: time="2025-09-13T00:53:27.449304748Z" level=info msg="shim disconnected" id=3decca68c9379a4b918cce6ec75dc7730dbb54b2c646e6250842ad3f84514f5d
Sep 13 00:53:27.450096 env[1315]: time="2025-09-13T00:53:27.449968992Z" level=warning msg="cleaning up after shim disconnected" id=3decca68c9379a4b918cce6ec75dc7730dbb54b2c646e6250842ad3f84514f5d namespace=k8s.io
Sep 13 00:53:27.450096 env[1315]: time="2025-09-13T00:53:27.449976405Z" level=info msg="cleaning up dead shim"
Sep 13 00:53:27.456303 env[1315]: time="2025-09-13T00:53:27.456261123Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3849 runtime=io.containerd.runc.v2\n"
Sep 13 00:53:27.456510 env[1315]: time="2025-09-13T00:53:27.456484418Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3848 runtime=io.containerd.runc.v2\n"
Sep 13 00:53:27.456778 env[1315]: time="2025-09-13T00:53:27.456743020Z" level=info msg="TearDown network for sandbox \"3decca68c9379a4b918cce6ec75dc7730dbb54b2c646e6250842ad3f84514f5d\" successfully"
Sep 13 00:53:27.456862 env[1315]: time="2025-09-13T00:53:27.456843982Z" level=info msg="StopPodSandbox for \"3decca68c9379a4b918cce6ec75dc7730dbb54b2c646e6250842ad3f84514f5d\" returns successfully"
Sep 13 00:53:27.459037 env[1315]: time="2025-09-13T00:53:27.458994133Z" level=info msg="StopContainer for \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\" returns successfully"
Sep 13 00:53:27.459662 env[1315]: time="2025-09-13T00:53:27.459468516Z" level=info msg="StopPodSandbox for \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\""
Sep 13 00:53:27.459662 env[1315]: time="2025-09-13T00:53:27.459534181Z" level=info msg="Container to stop \"05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:53:27.459662 env[1315]: time="2025-09-13T00:53:27.459548769Z" level=info msg="Container to stop \"fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:53:27.459662 env[1315]: time="2025-09-13T00:53:27.459557525Z" level=info msg="Container to stop \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:53:27.459662 env[1315]: time="2025-09-13T00:53:27.459568086Z" level=info msg="Container to stop \"95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:53:27.459662 env[1315]: time="2025-09-13T00:53:27.459577293Z" level=info msg="Container to stop \"5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:53:27.480641 env[1315]: time="2025-09-13T00:53:27.480583530Z" level=info msg="shim disconnected" id=91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc
Sep 13 00:53:27.480641 env[1315]: time="2025-09-13T00:53:27.480635690Z" level=warning msg="cleaning up after shim disconnected" id=91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc namespace=k8s.io
Sep 13 00:53:27.480851 env[1315]: time="2025-09-13T00:53:27.480645789Z" level=info msg="cleaning up dead shim"
Sep 13 00:53:27.486232 env[1315]: time="2025-09-13T00:53:27.486195939Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3893 runtime=io.containerd.runc.v2\n"
Sep 13 00:53:27.486523 env[1315]: time="2025-09-13T00:53:27.486490409Z" level=info msg="TearDown network for sandbox \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\" successfully"
Sep 13 00:53:27.486523 env[1315]: time="2025-09-13T00:53:27.486517321Z" level=info msg="StopPodSandbox for \"91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc\" returns successfully"
Sep 13 00:53:27.560147 kubelet[2070]: I0913 00:53:27.560093 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2b53021-2597-475b-ac6f-62830e202f36-hubble-tls\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560147 kubelet[2070]: I0913 00:53:27.560130 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-lib-modules\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560147 kubelet[2070]: I0913 00:53:27.560143 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-cilium-cgroup\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560147 kubelet[2070]: I0913 00:53:27.560158 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-bpf-maps\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560405 kubelet[2070]: I0913 00:53:27.560172 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltctn\" (UniqueName: \"kubernetes.io/projected/0dce787e-94cc-4ec8-8328-562c862576bf-kube-api-access-ltctn\") pod \"0dce787e-94cc-4ec8-8328-562c862576bf\" (UID: \"0dce787e-94cc-4ec8-8328-562c862576bf\") "
Sep 13 00:53:27.560405 kubelet[2070]: I0913 00:53:27.560187 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-xtables-lock\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560405 kubelet[2070]: I0913 00:53:27.560202 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0dce787e-94cc-4ec8-8328-562c862576bf-cilium-config-path\") pod \"0dce787e-94cc-4ec8-8328-562c862576bf\" (UID: \"0dce787e-94cc-4ec8-8328-562c862576bf\") "
Sep 13 00:53:27.560405 kubelet[2070]: I0913 00:53:27.560217 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-cni-path\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560405 kubelet[2070]: I0913 00:53:27.560229 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-host-proc-sys-kernel\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560405 kubelet[2070]: I0913 00:53:27.560240 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-cilium-run\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560543 kubelet[2070]: I0913 00:53:27.560255 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2b53021-2597-475b-ac6f-62830e202f36-clustermesh-secrets\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560543 kubelet[2070]: I0913 00:53:27.560269 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7xgr\" (UniqueName: \"kubernetes.io/projected/e2b53021-2597-475b-ac6f-62830e202f36-kube-api-access-j7xgr\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560543 kubelet[2070]: I0913 00:53:27.560285 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-etc-cni-netd\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560543 kubelet[2070]: I0913 00:53:27.560296 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-host-proc-sys-net\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560543 kubelet[2070]: I0913 00:53:27.560320 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2b53021-2597-475b-ac6f-62830e202f36-cilium-config-path\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560543 kubelet[2070]: I0913 00:53:27.560333 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-hostproc\") pod \"e2b53021-2597-475b-ac6f-62830e202f36\" (UID: \"e2b53021-2597-475b-ac6f-62830e202f36\") "
Sep 13 00:53:27.560678 kubelet[2070]: I0913 00:53:27.560387 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-hostproc" (OuterVolumeSpecName: "hostproc") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:53:27.560678 kubelet[2070]: I0913 00:53:27.560419 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:53:27.560678 kubelet[2070]: I0913 00:53:27.560502 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:53:27.560678 kubelet[2070]: I0913 00:53:27.560564 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:53:27.560678 kubelet[2070]: I0913 00:53:27.560590 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:53:27.560787 kubelet[2070]: I0913 00:53:27.560731 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:53:27.560787 kubelet[2070]: I0913 00:53:27.560756 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-cni-path" (OuterVolumeSpecName: "cni-path") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:53:27.560787 kubelet[2070]: I0913 00:53:27.560772 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:53:27.560858 kubelet[2070]: I0913 00:53:27.560787 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:53:27.562178 kubelet[2070]: I0913 00:53:27.562152 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dce787e-94cc-4ec8-8328-562c862576bf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0dce787e-94cc-4ec8-8328-562c862576bf" (UID: "0dce787e-94cc-4ec8-8328-562c862576bf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:53:27.562242 kubelet[2070]: I0913 00:53:27.562186 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:53:27.562885 kubelet[2070]: I0913 00:53:27.562860 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dce787e-94cc-4ec8-8328-562c862576bf-kube-api-access-ltctn" (OuterVolumeSpecName: "kube-api-access-ltctn") pod "0dce787e-94cc-4ec8-8328-562c862576bf" (UID: "0dce787e-94cc-4ec8-8328-562c862576bf"). InnerVolumeSpecName "kube-api-access-ltctn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:53:27.563020 kubelet[2070]: I0913 00:53:27.562993 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b53021-2597-475b-ac6f-62830e202f36-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:53:27.563121 kubelet[2070]: I0913 00:53:27.563066 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b53021-2597-475b-ac6f-62830e202f36-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:53:27.563878 kubelet[2070]: I0913 00:53:27.563858 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b53021-2597-475b-ac6f-62830e202f36-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:53:27.564317 kubelet[2070]: I0913 00:53:27.564286 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b53021-2597-475b-ac6f-62830e202f36-kube-api-access-j7xgr" (OuterVolumeSpecName: "kube-api-access-j7xgr") pod "e2b53021-2597-475b-ac6f-62830e202f36" (UID: "e2b53021-2597-475b-ac6f-62830e202f36"). InnerVolumeSpecName "kube-api-access-j7xgr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:53:27.660828 kubelet[2070]: I0913 00:53:27.660692 2070 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.660828 kubelet[2070]: I0913 00:53:27.660725 2070 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0dce787e-94cc-4ec8-8328-562c862576bf-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.660828 kubelet[2070]: I0913 00:53:27.660734 2070 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.660828 kubelet[2070]: I0913 00:53:27.660745 2070 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.660828 kubelet[2070]: I0913 00:53:27.660752 2070 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.660828 kubelet[2070]: I0913 00:53:27.660760 2070 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2b53021-2597-475b-ac6f-62830e202f36-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.660828 kubelet[2070]: I0913 00:53:27.660768 2070 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7xgr\" (UniqueName: \"kubernetes.io/projected/e2b53021-2597-475b-ac6f-62830e202f36-kube-api-access-j7xgr\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.660828 kubelet[2070]: I0913 00:53:27.660778 2070 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.661252 kubelet[2070]: I0913 00:53:27.660786 2070 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.661252 kubelet[2070]: I0913 00:53:27.660798 2070 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2b53021-2597-475b-ac6f-62830e202f36-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.661252 kubelet[2070]: I0913 00:53:27.660815 2070 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.661252 kubelet[2070]: I0913 00:53:27.660826 2070 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2b53021-2597-475b-ac6f-62830e202f36-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.661252 kubelet[2070]: I0913 00:53:27.660833 2070 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.661252 kubelet[2070]: I0913 00:53:27.660840 2070 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.661252 kubelet[2070]: I0913 00:53:27.660846 2070 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2b53021-2597-475b-ac6f-62830e202f36-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:27.661252 kubelet[2070]: I0913 00:53:27.660854 2070 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltctn\" (UniqueName: \"kubernetes.io/projected/0dce787e-94cc-4ec8-8328-562c862576bf-kube-api-access-ltctn\") on node \"localhost\" DevicePath \"\""
Sep 13 00:53:28.366996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f-rootfs.mount: Deactivated successfully.
Sep 13 00:53:28.367180 systemd[1]: var-lib-kubelet-pods-0dce787e\x2d94cc\x2d4ec8\x2d8328\x2d562c862576bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dltctn.mount: Deactivated successfully.
Sep 13 00:53:28.367267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc-rootfs.mount: Deactivated successfully.
Sep 13 00:53:28.367341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91a9c5179f1fea4b12b22155cf316946ad1708bb2d3069240de404a3e85f3cbc-shm.mount: Deactivated successfully.
Sep 13 00:53:28.367426 systemd[1]: var-lib-kubelet-pods-e2b53021\x2d2597\x2d475b\x2dac6f\x2d62830e202f36-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj7xgr.mount: Deactivated successfully.
Sep 13 00:53:28.367507 systemd[1]: var-lib-kubelet-pods-e2b53021\x2d2597\x2d475b\x2dac6f\x2d62830e202f36-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 00:53:28.367584 systemd[1]: var-lib-kubelet-pods-e2b53021\x2d2597\x2d475b\x2dac6f\x2d62830e202f36-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:53:28.426828 kubelet[2070]: I0913 00:53:28.426797 2070 scope.go:117] "RemoveContainer" containerID="fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39"
Sep 13 00:53:28.428226 env[1315]: time="2025-09-13T00:53:28.428138100Z" level=info msg="RemoveContainer for \"fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39\""
Sep 13 00:53:28.736501 env[1315]: time="2025-09-13T00:53:28.736353551Z" level=info msg="RemoveContainer for \"fcf028a5384d22d3e293ea2797db43fdf429c7e39d633eec42d3066591066c39\" returns successfully"
Sep 13 00:53:28.737718 kubelet[2070]: I0913 00:53:28.736868 2070 scope.go:117] "RemoveContainer" containerID="6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f"
Sep 13 00:53:28.739196 env[1315]: time="2025-09-13T00:53:28.739154160Z" level=info msg="RemoveContainer for \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\""
Sep 13 00:53:28.812171 env[1315]: time="2025-09-13T00:53:28.812042259Z" level=info msg="RemoveContainer for \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\" returns successfully"
Sep 13 00:53:28.812566 kubelet[2070]: I0913 00:53:28.812510 2070 scope.go:117] "RemoveContainer" containerID="5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d"
Sep 13 00:53:28.813780 env[1315]: time="2025-09-13T00:53:28.813731263Z" level=info msg="RemoveContainer for \"5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d\""
Sep 13 00:53:28.817287 env[1315]: time="2025-09-13T00:53:28.817255109Z" level=info msg="RemoveContainer for \"5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d\" returns successfully"
Sep 13 00:53:28.817432 kubelet[2070]: I0913 00:53:28.817399 2070 scope.go:117] "RemoveContainer" containerID="fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398"
Sep 13 00:53:28.818273 env[1315]: time="2025-09-13T00:53:28.818243940Z" level=info msg="RemoveContainer for \"fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398\""
Sep 13 00:53:28.821529 env[1315]: time="2025-09-13T00:53:28.821479727Z" level=info msg="RemoveContainer for \"fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398\" returns successfully"
Sep 13 00:53:28.821743 kubelet[2070]: I0913 00:53:28.821702 2070 scope.go:117] "RemoveContainer" containerID="05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0"
Sep 13 00:53:28.822729 env[1315]: time="2025-09-13T00:53:28.822704167Z" level=info msg="RemoveContainer for \"05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0\""
Sep 13 00:53:28.825356 env[1315]: time="2025-09-13T00:53:28.825327970Z" level=info msg="RemoveContainer for \"05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0\" returns successfully"
Sep 13 00:53:28.825494 kubelet[2070]: I0913 00:53:28.825468 2070 scope.go:117] "RemoveContainer" containerID="95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381"
Sep 13 00:53:28.826521 env[1315]: time="2025-09-13T00:53:28.826481495Z" level=info msg="RemoveContainer for \"95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381\""
Sep 13 00:53:28.829323 env[1315]: time="2025-09-13T00:53:28.829288617Z" level=info
msg="RemoveContainer for \"95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381\" returns successfully" Sep 13 00:53:28.829468 kubelet[2070]: I0913 00:53:28.829447 2070 scope.go:117] "RemoveContainer" containerID="6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f" Sep 13 00:53:28.829726 env[1315]: time="2025-09-13T00:53:28.829633062Z" level=error msg="ContainerStatus for \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\": not found" Sep 13 00:53:28.829859 kubelet[2070]: E0913 00:53:28.829833 2070 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\": not found" containerID="6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f" Sep 13 00:53:28.830005 kubelet[2070]: I0913 00:53:28.829871 2070 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f"} err="failed to get container status \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f2feb4d885aafe037755f2e7af57d79ab8273725b69fcf3b6710ca66c924f9f\": not found" Sep 13 00:53:28.830005 kubelet[2070]: I0913 00:53:28.829998 2070 scope.go:117] "RemoveContainer" containerID="5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d" Sep 13 00:53:28.830222 env[1315]: time="2025-09-13T00:53:28.830169222Z" level=error msg="ContainerStatus for \"5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d\": not found" Sep 13 00:53:28.830345 kubelet[2070]: E0913 00:53:28.830318 2070 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d\": not found" containerID="5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d" Sep 13 00:53:28.830402 kubelet[2070]: I0913 00:53:28.830351 2070 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d"} err="failed to get container status \"5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5cc0a43d221688a85f72a23e773217a676ec77310b56fa23bd314ff5b7f1d70d\": not found" Sep 13 00:53:28.830402 kubelet[2070]: I0913 00:53:28.830378 2070 scope.go:117] "RemoveContainer" containerID="fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398" Sep 13 00:53:28.830553 env[1315]: time="2025-09-13T00:53:28.830517305Z" level=error msg="ContainerStatus for \"fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398\": not found" Sep 13 00:53:28.830699 kubelet[2070]: E0913 00:53:28.830678 2070 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398\": not found" containerID="fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398" Sep 13 00:53:28.830699 kubelet[2070]: I0913 00:53:28.830695 2070 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398"} err="failed to get container status \"fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa70d000234bb3edfbbcad82e3352dbf5e731554cb98eb9f7cd127b66e30b398\": not found" Sep 13 00:53:28.830794 kubelet[2070]: I0913 00:53:28.830707 2070 scope.go:117] "RemoveContainer" containerID="05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0" Sep 13 00:53:28.830874 env[1315]: time="2025-09-13T00:53:28.830838185Z" level=error msg="ContainerStatus for \"05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0\": not found" Sep 13 00:53:28.830991 kubelet[2070]: E0913 00:53:28.830969 2070 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0\": not found" containerID="05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0" Sep 13 00:53:28.831039 kubelet[2070]: I0913 00:53:28.830996 2070 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0"} err="failed to get container status \"05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0\": rpc error: code = NotFound desc = an error occurred when try to find container \"05c7d3246fc10338564b3a2d66a5ffaa3441832bab1b181ca024d1610eaf5be0\": not found" Sep 13 00:53:28.831039 kubelet[2070]: I0913 00:53:28.831024 2070 scope.go:117] "RemoveContainer" containerID="95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381" Sep 13 00:53:28.831269 env[1315]: 
time="2025-09-13T00:53:28.831200524Z" level=error msg="ContainerStatus for \"95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381\": not found" Sep 13 00:53:28.831375 kubelet[2070]: E0913 00:53:28.831350 2070 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381\": not found" containerID="95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381" Sep 13 00:53:28.831425 kubelet[2070]: I0913 00:53:28.831376 2070 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381"} err="failed to get container status \"95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381\": rpc error: code = NotFound desc = an error occurred when try to find container \"95e94225c3e168ca47dd22a6b10c87a02bca1ac2e3bfa88bcffd09dd5d1cc381\": not found" Sep 13 00:53:29.241456 kubelet[2070]: I0913 00:53:29.241405 2070 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dce787e-94cc-4ec8-8328-562c862576bf" path="/var/lib/kubelet/pods/0dce787e-94cc-4ec8-8328-562c862576bf/volumes" Sep 13 00:53:29.241769 kubelet[2070]: I0913 00:53:29.241742 2070 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b53021-2597-475b-ac6f-62830e202f36" path="/var/lib/kubelet/pods/e2b53021-2597-475b-ac6f-62830e202f36/volumes" Sep 13 00:53:29.422187 sshd[3740]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:29.424348 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:58914.service. Sep 13 00:53:29.425866 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:58900.service: Deactivated successfully. 
Sep 13 00:53:29.426638 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:53:29.426730 systemd-logind[1298]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:53:29.427641 systemd-logind[1298]: Removed session 24. Sep 13 00:53:29.456849 sshd[3909]: Accepted publickey for core from 10.0.0.1 port 58914 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:53:29.458134 sshd[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:29.461500 systemd-logind[1298]: New session 25 of user core. Sep 13 00:53:29.462220 systemd[1]: Started session-25.scope. Sep 13 00:53:30.316981 sshd[3909]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:30.318513 systemd[1]: Started sshd@25-10.0.0.117:22-10.0.0.1:47116.service. Sep 13 00:53:30.331782 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:58914.service: Deactivated successfully. Sep 13 00:53:30.333452 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:53:30.334360 systemd-logind[1298]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:53:30.335186 systemd-logind[1298]: Removed session 25. 
Sep 13 00:53:30.336467 kubelet[2070]: E0913 00:53:30.336445 2070 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2b53021-2597-475b-ac6f-62830e202f36" containerName="mount-bpf-fs" Sep 13 00:53:30.336787 kubelet[2070]: E0913 00:53:30.336773 2070 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2b53021-2597-475b-ac6f-62830e202f36" containerName="cilium-agent" Sep 13 00:53:30.336886 kubelet[2070]: E0913 00:53:30.336873 2070 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0dce787e-94cc-4ec8-8328-562c862576bf" containerName="cilium-operator" Sep 13 00:53:30.337796 kubelet[2070]: E0913 00:53:30.337397 2070 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2b53021-2597-475b-ac6f-62830e202f36" containerName="clean-cilium-state" Sep 13 00:53:30.337901 kubelet[2070]: E0913 00:53:30.337870 2070 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2b53021-2597-475b-ac6f-62830e202f36" containerName="mount-cgroup" Sep 13 00:53:30.337901 kubelet[2070]: E0913 00:53:30.337887 2070 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2b53021-2597-475b-ac6f-62830e202f36" containerName="apply-sysctl-overwrites" Sep 13 00:53:30.337901 kubelet[2070]: I0913 00:53:30.337929 2070 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b53021-2597-475b-ac6f-62830e202f36" containerName="cilium-agent" Sep 13 00:53:30.338164 kubelet[2070]: I0913 00:53:30.337945 2070 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dce787e-94cc-4ec8-8328-562c862576bf" containerName="cilium-operator" Sep 13 00:53:30.363275 sshd[3922]: Accepted publickey for core from 10.0.0.1 port 47116 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:53:30.363983 sshd[3922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:30.378174 kubelet[2070]: I0913 00:53:30.377990 2070 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cni-path\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.378410 kubelet[2070]: I0913 00:53:30.378388 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-xtables-lock\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.378533 kubelet[2070]: I0913 00:53:30.378491 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-host-proc-sys-net\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.378882 kubelet[2070]: I0913 00:53:30.378856 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-hostproc\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.379387 kubelet[2070]: I0913 00:53:30.379370 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-host-proc-sys-kernel\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.379483 kubelet[2070]: I0913 00:53:30.379466 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-etc-cni-netd\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.379568 kubelet[2070]: I0913 00:53:30.379552 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-ipsec-secrets\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.379665 kubelet[2070]: I0913 00:53:30.379648 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p74j\" (UniqueName: \"kubernetes.io/projected/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-kube-api-access-5p74j\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.379763 kubelet[2070]: I0913 00:53:30.379746 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-config-path\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.379858 kubelet[2070]: I0913 00:53:30.379840 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-cgroup\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.379972 kubelet[2070]: I0913 00:53:30.379956 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-run\") pod \"cilium-sjbzl\" 
(UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.380087 kubelet[2070]: I0913 00:53:30.380068 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-lib-modules\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.380196 kubelet[2070]: I0913 00:53:30.380177 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-clustermesh-secrets\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.380293 kubelet[2070]: I0913 00:53:30.380277 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-hubble-tls\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.380380 kubelet[2070]: I0913 00:53:30.380364 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-bpf-maps\") pod \"cilium-sjbzl\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " pod="kube-system/cilium-sjbzl" Sep 13 00:53:30.400184 systemd[1]: Started session-26.scope. Sep 13 00:53:30.400531 systemd-logind[1298]: New session 26 of user core. Sep 13 00:53:30.602840 sshd[3922]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:30.605170 systemd[1]: Started sshd@26-10.0.0.117:22-10.0.0.1:47122.service. Sep 13 00:53:30.606129 systemd[1]: sshd@25-10.0.0.117:22-10.0.0.1:47116.service: Deactivated successfully. 
Sep 13 00:53:30.607415 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:53:30.607959 systemd-logind[1298]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:53:30.609598 systemd-logind[1298]: Removed session 26. Sep 13 00:53:30.623144 kubelet[2070]: E0913 00:53:30.621817 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:30.623295 env[1315]: time="2025-09-13T00:53:30.622374006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sjbzl,Uid:6e04939f-95d2-4a3d-b4c2-3838eeff2d65,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:30.640947 sshd[3940]: Accepted publickey for core from 10.0.0.1 port 47122 ssh2: RSA SHA256:CR+JM5wLnrC3kI7UG7YAo/UCxAY2Mc7qc50wGPy2QIA Sep 13 00:53:30.641232 env[1315]: time="2025-09-13T00:53:30.640196980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:30.641232 env[1315]: time="2025-09-13T00:53:30.640228260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:30.641232 env[1315]: time="2025-09-13T00:53:30.640237076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:30.641232 env[1315]: time="2025-09-13T00:53:30.640358296Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b63dfae159b8c124c9f2461f0e992a1a99ecaae1acb0cc09c664d2d7730b1b0c pid=3951 runtime=io.containerd.runc.v2 Sep 13 00:53:30.640645 sshd[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:30.650090 systemd[1]: Started session-27.scope. Sep 13 00:53:30.650520 systemd-logind[1298]: New session 27 of user core. 
Sep 13 00:53:30.673427 env[1315]: time="2025-09-13T00:53:30.673369519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sjbzl,Uid:6e04939f-95d2-4a3d-b4c2-3838eeff2d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"b63dfae159b8c124c9f2461f0e992a1a99ecaae1acb0cc09c664d2d7730b1b0c\"" Sep 13 00:53:30.674225 kubelet[2070]: E0913 00:53:30.674179 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:30.677131 env[1315]: time="2025-09-13T00:53:30.677077055Z" level=info msg="CreateContainer within sandbox \"b63dfae159b8c124c9f2461f0e992a1a99ecaae1acb0cc09c664d2d7730b1b0c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:53:30.691839 env[1315]: time="2025-09-13T00:53:30.691786092Z" level=info msg="CreateContainer within sandbox \"b63dfae159b8c124c9f2461f0e992a1a99ecaae1acb0cc09c664d2d7730b1b0c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"57de4869315fd946701e39992e3eb31f092e2ff95574a794a310d7c510de9fc6\"" Sep 13 00:53:30.692354 env[1315]: time="2025-09-13T00:53:30.692307925Z" level=info msg="StartContainer for \"57de4869315fd946701e39992e3eb31f092e2ff95574a794a310d7c510de9fc6\"" Sep 13 00:53:30.875416 env[1315]: time="2025-09-13T00:53:30.875285076Z" level=info msg="StartContainer for \"57de4869315fd946701e39992e3eb31f092e2ff95574a794a310d7c510de9fc6\" returns successfully" Sep 13 00:53:31.291530 kubelet[2070]: E0913 00:53:31.291427 2070 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:53:31.394517 env[1315]: time="2025-09-13T00:53:31.394456841Z" level=info msg="shim disconnected" id=57de4869315fd946701e39992e3eb31f092e2ff95574a794a310d7c510de9fc6 Sep 13 00:53:31.394517 env[1315]: time="2025-09-13T00:53:31.394510753Z" 
level=warning msg="cleaning up after shim disconnected" id=57de4869315fd946701e39992e3eb31f092e2ff95574a794a310d7c510de9fc6 namespace=k8s.io Sep 13 00:53:31.394517 env[1315]: time="2025-09-13T00:53:31.394519931Z" level=info msg="cleaning up dead shim" Sep 13 00:53:31.401834 env[1315]: time="2025-09-13T00:53:31.401796141Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4043 runtime=io.containerd.runc.v2\n" Sep 13 00:53:31.438935 env[1315]: time="2025-09-13T00:53:31.438886225Z" level=info msg="StopPodSandbox for \"b63dfae159b8c124c9f2461f0e992a1a99ecaae1acb0cc09c664d2d7730b1b0c\"" Sep 13 00:53:31.439210 env[1315]: time="2025-09-13T00:53:31.438958352Z" level=info msg="Container to stop \"57de4869315fd946701e39992e3eb31f092e2ff95574a794a310d7c510de9fc6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:53:31.465092 env[1315]: time="2025-09-13T00:53:31.465016385Z" level=info msg="shim disconnected" id=b63dfae159b8c124c9f2461f0e992a1a99ecaae1acb0cc09c664d2d7730b1b0c Sep 13 00:53:31.465092 env[1315]: time="2025-09-13T00:53:31.465069286Z" level=warning msg="cleaning up after shim disconnected" id=b63dfae159b8c124c9f2461f0e992a1a99ecaae1acb0cc09c664d2d7730b1b0c namespace=k8s.io Sep 13 00:53:31.465092 env[1315]: time="2025-09-13T00:53:31.465079255Z" level=info msg="cleaning up dead shim" Sep 13 00:53:31.471873 env[1315]: time="2025-09-13T00:53:31.471812443Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4075 runtime=io.containerd.runc.v2\n" Sep 13 00:53:31.472239 env[1315]: time="2025-09-13T00:53:31.472180002Z" level=info msg="TearDown network for sandbox \"b63dfae159b8c124c9f2461f0e992a1a99ecaae1acb0cc09c664d2d7730b1b0c\" successfully" Sep 13 00:53:31.472239 env[1315]: time="2025-09-13T00:53:31.472211392Z" level=info msg="StopPodSandbox for 
\"b63dfae159b8c124c9f2461f0e992a1a99ecaae1acb0cc09c664d2d7730b1b0c\" returns successfully" Sep 13 00:53:31.485901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b63dfae159b8c124c9f2461f0e992a1a99ecaae1acb0cc09c664d2d7730b1b0c-rootfs.mount: Deactivated successfully. Sep 13 00:53:31.486059 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b63dfae159b8c124c9f2461f0e992a1a99ecaae1acb0cc09c664d2d7730b1b0c-shm.mount: Deactivated successfully. Sep 13 00:53:31.589993 kubelet[2070]: I0913 00:53:31.589947 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cni-path\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.589993 kubelet[2070]: I0913 00:53:31.590000 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-hostproc\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590453 kubelet[2070]: I0913 00:53:31.590039 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-bpf-maps\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590453 kubelet[2070]: I0913 00:53:31.590068 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-ipsec-secrets\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590453 kubelet[2070]: I0913 00:53:31.590095 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-hubble-tls\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590453 kubelet[2070]: I0913 00:53:31.590112 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-xtables-lock\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590453 kubelet[2070]: I0913 00:53:31.590135 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p74j\" (UniqueName: \"kubernetes.io/projected/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-kube-api-access-5p74j\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590453 kubelet[2070]: I0913 00:53:31.590153 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-cgroup\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590595 kubelet[2070]: I0913 00:53:31.590171 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-run\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590595 kubelet[2070]: I0913 00:53:31.590166 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-hostproc" (OuterVolumeSpecName: "hostproc") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:31.590595 kubelet[2070]: I0913 00:53:31.590156 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cni-path" (OuterVolumeSpecName: "cni-path") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:31.590595 kubelet[2070]: I0913 00:53:31.590192 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-host-proc-sys-kernel\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590595 kubelet[2070]: I0913 00:53:31.590266 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-config-path\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590595 kubelet[2070]: I0913 00:53:31.590290 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-etc-cni-netd\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590732 kubelet[2070]: I0913 00:53:31.590305 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-lib-modules\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590732 kubelet[2070]: I0913 00:53:31.590319 2070 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-host-proc-sys-net\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590732 kubelet[2070]: I0913 00:53:31.590336 2070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-clustermesh-secrets\") pod \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\" (UID: \"6e04939f-95d2-4a3d-b4c2-3838eeff2d65\") " Sep 13 00:53:31.590732 kubelet[2070]: I0913 00:53:31.590381 2070 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.590732 kubelet[2070]: I0913 00:53:31.590390 2070 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.592787 kubelet[2070]: I0913 00:53:31.590216 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:31.592787 kubelet[2070]: I0913 00:53:31.590235 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:31.592787 kubelet[2070]: I0913 00:53:31.590240 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:31.592787 kubelet[2070]: I0913 00:53:31.591027 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:31.592787 kubelet[2070]: I0913 00:53:31.590978 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:31.594337 kubelet[2070]: I0913 00:53:31.590984 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:31.594337 kubelet[2070]: I0913 00:53:31.591094 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:31.594337 kubelet[2070]: I0913 00:53:31.591118 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:53:31.594337 kubelet[2070]: I0913 00:53:31.592761 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:53:31.594337 kubelet[2070]: I0913 00:53:31.593781 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-kube-api-access-5p74j" (OuterVolumeSpecName: "kube-api-access-5p74j") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "kube-api-access-5p74j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:53:31.595637 systemd[1]: var-lib-kubelet-pods-6e04939f\x2d95d2\x2d4a3d\x2db4c2\x2d3838eeff2d65-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5p74j.mount: Deactivated successfully. Sep 13 00:53:31.596841 kubelet[2070]: I0913 00:53:31.596209 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:53:31.596841 kubelet[2070]: I0913 00:53:31.596334 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:53:31.595952 systemd[1]: var-lib-kubelet-pods-6e04939f\x2d95d2\x2d4a3d\x2db4c2\x2d3838eeff2d65-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 00:53:31.596102 systemd[1]: var-lib-kubelet-pods-6e04939f\x2d95d2\x2d4a3d\x2db4c2\x2d3838eeff2d65-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:53:31.597335 kubelet[2070]: I0913 00:53:31.597303 2070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6e04939f-95d2-4a3d-b4c2-3838eeff2d65" (UID: "6e04939f-95d2-4a3d-b4c2-3838eeff2d65"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:53:31.599666 systemd[1]: var-lib-kubelet-pods-6e04939f\x2d95d2\x2d4a3d\x2db4c2\x2d3838eeff2d65-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:53:31.690819 kubelet[2070]: I0913 00:53:31.690760 2070 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.690819 kubelet[2070]: I0913 00:53:31.690796 2070 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.690819 kubelet[2070]: I0913 00:53:31.690803 2070 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.690819 kubelet[2070]: I0913 00:53:31.690810 2070 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.690819 kubelet[2070]: I0913 00:53:31.690818 2070 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.690819 kubelet[2070]: I0913 00:53:31.690825 2070 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.690819 kubelet[2070]: I0913 00:53:31.690832 2070 reconciler_common.go:293] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.690819 kubelet[2070]: I0913 00:53:31.690839 2070 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.691271 kubelet[2070]: I0913 00:53:31.690847 2070 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p74j\" (UniqueName: \"kubernetes.io/projected/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-kube-api-access-5p74j\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.691271 kubelet[2070]: I0913 00:53:31.690854 2070 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.691271 kubelet[2070]: I0913 00:53:31.690861 2070 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.691271 kubelet[2070]: I0913 00:53:31.690868 2070 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:31.691271 kubelet[2070]: I0913 00:53:31.690877 2070 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e04939f-95d2-4a3d-b4c2-3838eeff2d65-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:53:32.441229 kubelet[2070]: I0913 00:53:32.441199 2070 scope.go:117] "RemoveContainer" containerID="57de4869315fd946701e39992e3eb31f092e2ff95574a794a310d7c510de9fc6" Sep 13 00:53:32.442000 env[1315]: 
time="2025-09-13T00:53:32.441950145Z" level=info msg="RemoveContainer for \"57de4869315fd946701e39992e3eb31f092e2ff95574a794a310d7c510de9fc6\"" Sep 13 00:53:32.445549 env[1315]: time="2025-09-13T00:53:32.445504241Z" level=info msg="RemoveContainer for \"57de4869315fd946701e39992e3eb31f092e2ff95574a794a310d7c510de9fc6\" returns successfully" Sep 13 00:53:32.470643 kubelet[2070]: E0913 00:53:32.470588 2070 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e04939f-95d2-4a3d-b4c2-3838eeff2d65" containerName="mount-cgroup" Sep 13 00:53:32.470643 kubelet[2070]: I0913 00:53:32.470636 2070 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e04939f-95d2-4a3d-b4c2-3838eeff2d65" containerName="mount-cgroup" Sep 13 00:53:32.504130 kubelet[2070]: I0913 00:53:32.495595 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b08b66cd-7817-4b8f-8ccb-78c19a62a802-cni-path\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504130 kubelet[2070]: I0913 00:53:32.495642 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b08b66cd-7817-4b8f-8ccb-78c19a62a802-cilium-cgroup\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504130 kubelet[2070]: I0913 00:53:32.495661 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b08b66cd-7817-4b8f-8ccb-78c19a62a802-clustermesh-secrets\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504130 kubelet[2070]: I0913 00:53:32.495676 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b08b66cd-7817-4b8f-8ccb-78c19a62a802-cilium-config-path\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504130 kubelet[2070]: I0913 00:53:32.495690 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b08b66cd-7817-4b8f-8ccb-78c19a62a802-cilium-ipsec-secrets\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504130 kubelet[2070]: I0913 00:53:32.495705 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b08b66cd-7817-4b8f-8ccb-78c19a62a802-bpf-maps\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504451 kubelet[2070]: I0913 00:53:32.495718 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b08b66cd-7817-4b8f-8ccb-78c19a62a802-host-proc-sys-net\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504451 kubelet[2070]: I0913 00:53:32.495734 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b08b66cd-7817-4b8f-8ccb-78c19a62a802-host-proc-sys-kernel\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504451 kubelet[2070]: I0913 00:53:32.495747 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b08b66cd-7817-4b8f-8ccb-78c19a62a802-xtables-lock\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504451 kubelet[2070]: I0913 00:53:32.495759 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b08b66cd-7817-4b8f-8ccb-78c19a62a802-cilium-run\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504451 kubelet[2070]: I0913 00:53:32.495774 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b08b66cd-7817-4b8f-8ccb-78c19a62a802-hostproc\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504451 kubelet[2070]: I0913 00:53:32.495789 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b08b66cd-7817-4b8f-8ccb-78c19a62a802-lib-modules\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504640 kubelet[2070]: I0913 00:53:32.495808 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b08b66cd-7817-4b8f-8ccb-78c19a62a802-hubble-tls\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504640 kubelet[2070]: I0913 00:53:32.495820 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b08b66cd-7817-4b8f-8ccb-78c19a62a802-etc-cni-netd\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " 
pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.504640 kubelet[2070]: I0913 00:53:32.495832 2070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cvq9\" (UniqueName: \"kubernetes.io/projected/b08b66cd-7817-4b8f-8ccb-78c19a62a802-kube-api-access-2cvq9\") pod \"cilium-hpkwj\" (UID: \"b08b66cd-7817-4b8f-8ccb-78c19a62a802\") " pod="kube-system/cilium-hpkwj" Sep 13 00:53:32.695654 kubelet[2070]: I0913 00:53:32.695522 2070 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:53:32Z","lastTransitionTime":"2025-09-13T00:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:53:32.773259 kubelet[2070]: E0913 00:53:32.773210 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:32.773777 env[1315]: time="2025-09-13T00:53:32.773740984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hpkwj,Uid:b08b66cd-7817-4b8f-8ccb-78c19a62a802,Namespace:kube-system,Attempt:0,}" Sep 13 00:53:32.787865 env[1315]: time="2025-09-13T00:53:32.787794228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:53:32.787865 env[1315]: time="2025-09-13T00:53:32.787834144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:53:32.787865 env[1315]: time="2025-09-13T00:53:32.787846888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:53:32.788091 env[1315]: time="2025-09-13T00:53:32.788041629Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e42d4db69c48a1e41ff2edb10839f1d6f52e36afedc61adda78a10ef857a572 pid=4103 runtime=io.containerd.runc.v2 Sep 13 00:53:32.821210 env[1315]: time="2025-09-13T00:53:32.821155491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hpkwj,Uid:b08b66cd-7817-4b8f-8ccb-78c19a62a802,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e42d4db69c48a1e41ff2edb10839f1d6f52e36afedc61adda78a10ef857a572\"" Sep 13 00:53:32.821956 kubelet[2070]: E0913 00:53:32.821931 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:32.823975 env[1315]: time="2025-09-13T00:53:32.823887031Z" level=info msg="CreateContainer within sandbox \"6e42d4db69c48a1e41ff2edb10839f1d6f52e36afedc61adda78a10ef857a572\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:53:32.837096 env[1315]: time="2025-09-13T00:53:32.837047737Z" level=info msg="CreateContainer within sandbox \"6e42d4db69c48a1e41ff2edb10839f1d6f52e36afedc61adda78a10ef857a572\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"91743fa61b074a12fcae83876e07e9b1eb3e9c68e5fbc913a0b3e900f21f8bc2\"" Sep 13 00:53:32.837562 env[1315]: time="2025-09-13T00:53:32.837538912Z" level=info msg="StartContainer for \"91743fa61b074a12fcae83876e07e9b1eb3e9c68e5fbc913a0b3e900f21f8bc2\"" Sep 13 00:53:32.880880 env[1315]: time="2025-09-13T00:53:32.879935595Z" level=info msg="StartContainer for \"91743fa61b074a12fcae83876e07e9b1eb3e9c68e5fbc913a0b3e900f21f8bc2\" returns successfully" Sep 13 00:53:32.909610 env[1315]: time="2025-09-13T00:53:32.909549474Z" level=info msg="shim disconnected" 
id=91743fa61b074a12fcae83876e07e9b1eb3e9c68e5fbc913a0b3e900f21f8bc2 Sep 13 00:53:32.909610 env[1315]: time="2025-09-13T00:53:32.909607334Z" level=warning msg="cleaning up after shim disconnected" id=91743fa61b074a12fcae83876e07e9b1eb3e9c68e5fbc913a0b3e900f21f8bc2 namespace=k8s.io Sep 13 00:53:32.909610 env[1315]: time="2025-09-13T00:53:32.909619057Z" level=info msg="cleaning up dead shim" Sep 13 00:53:32.916948 env[1315]: time="2025-09-13T00:53:32.915900185Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4186 runtime=io.containerd.runc.v2\n" Sep 13 00:53:33.241755 kubelet[2070]: I0913 00:53:33.241719 2070 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e04939f-95d2-4a3d-b4c2-3838eeff2d65" path="/var/lib/kubelet/pods/6e04939f-95d2-4a3d-b4c2-3838eeff2d65/volumes" Sep 13 00:53:33.445878 kubelet[2070]: E0913 00:53:33.445847 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:33.447436 env[1315]: time="2025-09-13T00:53:33.447404748Z" level=info msg="CreateContainer within sandbox \"6e42d4db69c48a1e41ff2edb10839f1d6f52e36afedc61adda78a10ef857a572\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:53:33.651057 env[1315]: time="2025-09-13T00:53:33.650997507Z" level=info msg="CreateContainer within sandbox \"6e42d4db69c48a1e41ff2edb10839f1d6f52e36afedc61adda78a10ef857a572\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a4989ee44268579179625d726bd7c974947d749c299f5b993339720f6c7d61b8\"" Sep 13 00:53:33.653175 env[1315]: time="2025-09-13T00:53:33.653150136Z" level=info msg="StartContainer for \"a4989ee44268579179625d726bd7c974947d749c299f5b993339720f6c7d61b8\"" Sep 13 00:53:33.694940 env[1315]: time="2025-09-13T00:53:33.694854781Z" level=info msg="StartContainer 
for \"a4989ee44268579179625d726bd7c974947d749c299f5b993339720f6c7d61b8\" returns successfully" Sep 13 00:53:33.714059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4989ee44268579179625d726bd7c974947d749c299f5b993339720f6c7d61b8-rootfs.mount: Deactivated successfully. Sep 13 00:53:33.719122 env[1315]: time="2025-09-13T00:53:33.719066981Z" level=info msg="shim disconnected" id=a4989ee44268579179625d726bd7c974947d749c299f5b993339720f6c7d61b8 Sep 13 00:53:33.719218 env[1315]: time="2025-09-13T00:53:33.719129300Z" level=warning msg="cleaning up after shim disconnected" id=a4989ee44268579179625d726bd7c974947d749c299f5b993339720f6c7d61b8 namespace=k8s.io Sep 13 00:53:33.719218 env[1315]: time="2025-09-13T00:53:33.719146152Z" level=info msg="cleaning up dead shim" Sep 13 00:53:33.725793 env[1315]: time="2025-09-13T00:53:33.725751599Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4247 runtime=io.containerd.runc.v2\n" Sep 13 00:53:34.239722 kubelet[2070]: E0913 00:53:34.239649 2070 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-84xtm" podUID="dc5130bc-4b31-4b0a-88a6-453a4424895a" Sep 13 00:53:34.449018 kubelet[2070]: E0913 00:53:34.448977 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:34.450537 env[1315]: time="2025-09-13T00:53:34.450485876Z" level=info msg="CreateContainer within sandbox \"6e42d4db69c48a1e41ff2edb10839f1d6f52e36afedc61adda78a10ef857a572\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:53:34.464626 env[1315]: time="2025-09-13T00:53:34.464574213Z" level=info msg="CreateContainer 
within sandbox \"6e42d4db69c48a1e41ff2edb10839f1d6f52e36afedc61adda78a10ef857a572\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0cf3b28b1eb25573dde510a529c1df0becb1006bb48837a79dd9bde39689f0f1\"" Sep 13 00:53:34.466618 env[1315]: time="2025-09-13T00:53:34.466573771Z" level=info msg="StartContainer for \"0cf3b28b1eb25573dde510a529c1df0becb1006bb48837a79dd9bde39689f0f1\"" Sep 13 00:53:34.510680 env[1315]: time="2025-09-13T00:53:34.510553286Z" level=info msg="StartContainer for \"0cf3b28b1eb25573dde510a529c1df0becb1006bb48837a79dd9bde39689f0f1\" returns successfully" Sep 13 00:53:34.536823 env[1315]: time="2025-09-13T00:53:34.536760921Z" level=info msg="shim disconnected" id=0cf3b28b1eb25573dde510a529c1df0becb1006bb48837a79dd9bde39689f0f1 Sep 13 00:53:34.536823 env[1315]: time="2025-09-13T00:53:34.536807329Z" level=warning msg="cleaning up after shim disconnected" id=0cf3b28b1eb25573dde510a529c1df0becb1006bb48837a79dd9bde39689f0f1 namespace=k8s.io Sep 13 00:53:34.536823 env[1315]: time="2025-09-13T00:53:34.536815214Z" level=info msg="cleaning up dead shim" Sep 13 00:53:34.543762 env[1315]: time="2025-09-13T00:53:34.543700916Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4305 runtime=io.containerd.runc.v2\n" Sep 13 00:53:35.453055 kubelet[2070]: E0913 00:53:35.453010 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:35.455382 env[1315]: time="2025-09-13T00:53:35.455315362Z" level=info msg="CreateContainer within sandbox \"6e42d4db69c48a1e41ff2edb10839f1d6f52e36afedc61adda78a10ef857a572\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:53:35.794670 env[1315]: time="2025-09-13T00:53:35.794547401Z" level=info msg="CreateContainer within sandbox 
\"6e42d4db69c48a1e41ff2edb10839f1d6f52e36afedc61adda78a10ef857a572\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1b89f82364b194e5c89c6bc6af7b49db87adda385bd4185065d94f67effcb2d7\"" Sep 13 00:53:35.795280 env[1315]: time="2025-09-13T00:53:35.795247685Z" level=info msg="StartContainer for \"1b89f82364b194e5c89c6bc6af7b49db87adda385bd4185065d94f67effcb2d7\"" Sep 13 00:53:35.833362 env[1315]: time="2025-09-13T00:53:35.833321532Z" level=info msg="StartContainer for \"1b89f82364b194e5c89c6bc6af7b49db87adda385bd4185065d94f67effcb2d7\" returns successfully" Sep 13 00:53:35.848575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b89f82364b194e5c89c6bc6af7b49db87adda385bd4185065d94f67effcb2d7-rootfs.mount: Deactivated successfully. Sep 13 00:53:35.854045 env[1315]: time="2025-09-13T00:53:35.853986426Z" level=info msg="shim disconnected" id=1b89f82364b194e5c89c6bc6af7b49db87adda385bd4185065d94f67effcb2d7 Sep 13 00:53:35.854181 env[1315]: time="2025-09-13T00:53:35.854051621Z" level=warning msg="cleaning up after shim disconnected" id=1b89f82364b194e5c89c6bc6af7b49db87adda385bd4185065d94f67effcb2d7 namespace=k8s.io Sep 13 00:53:35.854181 env[1315]: time="2025-09-13T00:53:35.854061860Z" level=info msg="cleaning up dead shim" Sep 13 00:53:35.860504 env[1315]: time="2025-09-13T00:53:35.860473991Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:53:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4361 runtime=io.containerd.runc.v2\n" Sep 13 00:53:36.239902 kubelet[2070]: E0913 00:53:36.239842 2070 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-84xtm" podUID="dc5130bc-4b31-4b0a-88a6-453a4424895a" Sep 13 00:53:36.292579 kubelet[2070]: E0913 00:53:36.292529 2070 kubelet.go:2902] "Container 
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:53:36.458138 kubelet[2070]: E0913 00:53:36.456034 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:36.460728 env[1315]: time="2025-09-13T00:53:36.460684970Z" level=info msg="CreateContainer within sandbox \"6e42d4db69c48a1e41ff2edb10839f1d6f52e36afedc61adda78a10ef857a572\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:53:36.824843 env[1315]: time="2025-09-13T00:53:36.824263753Z" level=info msg="CreateContainer within sandbox \"6e42d4db69c48a1e41ff2edb10839f1d6f52e36afedc61adda78a10ef857a572\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"31e6c8d2aafe1bb1bad703b6aee52e9bec2cef6ca0b2b062979c46b57a1132cb\"" Sep 13 00:53:36.825970 env[1315]: time="2025-09-13T00:53:36.825513875Z" level=info msg="StartContainer for \"31e6c8d2aafe1bb1bad703b6aee52e9bec2cef6ca0b2b062979c46b57a1132cb\"" Sep 13 00:53:36.866045 env[1315]: time="2025-09-13T00:53:36.865969936Z" level=info msg="StartContainer for \"31e6c8d2aafe1bb1bad703b6aee52e9bec2cef6ca0b2b062979c46b57a1132cb\" returns successfully" Sep 13 00:53:37.108940 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 13 00:53:37.240513 kubelet[2070]: E0913 00:53:37.240468 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:37.459961 kubelet[2070]: E0913 00:53:37.459933 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:37.533940 kubelet[2070]: I0913 00:53:37.533802 2070 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hpkwj" podStartSLOduration=5.533786268 podStartE2EDuration="5.533786268s" podCreationTimestamp="2025-09-13 00:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:53:37.533430551 +0000 UTC m=+96.455821031" watchObservedRunningTime="2025-09-13 00:53:37.533786268 +0000 UTC m=+96.456176748" Sep 13 00:53:38.240041 kubelet[2070]: E0913 00:53:38.239962 2070 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-84xtm" podUID="dc5130bc-4b31-4b0a-88a6-453a4424895a" Sep 13 00:53:38.774846 kubelet[2070]: E0913 00:53:38.774798 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:39.864497 systemd-networkd[1076]: lxc_health: Link UP Sep 13 00:53:39.872137 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:53:39.871872 systemd-networkd[1076]: lxc_health: Gained carrier Sep 13 00:53:40.240100 kubelet[2070]: E0913 00:53:40.239951 2070 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-84xtm" podUID="dc5130bc-4b31-4b0a-88a6-453a4424895a" Sep 13 00:53:40.775627 kubelet[2070]: E0913 00:53:40.775580 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:41.204290 
systemd-networkd[1076]: lxc_health: Gained IPv6LL Sep 13 00:53:41.372031 systemd[1]: run-containerd-runc-k8s.io-31e6c8d2aafe1bb1bad703b6aee52e9bec2cef6ca0b2b062979c46b57a1132cb-runc.vZUt1N.mount: Deactivated successfully. Sep 13 00:53:41.467645 kubelet[2070]: E0913 00:53:41.467527 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:42.240755 kubelet[2070]: E0913 00:53:42.240721 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:42.469199 kubelet[2070]: E0913 00:53:42.469167 2070 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:53:45.544533 systemd[1]: run-containerd-runc-k8s.io-31e6c8d2aafe1bb1bad703b6aee52e9bec2cef6ca0b2b062979c46b57a1132cb-runc.s1Ni6U.mount: Deactivated successfully. Sep 13 00:53:45.590320 sshd[3940]: pam_unix(sshd:session): session closed for user core Sep 13 00:53:45.592686 systemd[1]: sshd@26-10.0.0.117:22-10.0.0.1:47122.service: Deactivated successfully. Sep 13 00:53:45.593729 systemd-logind[1298]: Session 27 logged out. Waiting for processes to exit. Sep 13 00:53:45.593755 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 00:53:45.594840 systemd-logind[1298]: Removed session 27.