Dec 16 13:08:29.001769 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:08:29.001800 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:08:29.001814 kernel: BIOS-provided physical RAM map:
Dec 16 13:08:29.001823 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 16 13:08:29.001832 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 16 13:08:29.001851 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 13:08:29.001862 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 16 13:08:29.001871 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 16 13:08:29.001885 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 16 13:08:29.001894 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 16 13:08:29.001903 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:08:29.001921 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 13:08:29.001930 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 13:08:29.001938 kernel: NX (Execute Disable) protection: active
Dec 16 13:08:29.001949 kernel: APIC: Static calls initialized
Dec 16 13:08:29.001958 kernel: SMBIOS 2.8 present.
Dec 16 13:08:29.001974 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 16 13:08:29.001983 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:08:29.001993 kernel: Hypervisor detected: KVM
Dec 16 13:08:29.002002 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 16 13:08:29.002011 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:08:29.002020 kernel: kvm-clock: using sched offset of 4741246306 cycles
Dec 16 13:08:29.002030 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:08:29.002039 kernel: tsc: Detected 2794.750 MHz processor
Dec 16 13:08:29.002073 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:08:29.002081 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:08:29.002092 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 16 13:08:29.002099 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 13:08:29.002107 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:08:29.002114 kernel: Using GB pages for direct mapping
Dec 16 13:08:29.002122 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:08:29.002129 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 16 13:08:29.002137 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:08:29.002144 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:08:29.002151 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:08:29.002161 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 16 13:08:29.002173 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:08:29.002181 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:08:29.002188 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:08:29.002196 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:08:29.002206 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Dec 16 13:08:29.002216 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Dec 16 13:08:29.002224 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 16 13:08:29.002232 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Dec 16 13:08:29.002239 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Dec 16 13:08:29.002247 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Dec 16 13:08:29.002254 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Dec 16 13:08:29.002262 kernel: No NUMA configuration found
Dec 16 13:08:29.002269 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 16 13:08:29.002280 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Dec 16 13:08:29.002287 kernel: Zone ranges:
Dec 16 13:08:29.002295 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:08:29.002302 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 16 13:08:29.002310 kernel: Normal empty
Dec 16 13:08:29.002318 kernel: Device empty
Dec 16 13:08:29.002325 kernel: Movable zone start for each node
Dec 16 13:08:29.002333 kernel: Early memory node ranges
Dec 16 13:08:29.002340 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 13:08:29.002348 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 16 13:08:29.002358 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 16 13:08:29.002365 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:08:29.002373 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 13:08:29.002380 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 16 13:08:29.002392 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 13:08:29.002399 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:08:29.002411 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:08:29.002419 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 13:08:29.002428 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:08:29.002439 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:08:29.002447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:08:29.002454 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:08:29.002462 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:08:29.002469 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:08:29.002477 kernel: TSC deadline timer available
Dec 16 13:08:29.002484 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:08:29.002492 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:08:29.002503 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:08:29.002523 kernel: CPU topo: Max. threads per core: 1
Dec 16 13:08:29.002539 kernel: CPU topo: Num. cores per package: 4
Dec 16 13:08:29.002547 kernel: CPU topo: Num. threads per package: 4
Dec 16 13:08:29.002554 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 16 13:08:29.002561 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:08:29.002569 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 16 13:08:29.002576 kernel: kvm-guest: setup PV sched yield
Dec 16 13:08:29.002584 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 16 13:08:29.002591 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:08:29.002599 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:08:29.002609 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 16 13:08:29.002617 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 16 13:08:29.002624 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 16 13:08:29.002632 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 16 13:08:29.002639 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:08:29.002646 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:08:29.002655 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:08:29.002670 kernel: random: crng init done
Dec 16 13:08:29.002682 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:08:29.002692 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:08:29.002701 kernel: Fallback order for Node 0: 0
Dec 16 13:08:29.002710 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Dec 16 13:08:29.002718 kernel: Policy zone: DMA32
Dec 16 13:08:29.002725 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:08:29.002733 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 13:08:29.002740 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:08:29.002748 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:08:29.002758 kernel: Dynamic Preempt: voluntary
Dec 16 13:08:29.002766 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:08:29.002774 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:08:29.002782 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 13:08:29.002789 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:08:29.002801 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:08:29.002808 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:08:29.002816 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:08:29.002823 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 13:08:29.002833 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 13:08:29.002850 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 13:08:29.002858 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 13:08:29.002866 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 16 13:08:29.002981 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:08:29.002998 kernel: Console: colour VGA+ 80x25
Dec 16 13:08:29.003019 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:08:29.003027 kernel: ACPI: Core revision 20240827
Dec 16 13:08:29.003035 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 13:08:29.003043 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:08:29.003066 kernel: x2apic enabled
Dec 16 13:08:29.003075 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:08:29.003090 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 16 13:08:29.003098 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 16 13:08:29.003106 kernel: kvm-guest: setup PV IPIs
Dec 16 13:08:29.003114 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 13:08:29.003122 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Dec 16 13:08:29.003138 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Dec 16 13:08:29.003162 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:08:29.003170 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 13:08:29.003177 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 13:08:29.003185 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:08:29.003193 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:08:29.003201 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:08:29.003209 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 16 13:08:29.003219 kernel: active return thunk: retbleed_return_thunk
Dec 16 13:08:29.003227 kernel: RETBleed: Mitigation: untrained return thunk
Dec 16 13:08:29.003235 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 13:08:29.003243 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 13:08:29.003251 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 13:08:29.003259 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 16 13:08:29.003267 kernel: active return thunk: srso_return_thunk
Dec 16 13:08:29.003294 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 13:08:29.003302 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:08:29.003313 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:08:29.003330 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:08:29.003349 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:08:29.003364 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 16 13:08:29.003372 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:08:29.003386 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:08:29.003394 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:08:29.003402 kernel: landlock: Up and running.
Dec 16 13:08:29.003410 kernel: SELinux: Initializing.
Dec 16 13:08:29.003423 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:08:29.003448 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:08:29.003459 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 16 13:08:29.003478 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 13:08:29.003486 kernel: ... version:                0
Dec 16 13:08:29.003494 kernel: ... bit width:              48
Dec 16 13:08:29.003521 kernel: ... generic registers:      6
Dec 16 13:08:29.003529 kernel: ... value mask:             0000ffffffffffff
Dec 16 13:08:29.003537 kernel: ... max period:             00007fffffffffff
Dec 16 13:08:29.003548 kernel: ... fixed-purpose events:   0
Dec 16 13:08:29.003564 kernel: ... event mask:             000000000000003f
Dec 16 13:08:29.003579 kernel: signal: max sigframe size: 1776
Dec 16 13:08:29.003588 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:08:29.003596 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:08:29.003617 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:08:29.003625 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:08:29.003633 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:08:29.003641 kernel: .... node #0, CPUs: #1 #2 #3
Dec 16 13:08:29.003659 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 13:08:29.003667 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Dec 16 13:08:29.003683 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 145096K reserved, 0K cma-reserved)
Dec 16 13:08:29.003691 kernel: devtmpfs: initialized
Dec 16 13:08:29.003699 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:08:29.003707 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:08:29.003715 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 13:08:29.003723 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:08:29.003744 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:08:29.003754 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:08:29.003762 kernel: audit: type=2000 audit(1765890505.458:1): state=initialized audit_enabled=0 res=1
Dec 16 13:08:29.003778 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:08:29.003793 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:08:29.003801 kernel: cpuidle: using governor menu
Dec 16 13:08:29.003809 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:08:29.003830 kernel: dca service started, version 1.12.1
Dec 16 13:08:29.003846 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 16 13:08:29.003862 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 16 13:08:29.003874 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:08:29.003881 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:08:29.003896 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:08:29.003904 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:08:29.003912 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:08:29.003920 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:08:29.003928 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:08:29.003936 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:08:29.003943 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:08:29.003966 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:08:29.003975 kernel: ACPI: Interpreter enabled
Dec 16 13:08:29.003994 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 16 13:08:29.004004 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:08:29.004023 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:08:29.004033 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:08:29.004044 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 13:08:29.004070 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:08:29.004441 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:08:29.004644 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 13:08:29.004882 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 13:08:29.004896 kernel: PCI host bridge to bus 0000:00
Dec 16 13:08:29.005243 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:08:29.005387 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:08:29.005508 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:08:29.005646 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 16 13:08:29.005770 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 16 13:08:29.005921 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 16 13:08:29.006127 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:08:29.006336 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:08:29.006551 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:08:29.006794 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 16 13:08:29.007025 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 16 13:08:29.007263 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 16 13:08:29.007449 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:08:29.007638 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 13:08:29.007872 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Dec 16 13:08:29.008065 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 16 13:08:29.008279 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 16 13:08:29.008512 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 13:08:29.008696 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Dec 16 13:08:29.008823 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 16 13:08:29.008958 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 16 13:08:29.009113 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 13:08:29.009239 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Dec 16 13:08:29.009367 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Dec 16 13:08:29.009499 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 16 13:08:29.009621 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 16 13:08:29.009761 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:08:29.009902 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 13:08:29.010129 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 13:08:29.010320 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Dec 16 13:08:29.010506 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Dec 16 13:08:29.010730 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 13:08:29.010939 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 16 13:08:29.010960 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:08:29.010972 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:08:29.010984 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:08:29.010996 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:08:29.011017 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 13:08:29.011029 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 13:08:29.011042 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 13:08:29.011072 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 13:08:29.011085 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 13:08:29.011100 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 13:08:29.011112 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 13:08:29.011124 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 13:08:29.011140 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 13:08:29.011155 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 13:08:29.011170 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 13:08:29.011183 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 13:08:29.011203 kernel: iommu: Default domain type: Translated
Dec 16 13:08:29.011221 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:08:29.011233 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:08:29.011245 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:08:29.011260 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 16 13:08:29.011272 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 16 13:08:29.011484 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 13:08:29.011618 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 13:08:29.011772 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:08:29.011797 kernel: vgaarb: loaded
Dec 16 13:08:29.011806 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 13:08:29.011823 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 13:08:29.011832 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:08:29.011849 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:08:29.011858 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:08:29.011871 kernel: pnp: PnP ACPI init
Dec 16 13:08:29.012023 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 16 13:08:29.012036 kernel: pnp: PnP ACPI: found 6 devices
Dec 16 13:08:29.012044 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:08:29.012071 kernel: NET: Registered PF_INET protocol family
Dec 16 13:08:29.012080 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:08:29.012089 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 13:08:29.012097 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:08:29.012109 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:08:29.012117 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 13:08:29.012125 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 13:08:29.012134 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:08:29.012142 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:08:29.012150 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:08:29.012158 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:08:29.012276 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:08:29.012392 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:08:29.012539 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:08:29.012724 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 16 13:08:29.012917 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 16 13:08:29.013079 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 16 13:08:29.013095 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:08:29.013106 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Dec 16 13:08:29.013118 kernel: Initialise system trusted keyrings
Dec 16 13:08:29.013129 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 13:08:29.013149 kernel: Key type asymmetric registered
Dec 16 13:08:29.013160 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:08:29.013171 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:08:29.013182 kernel: io scheduler mq-deadline registered
Dec 16 13:08:29.013193 kernel: io scheduler kyber registered
Dec 16 13:08:29.013203 kernel: io scheduler bfq registered
Dec 16 13:08:29.013214 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:08:29.013226 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 13:08:29.013237 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 13:08:29.013251 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 16 13:08:29.013262 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:08:29.013273 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:08:29.013284 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:08:29.013296 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:08:29.013307 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:08:29.013485 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 16 13:08:29.013502 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 13:08:29.013654 kernel: rtc_cmos 00:04: registered as rtc0
Dec 16 13:08:29.013871 kernel: rtc_cmos 00:04: setting system clock to 2025-12-16T13:08:28 UTC (1765890508)
Dec 16 13:08:29.014040 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 16 13:08:29.014073 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 16 13:08:29.014083 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:08:29.014091 kernel: Segment Routing with IPv6
Dec 16 13:08:29.014099 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:08:29.014108 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:08:29.014119 kernel: Key type dns_resolver registered
Dec 16 13:08:29.014138 kernel: IPI shorthand broadcast: enabled
Dec 16 13:08:29.014150 kernel: sched_clock: Marking stable (3235003231, 211713492)->(3498576941, -51860218)
Dec 16 13:08:29.014160 kernel: registered taskstats version 1
Dec 16 13:08:29.014171 kernel: Loading compiled-in X.509 certificates
Dec 16 13:08:29.014182 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:08:29.014192 kernel: Demotion targets for Node 0: null
Dec 16 13:08:29.014203 kernel: Key type .fscrypt registered
Dec 16 13:08:29.014213 kernel: Key type fscrypt-provisioning registered
Dec 16 13:08:29.014224 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:08:29.014236 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:08:29.014245 kernel: ima: No architecture policies found
Dec 16 13:08:29.014253 kernel: clk: Disabling unused clocks
Dec 16 13:08:29.014261 kernel: Warning: unable to open an initial console.
Dec 16 13:08:29.014269 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:08:29.014278 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:08:29.014286 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:08:29.014294 kernel: Run /init as init process
Dec 16 13:08:29.014302 kernel: with arguments:
Dec 16 13:08:29.014313 kernel: /init
Dec 16 13:08:29.014321 kernel: with environment:
Dec 16 13:08:29.014330 kernel: HOME=/
Dec 16 13:08:29.014340 kernel: TERM=linux
Dec 16 13:08:29.014353 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:08:29.014369 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:08:29.014399 systemd[1]: Detected virtualization kvm.
Dec 16 13:08:29.014410 systemd[1]: Detected architecture x86-64.
Dec 16 13:08:29.014422 systemd[1]: Running in initrd.
Dec 16 13:08:29.014433 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:08:29.014448 systemd[1]: Hostname set to .
Dec 16 13:08:29.014459 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:08:29.014470 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:08:29.014482 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:08:29.014496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:08:29.014508 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:08:29.014520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:08:29.014531 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:08:29.014541 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:08:29.014551 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 13:08:29.014563 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 13:08:29.014571 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:08:29.014580 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:08:29.014589 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:08:29.014600 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:08:29.014617 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:08:29.014630 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:08:29.014641 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:08:29.014651 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:08:29.014664 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:08:29.014673 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:08:29.014682 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:08:29.014691 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:08:29.014700 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:08:29.014709 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:08:29.014718 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:08:29.014730 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:08:29.014739 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:08:29.014748 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:08:29.014757 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:08:29.014766 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:08:29.014774 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:08:29.014786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:08:29.014795 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:08:29.014804 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:08:29.014813 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:08:29.014822 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:08:29.014878 systemd-journald[199]: Collecting audit messages is disabled.
Dec 16 13:08:29.014903 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:08:29.014913 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:08:29.014923 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:08:29.014935 kernel: Bridge firewalling registered
Dec 16 13:08:29.014949 systemd-journald[199]: Journal started
Dec 16 13:08:29.014974 systemd-journald[199]: Runtime Journal (/run/log/journal/bbc0391bd1ef4490b647b12f07dbb5db) is 6M, max 48.3M, 42.2M free.
Dec 16 13:08:28.985575 systemd-modules-load[202]: Inserted module 'overlay'
Dec 16 13:08:29.089886 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:08:29.089919 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:08:29.013961 systemd-modules-load[202]: Inserted module 'br_netfilter'
Dec 16 13:08:29.095258 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:08:29.101544 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:08:29.106094 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:08:29.112681 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:08:29.113127 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:08:29.127033 systemd-tmpfiles[225]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:08:29.129189 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:08:29.131994 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:08:29.149254 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:08:29.154598 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 13:08:29.156955 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:08:29.182283 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:08:29.202706 systemd-resolved[243]: Positive Trust Anchors:
Dec 16 13:08:29.202718 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:08:29.202759 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:08:29.205498 systemd-resolved[243]: Defaulting to hostname 'linux'.
Dec 16 13:08:29.206744 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:08:29.208913 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:08:29.307110 kernel: SCSI subsystem initialized
Dec 16 13:08:29.317088 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 13:08:29.329084 kernel: iscsi: registered transport (tcp)
Dec 16 13:08:29.351085 kernel: iscsi: registered transport (qla4xxx)
Dec 16 13:08:29.352087 kernel: QLogic iSCSI HBA Driver
Dec 16 13:08:29.376038 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:08:29.408541 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:08:29.410329 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:08:29.465089 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:08:29.470259 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:08:29.531118 kernel: raid6: avx2x4 gen() 27714 MB/s
Dec 16 13:08:29.548114 kernel: raid6: avx2x2 gen() 25867 MB/s
Dec 16 13:08:29.566036 kernel: raid6: avx2x1 gen() 22753 MB/s
Dec 16 13:08:29.566231 kernel: raid6: using algorithm avx2x4 gen() 27714 MB/s
Dec 16 13:08:29.583890 kernel: raid6: .... xor() 6572 MB/s, rmw enabled
Dec 16 13:08:29.583998 kernel: raid6: using avx2x2 recovery algorithm
Dec 16 13:08:29.606101 kernel: xor: automatically using best checksumming function   avx
Dec 16 13:08:29.785113 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 13:08:29.795669 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:08:29.798483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:08:29.836348 systemd-udevd[452]: Using default interface naming scheme 'v255'.
Dec 16 13:08:29.842614 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:08:29.846139 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 13:08:29.879454 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Dec 16 13:08:29.914979 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:08:29.918782 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:08:30.004663 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:08:30.011435 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 13:08:30.042088 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 16 13:08:30.046566 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 16 13:08:30.049073 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 13:08:30.049098 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 13:08:30.050691 kernel: GPT:9289727 != 19775487
Dec 16 13:08:30.050713 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 13:08:30.052551 kernel: GPT:9289727 != 19775487
Dec 16 13:08:30.052573 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 13:08:30.054452 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 13:08:30.062105 kernel: AES CTR mode by8 optimization enabled
Dec 16 13:08:30.091171 kernel: libata version 3.00 loaded.
Dec 16 13:08:30.099639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:08:30.103978 kernel: ahci 0000:00:1f.2: version 3.0
Dec 16 13:08:30.110486 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 16 13:08:30.110502 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 16 13:08:30.110662 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 16 13:08:30.110805 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 16 13:08:30.104250 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:08:30.117213 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:08:30.123510 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 16 13:08:30.129930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:08:30.135588 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:08:30.137502 kernel: scsi host0: ahci
Dec 16 13:08:30.137711 kernel: scsi host1: ahci
Dec 16 13:08:30.140081 kernel: scsi host2: ahci
Dec 16 13:08:30.141096 kernel: scsi host3: ahci
Dec 16 13:08:30.142084 kernel: scsi host4: ahci
Dec 16 13:08:30.145227 kernel: scsi host5: ahci
Dec 16 13:08:30.145447 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Dec 16 13:08:30.145463 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Dec 16 13:08:30.148149 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Dec 16 13:08:30.148177 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Dec 16 13:08:30.151076 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Dec 16 13:08:30.151104 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Dec 16 13:08:30.157893 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 16 13:08:30.198434 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 16 13:08:30.248591 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 16 13:08:30.248685 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 16 13:08:30.256078 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:08:30.271034 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 13:08:30.272024 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 13:08:30.306269 disk-uuid[616]: Primary Header is updated.
Dec 16 13:08:30.306269 disk-uuid[616]: Secondary Entries is updated.
Dec 16 13:08:30.306269 disk-uuid[616]: Secondary Header is updated.
Dec 16 13:08:30.313077 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 13:08:30.317068 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 13:08:30.461495 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 16 13:08:30.461596 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 16 13:08:30.461632 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 16 13:08:30.461647 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 16 13:08:30.463088 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 16 13:08:30.465101 kernel: ata3.00: LPM support broken, forcing max_power
Dec 16 13:08:30.466319 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 16 13:08:30.466343 kernel: ata3.00: applying bridge limits
Dec 16 13:08:30.468087 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 16 13:08:30.469093 kernel: ata3.00: LPM support broken, forcing max_power
Dec 16 13:08:30.470632 kernel: ata3.00: configured for UDMA/100
Dec 16 13:08:30.473078 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 16 13:08:30.561033 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 16 13:08:30.561502 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 16 13:08:30.574079 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 16 13:08:30.965716 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:08:30.966605 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:08:30.971457 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:08:30.975413 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:08:30.980923 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 13:08:31.006496 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:08:31.370164 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 13:08:31.370753 disk-uuid[617]: The operation has completed successfully.
Dec 16 13:08:31.400686 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 13:08:31.400827 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 13:08:31.443333 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 13:08:31.478467 sh[645]: Success
Dec 16 13:08:31.498690 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 13:08:31.498746 kernel: device-mapper: uevent: version 1.0.3
Dec 16 13:08:31.500542 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 13:08:31.512093 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 16 13:08:31.546818 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:08:31.550639 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 13:08:31.569310 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 13:08:31.580297 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (657)
Dec 16 13:08:31.580336 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 16 13:08:31.580352 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:08:31.588135 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 13:08:31.588231 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 13:08:31.589976 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 13:08:31.592189 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:08:31.595322 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 13:08:31.596306 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 13:08:31.600073 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 13:08:31.634043 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (688)
Dec 16 13:08:31.634139 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:08:31.634152 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:08:31.640212 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 13:08:31.640290 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 13:08:31.648102 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:08:31.653000 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 13:08:31.659370 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 13:08:31.758520 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:08:31.767514 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:08:31.767647 ignition[743]: Ignition 2.22.0
Dec 16 13:08:31.767653 ignition[743]: Stage: fetch-offline
Dec 16 13:08:31.767684 ignition[743]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:08:31.767693 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:08:31.767787 ignition[743]: parsed url from cmdline: ""
Dec 16 13:08:31.767791 ignition[743]: no config URL provided
Dec 16 13:08:31.767797 ignition[743]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:08:31.767806 ignition[743]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:08:31.767829 ignition[743]: op(1): [started] loading QEMU firmware config module
Dec 16 13:08:31.767834 ignition[743]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 16 13:08:31.777824 ignition[743]: op(1): [finished] loading QEMU firmware config module
Dec 16 13:08:31.777849 ignition[743]: QEMU firmware config was not found. Ignoring...
Dec 16 13:08:31.820589 systemd-networkd[832]: lo: Link UP
Dec 16 13:08:31.820601 systemd-networkd[832]: lo: Gained carrier
Dec 16 13:08:31.822559 systemd-networkd[832]: Enumeration completed
Dec 16 13:08:31.822666 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:08:31.827678 systemd-networkd[832]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:08:31.827686 systemd-networkd[832]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:08:31.831092 systemd[1]: Reached target network.target - Network.
Dec 16 13:08:31.833168 systemd-networkd[832]: eth0: Link UP
Dec 16 13:08:31.836646 systemd-networkd[832]: eth0: Gained carrier
Dec 16 13:08:31.836661 systemd-networkd[832]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:08:31.863143 systemd-networkd[832]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 13:08:31.898960 ignition[743]: parsing config with SHA512: a7082f0d8cf20f4b2b917c1795fe87e8bc017a7a3af5b87283cbfe77fbf05c0c3618434ccf0fa6582a6f4e6f2fb98803fcda6a72de04b31156babbb0f18bd504
Dec 16 13:08:31.905719 unknown[743]: fetched base config from "system"
Dec 16 13:08:31.905733 unknown[743]: fetched user config from "qemu"
Dec 16 13:08:31.906213 ignition[743]: fetch-offline: fetch-offline passed
Dec 16 13:08:31.910213 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:08:31.906287 ignition[743]: Ignition finished successfully
Dec 16 13:08:31.922812 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 16 13:08:31.924245 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 13:08:31.969229 ignition[841]: Ignition 2.22.0
Dec 16 13:08:31.969245 ignition[841]: Stage: kargs
Dec 16 13:08:31.969422 ignition[841]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:08:31.969436 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:08:31.970331 ignition[841]: kargs: kargs passed
Dec 16 13:08:31.970386 ignition[841]: Ignition finished successfully
Dec 16 13:08:31.979122 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:08:31.984728 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 13:08:32.038636 ignition[849]: Ignition 2.22.0
Dec 16 13:08:32.038665 ignition[849]: Stage: disks
Dec 16 13:08:32.038935 ignition[849]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:08:32.038961 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:08:32.040360 ignition[849]: disks: disks passed
Dec 16 13:08:32.040444 ignition[849]: Ignition finished successfully
Dec 16 13:08:32.050471 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:08:32.054145 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:08:32.054272 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:08:32.060197 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:08:32.063984 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:08:32.064154 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:08:32.070293 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:08:32.106017 systemd-fsck[859]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 13:08:32.114924 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:08:32.118800 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:08:32.254105 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 16 13:08:32.254857 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:08:32.257351 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:08:32.262183 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:08:32.265429 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:08:32.268255 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 13:08:32.268308 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:08:32.286304 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867)
Dec 16 13:08:32.286344 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:08:32.286372 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:08:32.268340 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:08:32.275465 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:08:32.294850 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 13:08:32.294874 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 13:08:32.287510 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:08:32.297464 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:08:32.334385 initrd-setup-root[891]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:08:32.341399 initrd-setup-root[898]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:08:32.347503 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:08:32.354028 initrd-setup-root[912]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:08:32.458868 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:08:32.462760 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:08:32.463542 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:08:32.495182 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:08:32.507216 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:08:32.527258 ignition[980]: INFO : Ignition 2.22.0
Dec 16 13:08:32.527258 ignition[980]: INFO : Stage: mount
Dec 16 13:08:32.529865 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:08:32.529865 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:08:32.529865 ignition[980]: INFO : mount: mount passed
Dec 16 13:08:32.529865 ignition[980]: INFO : Ignition finished successfully
Dec 16 13:08:32.538680 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:08:32.542042 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:08:32.578899 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:08:32.584197 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:08:32.623398 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (993)
Dec 16 13:08:32.623457 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:08:32.623469 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:08:32.629458 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 13:08:32.629539 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 13:08:32.631566 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:08:32.672879 ignition[1010]: INFO : Ignition 2.22.0
Dec 16 13:08:32.672879 ignition[1010]: INFO : Stage: files
Dec 16 13:08:32.675926 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:08:32.675926 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:08:32.675926 ignition[1010]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:08:32.675926 ignition[1010]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:08:32.675926 ignition[1010]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:08:32.686588 ignition[1010]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:08:32.686588 ignition[1010]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:08:32.686588 ignition[1010]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:08:32.686588 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:08:32.686588 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 13:08:32.677764 unknown[1010]: wrote ssh authorized keys file for user: core
Dec 16 13:08:32.725316 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:08:32.788685 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:08:32.788685 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:08:32.795317 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 16 13:08:32.878553 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 16 13:08:32.987444 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:08:32.987444 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:08:32.994307 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:08:32.994307 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:08:32.994307 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:08:32.994307 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:08:32.994307 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:08:32.994307 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:08:32.994307 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:08:32.994307 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:08:32.994307 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:08:32.994307 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:08:33.025578 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:08:33.025578 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:08:33.025578 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Dec 16 13:08:33.251321 systemd-networkd[832]: eth0: Gained IPv6LL
Dec 16 13:08:33.343391 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 16 13:08:33.753065 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:08:33.757482 ignition[1010]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 16 13:08:33.757482 ignition[1010]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:08:33.763378 ignition[1010]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:08:33.763378 ignition[1010]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 16 13:08:33.763378 ignition[1010]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 16 13:08:33.763378 ignition[1010]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 13:08:33.763378 ignition[1010]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 13:08:33.763378 ignition[1010]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 16 13:08:33.763378 ignition[1010]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 13:08:33.784101 ignition[1010]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 13:08:33.787555 ignition[1010]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 13:08:33.790311 ignition[1010]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 13:08:33.790311 ignition[1010]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:08:33.790311 ignition[1010]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:08:33.790311 ignition[1010]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:08:33.790311 ignition[1010]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:08:33.790311 ignition[1010]: INFO : files: files passed
Dec 16 13:08:33.790311 ignition[1010]: INFO : Ignition finished successfully
Dec 16 13:08:33.791850 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:08:33.802531 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:08:33.811688 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:08:33.827700 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:08:33.827888 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:08:33.833139 initrd-setup-root-after-ignition[1038]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 13:08:33.838552 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:08:33.838552 initrd-setup-root-after-ignition[1041]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:08:33.846090 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:08:33.840831 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:08:33.841922 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:08:33.850872 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:08:33.925038 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:08:33.925243 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:08:33.929593 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:08:33.933393 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:08:33.933807 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:08:33.940621 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:08:33.981319 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:08:33.982975 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:08:34.005923 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:08:34.006183 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:08:34.011747 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:08:34.013727 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:08:34.013851 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:08:34.021847 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:08:34.022019 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:08:34.025291 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:08:34.025876 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:08:34.031363 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:08:34.031919 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:08:34.038282 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:08:34.038833 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:08:34.039458 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:08:34.050005 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:08:34.053256 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:08:34.054841 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:08:34.055011 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:08:34.061929 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:08:34.063811 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:08:34.067449 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:08:34.067602 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:08:34.069388 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:08:34.069520 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:08:34.076386 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:08:34.076515 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:08:34.078037 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:08:34.082618 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:08:34.086134 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:08:34.089454 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:08:34.091464 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:08:34.094928 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:08:34.095033 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:08:34.097706 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:08:34.097820 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:08:34.098306 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:08:34.098443 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:08:34.103546 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:08:34.103766 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:08:34.107971 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:08:34.111995 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:08:34.116096 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:08:34.119901 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:08:34.127072 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:08:34.129084 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:08:34.142246 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:08:34.142389 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:08:34.148421 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:08:34.154235 ignition[1065]: INFO : Ignition 2.22.0
Dec 16 13:08:34.154235 ignition[1065]: INFO : Stage: umount
Dec 16 13:08:34.156835 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:08:34.156835 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:08:34.161084 ignition[1065]: INFO : umount: umount passed
Dec 16 13:08:34.161084 ignition[1065]: INFO : Ignition finished successfully
Dec 16 13:08:34.164736 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:08:34.166647 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:08:34.170955 systemd[1]: Stopped target network.target - Network.
Dec 16 13:08:34.171110 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:08:34.171200 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:08:34.174239 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:08:34.174308 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:08:34.177725 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:08:34.177806 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:08:34.181333 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:08:34.181390 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:08:34.185990 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:08:34.190591 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:08:34.205981 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:08:34.206204 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:08:34.214466 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:08:34.214784 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:08:34.214929 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:08:34.219657 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:08:34.220805 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:08:34.222363 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:08:34.222422 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:08:34.231199 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:08:34.231656 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:08:34.231727 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:08:34.267582 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:08:34.267663 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:08:34.276415 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:08:34.276468 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:08:34.278116 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:08:34.278166 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:08:34.285143 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:08:34.290221 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:08:34.290294 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:08:34.299076 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:08:34.299236 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:08:34.305769 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:08:34.305967 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:08:34.307628 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:08:34.307675 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:08:34.325498 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:08:34.325539 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:08:34.327218 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:08:34.327271 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:08:34.335385 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:08:34.335441 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:08:34.340150 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:08:34.340200 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:08:34.345998 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:08:34.346813 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:08:34.346882 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:08:34.354434 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:08:34.354488 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:08:34.360316 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 13:08:34.360365 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:08:34.385258 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:08:34.385310 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:08:34.387425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:08:34.387489 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:08:34.396525 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:08:34.396585 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 16 13:08:34.396647 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:08:34.396700 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:08:34.397088 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:08:34.397209 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:08:34.438553 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:08:34.438715 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:08:34.439387 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:08:34.444733 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:08:34.444817 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:08:34.449039 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:08:34.477925 systemd[1]: Switching root.
Dec 16 13:08:34.522489 systemd-journald[199]: Journal stopped
Dec 16 13:08:36.365287 systemd-journald[199]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:08:36.365391 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:08:36.365414 kernel: SELinux: policy capability open_perms=1
Dec 16 13:08:36.365450 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:08:36.365466 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:08:36.365487 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:08:36.365505 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:08:36.365578 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:08:36.365597 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:08:36.365613 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:08:36.365628 kernel: audit: type=1403 audit(1765890515.138:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:08:36.365652 systemd[1]: Successfully loaded SELinux policy in 70.812ms.
Dec 16 13:08:36.365696 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.816ms.
Dec 16 13:08:36.365714 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:08:36.365739 systemd[1]: Detected virtualization kvm.
Dec 16 13:08:36.365754 systemd[1]: Detected architecture x86-64.
Dec 16 13:08:36.365770 systemd[1]: Detected first boot.
Dec 16 13:08:36.365786 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:08:36.365805 zram_generator::config[1111]: No configuration found.
Dec 16 13:08:36.365824 kernel: Guest personality initialized and is inactive
Dec 16 13:08:36.365851 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:08:36.365867 kernel: Initialized host personality
Dec 16 13:08:36.365882 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:08:36.365901 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:08:36.365925 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:08:36.365941 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:08:36.365957 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:08:36.365972 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:08:36.366001 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:08:36.366025 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:08:36.366042 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:08:36.366090 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:08:36.366115 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:08:36.366132 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:08:36.366147 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:08:36.366163 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:08:36.366183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:08:36.366200 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:08:36.366222 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:08:36.366238 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:08:36.366255 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:08:36.366272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:08:36.366292 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:08:36.366312 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:08:36.366329 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:08:36.366345 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:08:36.366369 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:08:36.366389 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:08:36.366405 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:08:36.366420 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:08:36.366435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:08:36.366449 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:08:36.366464 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:08:36.366479 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:08:36.366494 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:08:36.366517 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:08:36.366542 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:08:36.366560 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:08:36.366577 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:08:36.366593 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:08:36.366610 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:08:36.366625 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:08:36.366641 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:08:36.366669 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:08:36.366696 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:08:36.366714 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:08:36.366730 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:08:36.366747 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:08:36.366764 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:08:36.366781 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:08:36.366797 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:08:36.366814 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:08:36.366841 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:08:36.366859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:08:36.366876 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:08:36.366892 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:08:36.366910 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:08:36.366926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:08:36.366943 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:08:36.366958 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:08:36.366984 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:08:36.367000 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:08:36.367016 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:08:36.367032 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:08:36.367050 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:08:36.367088 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:08:36.367111 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:08:36.367139 kernel: loop: module loaded
Dec 16 13:08:36.367159 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:08:36.367175 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:08:36.367191 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:08:36.367208 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:08:36.367223 systemd[1]: Stopped verity-setup.service.
Dec 16 13:08:36.367240 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:08:36.367265 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:08:36.367280 kernel: ACPI: bus type drm_connector registered
Dec 16 13:08:36.367296 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:08:36.367312 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:08:36.367328 kernel: fuse: init (API version 7.41)
Dec 16 13:08:36.367352 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:08:36.367368 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:08:36.367413 systemd-journald[1182]: Collecting audit messages is disabled.
Dec 16 13:08:36.367459 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:08:36.367477 systemd-journald[1182]: Journal started
Dec 16 13:08:36.367510 systemd-journald[1182]: Runtime Journal (/run/log/journal/bbc0391bd1ef4490b647b12f07dbb5db) is 6M, max 48.3M, 42.2M free.
Dec 16 13:08:35.936459 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:08:35.958970 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 16 13:08:35.959926 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:08:36.370103 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:08:36.372953 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:08:36.375615 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:08:36.378443 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:08:36.378689 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:08:36.381414 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:08:36.381634 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:08:36.384176 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:08:36.384398 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:08:36.386774 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:08:36.387014 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:08:36.389625 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:08:36.389904 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:08:36.392003 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:08:36.392291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:08:36.394468 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:08:36.396688 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:08:36.399270 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:08:36.401882 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:08:36.418075 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:08:36.421723 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:08:36.424719 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:08:36.426715 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:08:36.426758 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:08:36.428359 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:08:36.435341 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:08:36.438614 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:08:36.440229 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:08:36.447267 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:08:36.449808 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:08:36.451460 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:08:36.453603 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:08:36.456205 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:08:36.461331 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:08:36.466210 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:08:36.471201 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:08:36.474336 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:08:36.479508 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:08:36.497098 kernel: loop0: detected capacity change from 0 to 110984
Dec 16 13:08:36.497297 systemd-journald[1182]: Time spent on flushing to /var/log/journal/bbc0391bd1ef4490b647b12f07dbb5db is 26.461ms for 994 entries.
Dec 16 13:08:36.497297 systemd-journald[1182]: System Journal (/var/log/journal/bbc0391bd1ef4490b647b12f07dbb5db) is 8M, max 195.6M, 187.6M free.
Dec 16 13:08:36.546395 systemd-journald[1182]: Received client request to flush runtime journal.
Dec 16 13:08:36.546513 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:08:36.486097 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:08:36.500042 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:08:36.510375 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:08:36.518840 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Dec 16 13:08:36.518854 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Dec 16 13:08:36.522312 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:08:36.534889 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:08:36.539229 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:08:36.554291 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:08:36.563114 kernel: loop1: detected capacity change from 0 to 128560
Dec 16 13:08:36.564401 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:08:36.586751 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:08:36.591725 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:08:36.596361 kernel: loop2: detected capacity change from 0 to 219144
Dec 16 13:08:36.623613 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Dec 16 13:08:36.623636 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Dec 16 13:08:36.628541 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:08:36.632078 kernel: loop3: detected capacity change from 0 to 110984
Dec 16 13:08:36.649105 kernel: loop4: detected capacity change from 0 to 128560
Dec 16 13:08:36.660082 kernel: loop5: detected capacity change from 0 to 219144
Dec 16 13:08:36.670299 (sd-merge)[1256]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 16 13:08:36.670935 (sd-merge)[1256]: Merged extensions into '/usr'.
Dec 16 13:08:36.677770 systemd[1]: Reload requested from client PID 1231 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:08:36.677798 systemd[1]: Reloading...
Dec 16 13:08:36.759092 zram_generator::config[1283]: No configuration found.
Dec 16 13:08:36.938822 ldconfig[1226]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:08:36.991682 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:08:36.991866 systemd[1]: Reloading finished in 312 ms.
Dec 16 13:08:37.024757 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:08:37.097225 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:08:37.112355 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:08:37.115307 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:08:37.136077 systemd[1]: Reload requested from client PID 1321 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:08:37.136098 systemd[1]: Reloading...
Dec 16 13:08:37.198896 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:08:37.201230 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:08:37.202253 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:08:37.202754 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 13:08:37.207227 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 13:08:37.207877 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Dec 16 13:08:37.208172 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Dec 16 13:08:37.217938 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:08:37.218186 systemd-tmpfiles[1322]: Skipping /boot Dec 16 13:08:37.237939 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:08:37.237959 systemd-tmpfiles[1322]: Skipping /boot Dec 16 13:08:37.240127 zram_generator::config[1347]: No configuration found. Dec 16 13:08:37.428571 systemd[1]: Reloading finished in 292 ms. Dec 16 13:08:37.451100 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:08:37.472954 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:08:37.484095 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:08:37.487613 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:08:37.506828 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 13:08:37.513476 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:08:37.519993 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:08:37.527316 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:08:37.534856 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 16 13:08:37.535117 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:08:37.538170 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:08:37.546368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:08:37.553714 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:08:37.556235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:08:37.556379 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:08:37.566795 systemd-udevd[1395]: Using default interface naming scheme 'v255'. Dec 16 13:08:37.568834 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:08:37.571333 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:08:37.573223 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:08:37.576101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:08:37.576347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:08:37.579763 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:08:37.580156 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:08:37.583373 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:08:37.587531 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 16 13:08:37.590886 augenrules[1417]: No rules Dec 16 13:08:37.592414 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:08:37.592715 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:08:37.598359 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 13:08:37.600901 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:08:37.618643 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 13:08:37.629323 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:08:37.631619 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:08:37.635299 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:08:37.636909 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:08:37.640301 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:08:37.649597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:08:37.653289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:08:37.655681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:08:37.655821 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:08:37.658751 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:08:37.665279 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Dec 16 13:08:37.667070 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:08:37.667185 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:08:37.669006 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:08:37.673266 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:08:37.675756 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:08:37.678238 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:08:37.678468 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:08:37.681941 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:08:37.682184 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:08:37.684495 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:08:37.684774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:08:37.698170 augenrules[1453]: /sbin/augenrules: No change Dec 16 13:08:37.716544 augenrules[1483]: No rules Dec 16 13:08:37.728025 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:08:37.728325 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:08:37.732741 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:08:37.743640 systemd[1]: Finished ensure-sysext.service. Dec 16 13:08:37.752249 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 16 13:08:37.752322 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:08:37.758098 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 13:08:37.761075 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:08:37.839344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 13:08:37.845183 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 13:08:37.855077 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 13:08:37.860637 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:08:37.860699 kernel: ACPI: button: Power Button [PWRF] Dec 16 13:08:37.886365 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 13:08:37.918452 systemd-networkd[1461]: lo: Link UP Dec 16 13:08:37.918466 systemd-networkd[1461]: lo: Gained carrier Dec 16 13:08:37.920430 systemd-networkd[1461]: Enumeration completed Dec 16 13:08:37.920538 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:08:37.924530 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:08:37.924541 systemd-networkd[1461]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:08:37.925150 systemd-networkd[1461]: eth0: Link UP Dec 16 13:08:37.925381 systemd-networkd[1461]: eth0: Gained carrier Dec 16 13:08:37.925407 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:08:37.925908 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Dec 16 13:08:37.930802 systemd-resolved[1392]: Positive Trust Anchors: Dec 16 13:08:37.930821 systemd-resolved[1392]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:08:37.930851 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:08:37.932521 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:08:37.937074 systemd-resolved[1392]: Defaulting to hostname 'linux'. Dec 16 13:08:37.940315 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:08:37.942636 systemd-networkd[1461]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 13:08:37.942901 systemd[1]: Reached target network.target - Network. Dec 16 13:08:37.944489 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:08:37.954254 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 13:08:37.954611 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 13:08:37.977592 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 13:08:37.988489 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 13:08:37.990732 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:08:37.991674 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Dec 16 13:08:37.992454 systemd-timesyncd[1497]: Initial clock synchronization to Tue 2025-12-16 13:08:38.271162 UTC. Dec 16 13:08:37.992810 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 13:08:37.995200 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:08:37.997506 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 13:08:37.999717 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 13:08:38.002224 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:08:38.002266 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:08:38.004154 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:08:38.006337 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:08:38.008294 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:08:38.010404 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:08:38.013174 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:08:38.019598 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:08:38.029467 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:08:38.032101 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:08:38.034631 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:08:38.050270 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:08:38.052782 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Dec 16 13:08:38.055966 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:08:38.069294 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:08:38.073256 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:08:38.074972 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:08:38.075067 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:08:38.079428 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:08:38.084167 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:08:38.088370 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 13:08:38.093460 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:08:38.107226 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:08:38.109185 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:08:38.111365 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:08:38.115362 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 13:08:38.117542 jq[1536]: false Dec 16 13:08:38.135413 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:08:38.139005 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:08:38.144529 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Refreshing passwd entry cache Dec 16 13:08:38.142608 oslogin_cache_refresh[1538]: Refreshing passwd entry cache Dec 16 13:08:38.147532 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 16 13:08:38.191741 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Failure getting users, quitting Dec 16 13:08:38.191793 oslogin_cache_refresh[1538]: Failure getting users, quitting Dec 16 13:08:38.191914 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:08:38.191980 oslogin_cache_refresh[1538]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:08:38.192191 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Refreshing group entry cache Dec 16 13:08:38.192280 oslogin_cache_refresh[1538]: Refreshing group entry cache Dec 16 13:08:38.199950 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Failure getting groups, quitting Dec 16 13:08:38.199950 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:08:38.199937 oslogin_cache_refresh[1538]: Failure getting groups, quitting Dec 16 13:08:38.200060 extend-filesystems[1537]: Found /dev/vda6 Dec 16 13:08:38.199954 oslogin_cache_refresh[1538]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:08:38.207337 extend-filesystems[1537]: Found /dev/vda9 Dec 16 13:08:38.213068 extend-filesystems[1537]: Checking size of /dev/vda9 Dec 16 13:08:38.225733 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:08:38.229507 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Dec 16 13:08:38.234589 kernel: kvm_amd: TSC scaling supported Dec 16 13:08:38.234638 kernel: kvm_amd: Nested Virtualization enabled Dec 16 13:08:38.234659 kernel: kvm_amd: Nested Paging enabled Dec 16 13:08:38.234678 kernel: kvm_amd: LBR virtualization supported Dec 16 13:08:38.236843 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 16 13:08:38.236879 kernel: kvm_amd: Virtual GIF supported Dec 16 13:08:38.240082 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:08:38.242414 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:08:38.248645 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:08:38.259965 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:08:38.262739 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:08:38.263056 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:08:38.265063 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:08:38.265700 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:08:38.266357 jq[1560]: true Dec 16 13:08:38.268073 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:08:38.268933 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:08:38.272017 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:08:38.273335 extend-filesystems[1537]: Resized partition /dev/vda9 Dec 16 13:08:38.272316 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 16 13:08:38.334120 extend-filesystems[1568]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:08:38.345076 update_engine[1559]: I20251216 13:08:38.344975 1559 main.cc:92] Flatcar Update Engine starting Dec 16 13:08:38.353130 jq[1565]: true Dec 16 13:08:38.360479 (ntainerd)[1576]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 13:08:38.360782 systemd-logind[1558]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 13:08:38.360838 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:08:38.365196 systemd-logind[1558]: New seat seat0. Dec 16 13:08:38.396127 kernel: EDAC MC: Ver: 3.0.0 Dec 16 13:08:38.409550 dbus-daemon[1534]: [system] SELinux support is enabled Dec 16 13:08:38.411713 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:08:38.414985 update_engine[1559]: I20251216 13:08:38.414680 1559 update_check_scheduler.cc:74] Next update check in 11m51s Dec 16 13:08:38.444342 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:08:38.464135 tar[1564]: linux-amd64/LICENSE Dec 16 13:08:38.464467 tar[1564]: linux-amd64/helm Dec 16 13:08:38.474744 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 16 13:08:38.472538 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:08:38.472562 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:08:38.477203 dbus-daemon[1534]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 13:08:38.477715 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 13:08:38.479769 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:08:38.479854 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:08:38.482359 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:08:38.488357 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:08:38.591395 locksmithd[1598]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:08:38.602264 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:08:38.612119 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 16 13:08:38.871903 containerd[1576]: time="2025-12-16T13:08:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:08:38.734058 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:08:38.749779 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:08:38.792669 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:08:38.873230 containerd[1576]: time="2025-12-16T13:08:38.873005023Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:08:38.792953 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:08:38.801136 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:08:38.806055 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:08:38.829648 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:08:38.834035 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Dec 16 13:08:38.837454 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:08:38.839926 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:08:38.894121 containerd[1576]: time="2025-12-16T13:08:38.893459731Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.291µs" Dec 16 13:08:38.894121 containerd[1576]: time="2025-12-16T13:08:38.893533588Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:08:38.894121 containerd[1576]: time="2025-12-16T13:08:38.893572191Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:08:38.894121 containerd[1576]: time="2025-12-16T13:08:38.893860562Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:08:38.894121 containerd[1576]: time="2025-12-16T13:08:38.893888600Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:08:38.894121 containerd[1576]: time="2025-12-16T13:08:38.893939147Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:08:38.894121 containerd[1576]: time="2025-12-16T13:08:38.894038005Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:08:38.894121 containerd[1576]: time="2025-12-16T13:08:38.894060215Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:08:38.895447 containerd[1576]: time="2025-12-16T13:08:38.895407107Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:08:38.895494 containerd[1576]: time="2025-12-16T13:08:38.895464239Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:08:38.895515 containerd[1576]: time="2025-12-16T13:08:38.895487238Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:08:38.895515 containerd[1576]: time="2025-12-16T13:08:38.895500261Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:08:38.895741 containerd[1576]: time="2025-12-16T13:08:38.895691972Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:08:38.896043 containerd[1576]: time="2025-12-16T13:08:38.896003713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:08:38.896122 containerd[1576]: time="2025-12-16T13:08:38.896040969Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:08:38.896122 containerd[1576]: time="2025-12-16T13:08:38.896051659Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:08:38.896122 containerd[1576]: time="2025-12-16T13:08:38.896105599Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:08:38.896340 containerd[1576]: time="2025-12-16T13:08:38.896316358Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:08:38.896419 containerd[1576]: time="2025-12-16T13:08:38.896391284Z" level=info msg="metadata content store 
policy set" policy=shared Dec 16 13:08:38.923701 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 13:08:38.923701 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 13:08:38.923701 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 16 13:08:38.928813 extend-filesystems[1537]: Resized filesystem in /dev/vda9 Dec 16 13:08:38.926413 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:08:38.926770 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:08:38.950565 bash[1595]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:08:38.953010 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:08:38.956663 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 13:08:39.090558 containerd[1576]: time="2025-12-16T13:08:39.090455364Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:08:39.090724 containerd[1576]: time="2025-12-16T13:08:39.090593939Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:08:39.090724 containerd[1576]: time="2025-12-16T13:08:39.090617975Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:08:39.090724 containerd[1576]: time="2025-12-16T13:08:39.090632575Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:08:39.090724 containerd[1576]: time="2025-12-16T13:08:39.090653039Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:08:39.090724 containerd[1576]: time="2025-12-16T13:08:39.090667329Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 
13:08:39.090724 containerd[1576]: time="2025-12-16T13:08:39.090693948Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:08:39.090724 containerd[1576]: time="2025-12-16T13:08:39.090706947Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:08:39.090724 containerd[1576]: time="2025-12-16T13:08:39.090721928Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:08:39.090724 containerd[1576]: time="2025-12-16T13:08:39.090732967Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:08:39.090970 containerd[1576]: time="2025-12-16T13:08:39.090742869Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:08:39.090970 containerd[1576]: time="2025-12-16T13:08:39.090928535Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:08:39.091264 containerd[1576]: time="2025-12-16T13:08:39.091215966Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:08:39.091264 containerd[1576]: time="2025-12-16T13:08:39.091250753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:08:39.091345 containerd[1576]: time="2025-12-16T13:08:39.091269296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:08:39.091345 containerd[1576]: time="2025-12-16T13:08:39.091286550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:08:39.091345 containerd[1576]: time="2025-12-16T13:08:39.091297742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:08:39.091345 containerd[1576]: 
time="2025-12-16T13:08:39.091308284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:08:39.091345 containerd[1576]: time="2025-12-16T13:08:39.091337133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:08:39.091479 containerd[1576]: time="2025-12-16T13:08:39.091350390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:08:39.091479 containerd[1576]: time="2025-12-16T13:08:39.091364690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:08:39.091479 containerd[1576]: time="2025-12-16T13:08:39.091376894Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:08:39.091479 containerd[1576]: time="2025-12-16T13:08:39.091386910Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:08:39.091479 containerd[1576]: time="2025-12-16T13:08:39.091471627Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:08:39.091572 containerd[1576]: time="2025-12-16T13:08:39.091487477Z" level=info msg="Start snapshots syncer" Dec 16 13:08:39.091572 containerd[1576]: time="2025-12-16T13:08:39.091523440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:08:39.091998 containerd[1576]: time="2025-12-16T13:08:39.091928350Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:08:39.092369 containerd[1576]: time="2025-12-16T13:08:39.092024591Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:08:39.094334 containerd[1576]: time="2025-12-16T13:08:39.094267483Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:08:39.094570 containerd[1576]: time="2025-12-16T13:08:39.094515060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:08:39.094570 containerd[1576]: time="2025-12-16T13:08:39.094543340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:08:39.094570 containerd[1576]: time="2025-12-16T13:08:39.094568327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094586056Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094608668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094622277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094635647Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094677547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094690752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094701491Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094743277Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094759343Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094768677Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094777918Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094786674Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094796534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:08:39.094945 containerd[1576]: time="2025-12-16T13:08:39.094815336Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:08:39.095391 containerd[1576]: time="2025-12-16T13:08:39.094837907Z" level=info msg="runtime interface created" Dec 16 13:08:39.095391 containerd[1576]: time="2025-12-16T13:08:39.094843792Z" level=info msg="created NRI interface" Dec 16 13:08:39.095391 containerd[1576]: time="2025-12-16T13:08:39.094852269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:08:39.095391 containerd[1576]: time="2025-12-16T13:08:39.094868252Z" level=info msg="Connect containerd service" Dec 16 13:08:39.095391 containerd[1576]: time="2025-12-16T13:08:39.094897782Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:08:39.096290 
containerd[1576]: time="2025-12-16T13:08:39.096264699Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:08:39.239531 tar[1564]: linux-amd64/README.md Dec 16 13:08:39.258184 containerd[1576]: time="2025-12-16T13:08:39.258139772Z" level=info msg="Start subscribing containerd event" Dec 16 13:08:39.258438 containerd[1576]: time="2025-12-16T13:08:39.258205377Z" level=info msg="Start recovering state" Dec 16 13:08:39.258438 containerd[1576]: time="2025-12-16T13:08:39.258379077Z" level=info msg="Start event monitor" Dec 16 13:08:39.258438 containerd[1576]: time="2025-12-16T13:08:39.258393698Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:08:39.258438 containerd[1576]: time="2025-12-16T13:08:39.258406790Z" level=info msg="Start streaming server" Dec 16 13:08:39.258438 containerd[1576]: time="2025-12-16T13:08:39.258417394Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:08:39.258438 containerd[1576]: time="2025-12-16T13:08:39.258425447Z" level=info msg="runtime interface starting up..." Dec 16 13:08:39.258438 containerd[1576]: time="2025-12-16T13:08:39.258430187Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:08:39.258589 containerd[1576]: time="2025-12-16T13:08:39.258521245Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:08:39.258589 containerd[1576]: time="2025-12-16T13:08:39.258431993Z" level=info msg="starting plugins..." Dec 16 13:08:39.258589 containerd[1576]: time="2025-12-16T13:08:39.258563309Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:08:39.258850 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 16 13:08:39.264102 containerd[1576]: time="2025-12-16T13:08:39.263163753Z" level=info msg="containerd successfully booted in 0.442474s" Dec 16 13:08:39.292226 systemd-networkd[1461]: eth0: Gained IPv6LL Dec 16 13:08:39.293369 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:08:39.296929 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:08:39.303896 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:08:39.308233 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 13:08:39.313096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:08:39.325725 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:08:39.372888 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 13:08:39.373544 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 13:08:39.377427 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:08:39.381638 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:08:40.518496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:08:40.534259 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:08:40.536436 systemd[1]: Startup finished in 3.303s (kernel) + 6.449s (initrd) + 5.467s (userspace) = 15.220s. 
Dec 16 13:08:40.541533 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:08:41.197684 kubelet[1676]: E1216 13:08:41.197603 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:08:41.202368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:08:41.202583 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:08:41.203047 systemd[1]: kubelet.service: Consumed 1.731s CPU time, 260.7M memory peak. Dec 16 13:08:42.732761 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:08:42.734612 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:42474.service - OpenSSH per-connection server daemon (10.0.0.1:42474). Dec 16 13:08:42.819390 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 42474 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg Dec 16 13:08:42.821849 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:42.829692 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:08:42.830890 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:08:42.838474 systemd-logind[1558]: New session 1 of user core. Dec 16 13:08:42.856721 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:08:42.860356 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 16 13:08:42.878113 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:08:42.881243 systemd-logind[1558]: New session c1 of user core. Dec 16 13:08:43.071696 systemd[1695]: Queued start job for default target default.target. Dec 16 13:08:43.080926 systemd[1695]: Created slice app.slice - User Application Slice. Dec 16 13:08:43.080959 systemd[1695]: Reached target paths.target - Paths. Dec 16 13:08:43.081005 systemd[1695]: Reached target timers.target - Timers. Dec 16 13:08:43.082921 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:08:43.096469 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:08:43.096644 systemd[1695]: Reached target sockets.target - Sockets. Dec 16 13:08:43.096698 systemd[1695]: Reached target basic.target - Basic System. Dec 16 13:08:43.096753 systemd[1695]: Reached target default.target - Main User Target. Dec 16 13:08:43.096812 systemd[1695]: Startup finished in 205ms. Dec 16 13:08:43.097359 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:08:43.099252 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:08:43.162749 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:42484.service - OpenSSH per-connection server daemon (10.0.0.1:42484). Dec 16 13:08:43.221515 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 42484 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg Dec 16 13:08:43.223192 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:43.227556 systemd-logind[1558]: New session 2 of user core. Dec 16 13:08:43.237219 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 16 13:08:43.293263 sshd[1709]: Connection closed by 10.0.0.1 port 42484 Dec 16 13:08:43.293771 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:43.301687 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:42484.service: Deactivated successfully. Dec 16 13:08:43.303632 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 13:08:43.304459 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit. Dec 16 13:08:43.307733 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:42490.service - OpenSSH per-connection server daemon (10.0.0.1:42490). Dec 16 13:08:43.308558 systemd-logind[1558]: Removed session 2. Dec 16 13:08:43.363754 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 42490 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg Dec 16 13:08:43.365807 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:43.372528 systemd-logind[1558]: New session 3 of user core. Dec 16 13:08:43.385375 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:08:43.437729 sshd[1718]: Connection closed by 10.0.0.1 port 42490 Dec 16 13:08:43.438136 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:43.446981 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:42490.service: Deactivated successfully. Dec 16 13:08:43.449048 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 13:08:43.450006 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit. Dec 16 13:08:43.452800 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:42496.service - OpenSSH per-connection server daemon (10.0.0.1:42496). Dec 16 13:08:43.453663 systemd-logind[1558]: Removed session 3. 
Dec 16 13:08:43.514130 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 42496 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg Dec 16 13:08:43.515934 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:43.521171 systemd-logind[1558]: New session 4 of user core. Dec 16 13:08:43.531437 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:08:43.590586 sshd[1727]: Connection closed by 10.0.0.1 port 42496 Dec 16 13:08:43.591020 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:43.614487 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:42496.service: Deactivated successfully. Dec 16 13:08:43.616925 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:08:43.617862 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:08:43.620829 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:42502.service - OpenSSH per-connection server daemon (10.0.0.1:42502). Dec 16 13:08:43.621661 systemd-logind[1558]: Removed session 4. Dec 16 13:08:43.716298 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 42502 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg Dec 16 13:08:43.718176 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:43.722842 systemd-logind[1558]: New session 5 of user core. Dec 16 13:08:43.736439 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 13:08:43.805241 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:08:43.805709 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:08:43.828409 sudo[1737]: pam_unix(sudo:session): session closed for user root Dec 16 13:08:43.830777 sshd[1736]: Connection closed by 10.0.0.1 port 42502 Dec 16 13:08:43.831438 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:43.846506 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:42502.service: Deactivated successfully. Dec 16 13:08:43.848681 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:08:43.850911 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:08:43.855019 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:42512.service - OpenSSH per-connection server daemon (10.0.0.1:42512). Dec 16 13:08:43.855812 systemd-logind[1558]: Removed session 5. Dec 16 13:08:43.921409 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 42512 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg Dec 16 13:08:43.925918 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:43.943708 systemd-logind[1558]: New session 6 of user core. Dec 16 13:08:43.968405 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 16 13:08:44.028125 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:08:44.028578 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:08:44.247216 sudo[1748]: pam_unix(sudo:session): session closed for user root Dec 16 13:08:44.256094 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:08:44.256514 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:08:44.270496 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:08:44.334785 augenrules[1770]: No rules Dec 16 13:08:44.336729 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:08:44.337048 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:08:44.338511 sudo[1747]: pam_unix(sudo:session): session closed for user root Dec 16 13:08:44.340256 sshd[1746]: Connection closed by 10.0.0.1 port 42512 Dec 16 13:08:44.340660 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:44.350021 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:42512.service: Deactivated successfully. Dec 16 13:08:44.352191 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:08:44.353037 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:08:44.356325 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:42520.service - OpenSSH per-connection server daemon (10.0.0.1:42520). Dec 16 13:08:44.357048 systemd-logind[1558]: Removed session 6. Dec 16 13:08:44.416634 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 42520 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg Dec 16 13:08:44.418265 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:44.424965 systemd-logind[1558]: New session 7 of user core. 
Dec 16 13:08:44.433281 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:08:44.490411 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:08:44.490748 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:08:45.145409 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 13:08:45.175701 (dockerd)[1804]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:08:45.627398 dockerd[1804]: time="2025-12-16T13:08:45.627234729Z" level=info msg="Starting up" Dec 16 13:08:45.628204 dockerd[1804]: time="2025-12-16T13:08:45.628168528Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:08:45.670732 dockerd[1804]: time="2025-12-16T13:08:45.670619675Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:08:46.448716 dockerd[1804]: time="2025-12-16T13:08:46.448631179Z" level=info msg="Loading containers: start." Dec 16 13:08:46.731104 kernel: Initializing XFRM netlink socket Dec 16 13:08:47.045368 systemd-networkd[1461]: docker0: Link UP Dec 16 13:08:47.051147 dockerd[1804]: time="2025-12-16T13:08:47.051092912Z" level=info msg="Loading containers: done." 
Dec 16 13:08:47.069567 dockerd[1804]: time="2025-12-16T13:08:47.069489082Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:08:47.069739 dockerd[1804]: time="2025-12-16T13:08:47.069611816Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:08:47.069768 dockerd[1804]: time="2025-12-16T13:08:47.069743206Z" level=info msg="Initializing buildkit" Dec 16 13:08:47.104445 dockerd[1804]: time="2025-12-16T13:08:47.104381186Z" level=info msg="Completed buildkit initialization" Dec 16 13:08:47.110564 dockerd[1804]: time="2025-12-16T13:08:47.110485739Z" level=info msg="Daemon has completed initialization" Dec 16 13:08:47.110706 dockerd[1804]: time="2025-12-16T13:08:47.110612127Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:08:47.110851 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:08:47.855103 containerd[1576]: time="2025-12-16T13:08:47.855030278Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 13:08:48.851271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175091804.mount: Deactivated successfully. 
Dec 16 13:08:51.068624 containerd[1576]: time="2025-12-16T13:08:51.068536674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:51.069735 containerd[1576]: time="2025-12-16T13:08:51.069698098Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Dec 16 13:08:51.071378 containerd[1576]: time="2025-12-16T13:08:51.071345744Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:51.076163 containerd[1576]: time="2025-12-16T13:08:51.076121267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:51.078252 containerd[1576]: time="2025-12-16T13:08:51.078135249Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 3.223034619s" Dec 16 13:08:51.078307 containerd[1576]: time="2025-12-16T13:08:51.078269450Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 16 13:08:51.079081 containerd[1576]: time="2025-12-16T13:08:51.078911850Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 13:08:51.453412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Dec 16 13:08:51.455700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:08:51.703117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:08:51.708861 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:08:51.846907 kubelet[2085]: E1216 13:08:51.846792 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:08:51.855417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:08:51.855724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:08:51.856282 systemd[1]: kubelet.service: Consumed 366ms CPU time, 110.7M memory peak. 
Dec 16 13:08:52.543971 containerd[1576]: time="2025-12-16T13:08:52.543904879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:52.544756 containerd[1576]: time="2025-12-16T13:08:52.544692208Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Dec 16 13:08:52.545989 containerd[1576]: time="2025-12-16T13:08:52.545929830Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:52.548950 containerd[1576]: time="2025-12-16T13:08:52.548916741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:52.549906 containerd[1576]: time="2025-12-16T13:08:52.549877271Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.470938644s" Dec 16 13:08:52.549973 containerd[1576]: time="2025-12-16T13:08:52.549910019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 16 13:08:52.550482 containerd[1576]: time="2025-12-16T13:08:52.550450906Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 13:08:54.293393 containerd[1576]: time="2025-12-16T13:08:54.293324610Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:54.294201 containerd[1576]: time="2025-12-16T13:08:54.294169982Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Dec 16 13:08:54.295603 containerd[1576]: time="2025-12-16T13:08:54.295541811Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:54.298110 containerd[1576]: time="2025-12-16T13:08:54.298074740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:54.299321 containerd[1576]: time="2025-12-16T13:08:54.299283906Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.748803528s" Dec 16 13:08:54.299369 containerd[1576]: time="2025-12-16T13:08:54.299317155Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 16 13:08:54.299950 containerd[1576]: time="2025-12-16T13:08:54.299885864Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 13:08:55.485722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894870353.mount: Deactivated successfully. 
Dec 16 13:08:55.831159 containerd[1576]: time="2025-12-16T13:08:55.830985487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:55.832194 containerd[1576]: time="2025-12-16T13:08:55.832152113Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Dec 16 13:08:55.833433 containerd[1576]: time="2025-12-16T13:08:55.833381885Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:55.835306 containerd[1576]: time="2025-12-16T13:08:55.835270667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:55.835878 containerd[1576]: time="2025-12-16T13:08:55.835832428Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.535880827s" Dec 16 13:08:55.835911 containerd[1576]: time="2025-12-16T13:08:55.835876579Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 16 13:08:55.836509 containerd[1576]: time="2025-12-16T13:08:55.836457766Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 13:08:56.446413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2405755935.mount: Deactivated successfully. 
Dec 16 13:08:57.895567 containerd[1576]: time="2025-12-16T13:08:57.895490962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:57.896426 containerd[1576]: time="2025-12-16T13:08:57.896375786Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Dec 16 13:08:57.897819 containerd[1576]: time="2025-12-16T13:08:57.897759226Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:57.900671 containerd[1576]: time="2025-12-16T13:08:57.900613521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:57.901848 containerd[1576]: time="2025-12-16T13:08:57.901796856Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.065297261s" Dec 16 13:08:57.901848 containerd[1576]: time="2025-12-16T13:08:57.901841061Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 16 13:08:57.902550 containerd[1576]: time="2025-12-16T13:08:57.902434446Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 13:08:58.425784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365212035.mount: Deactivated successfully. 
Dec 16 13:08:58.435506 containerd[1576]: time="2025-12-16T13:08:58.435460380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:58.442091 containerd[1576]: time="2025-12-16T13:08:58.442026485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Dec 16 13:08:58.483719 containerd[1576]: time="2025-12-16T13:08:58.483670380Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:58.486506 containerd[1576]: time="2025-12-16T13:08:58.486455982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:58.487122 containerd[1576]: time="2025-12-16T13:08:58.487040616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 584.571524ms" Dec 16 13:08:58.487190 containerd[1576]: time="2025-12-16T13:08:58.487123772Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 16 13:08:58.487758 containerd[1576]: time="2025-12-16T13:08:58.487709893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 13:08:59.229118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2449589793.mount: Deactivated successfully. 
Dec 16 13:09:01.973579 containerd[1576]: time="2025-12-16T13:09:01.973489278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:01.974278 containerd[1576]: time="2025-12-16T13:09:01.974246762Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Dec 16 13:09:01.975723 containerd[1576]: time="2025-12-16T13:09:01.975666406Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:01.979936 containerd[1576]: time="2025-12-16T13:09:01.979874240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:01.981246 containerd[1576]: time="2025-12-16T13:09:01.981188175Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.493440589s" Dec 16 13:09:01.981246 containerd[1576]: time="2025-12-16T13:09:01.981233113Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 16 13:09:02.106317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:09:02.108399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:02.379708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:09:02.397723 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:09:02.440967 kubelet[2254]: E1216 13:09:02.440889 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:09:02.445291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:09:02.445505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:09:02.445941 systemd[1]: kubelet.service: Consumed 237ms CPU time, 110.4M memory peak. Dec 16 13:09:04.813979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:09:04.814179 systemd[1]: kubelet.service: Consumed 237ms CPU time, 110.4M memory peak. Dec 16 13:09:04.816438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:04.844296 systemd[1]: Reload requested from client PID 2271 ('systemctl') (unit session-7.scope)... Dec 16 13:09:04.844312 systemd[1]: Reloading... Dec 16 13:09:04.943093 zram_generator::config[2314]: No configuration found. Dec 16 13:09:05.309095 systemd[1]: Reloading finished in 464 ms. Dec 16 13:09:05.372900 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:09:05.373004 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:09:05.373344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:09:05.373391 systemd[1]: kubelet.service: Consumed 159ms CPU time, 98.1M memory peak. Dec 16 13:09:05.375222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:09:05.590812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:09:05.609663 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:09:05.654859 kubelet[2362]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:09:05.654859 kubelet[2362]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:09:05.655334 kubelet[2362]: I1216 13:09:05.654915 2362 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:09:06.334284 kubelet[2362]: I1216 13:09:06.334167 2362 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:09:06.334284 kubelet[2362]: I1216 13:09:06.334208 2362 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:09:06.338252 kubelet[2362]: I1216 13:09:06.338014 2362 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:09:06.338252 kubelet[2362]: I1216 13:09:06.338251 2362 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:09:06.338613 kubelet[2362]: I1216 13:09:06.338587 2362 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:09:06.802606 kubelet[2362]: E1216 13:09:06.802554 2362 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:09:06.802606 kubelet[2362]: I1216 13:09:06.802603 2362 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:09:06.807117 kubelet[2362]: I1216 13:09:06.807100 2362 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:09:06.812415 kubelet[2362]: I1216 13:09:06.812392 2362 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:09:06.814443 kubelet[2362]: I1216 13:09:06.814393 2362 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:09:06.814602 kubelet[2362]: I1216 13:09:06.814421 2362 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:09:06.814718 kubelet[2362]: I1216 13:09:06.814612 2362 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:09:06.814718 
kubelet[2362]: I1216 13:09:06.814623 2362 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:09:06.814774 kubelet[2362]: I1216 13:09:06.814731 2362 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:09:06.818220 kubelet[2362]: I1216 13:09:06.818183 2362 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:09:06.818431 kubelet[2362]: I1216 13:09:06.818400 2362 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:09:06.818431 kubelet[2362]: I1216 13:09:06.818427 2362 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:09:06.818497 kubelet[2362]: I1216 13:09:06.818451 2362 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:09:06.818497 kubelet[2362]: I1216 13:09:06.818475 2362 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:09:06.819154 kubelet[2362]: E1216 13:09:06.819085 2362 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:09:06.819325 kubelet[2362]: E1216 13:09:06.819164 2362 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:09:06.823913 kubelet[2362]: I1216 13:09:06.823444 2362 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:09:06.824072 kubelet[2362]: I1216 13:09:06.824034 2362 kubelet.go:940] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:09:06.824106 kubelet[2362]: I1216 13:09:06.824094 2362 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:09:06.824189 kubelet[2362]: W1216 13:09:06.824158 2362 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:09:06.828457 kubelet[2362]: I1216 13:09:06.828420 2362 server.go:1262] "Started kubelet" Dec 16 13:09:06.828657 kubelet[2362]: I1216 13:09:06.828605 2362 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:09:06.828824 kubelet[2362]: I1216 13:09:06.828789 2362 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:09:06.829366 kubelet[2362]: I1216 13:09:06.829345 2362 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:09:06.830114 kubelet[2362]: I1216 13:09:06.829438 2362 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:09:06.830114 kubelet[2362]: I1216 13:09:06.828812 2362 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:09:06.830564 kubelet[2362]: I1216 13:09:06.830534 2362 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:09:06.832456 kubelet[2362]: I1216 13:09:06.831559 2362 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:09:06.833868 kubelet[2362]: E1216 13:09:06.833757 2362 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:09:06.833868 kubelet[2362]: I1216 13:09:06.833793 2362 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:09:06.833970 
kubelet[2362]: I1216 13:09:06.833947 2362 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:09:06.834041 kubelet[2362]: E1216 13:09:06.832697 2362 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b418a3ed1142 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 13:09:06.82838253 +0000 UTC m=+1.211532659,LastTimestamp:2025-12-16 13:09:06.82838253 +0000 UTC m=+1.211532659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 13:09:06.834041 kubelet[2362]: I1216 13:09:06.834019 2362 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:09:06.834455 kubelet[2362]: E1216 13:09:06.834431 2362 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:09:06.834706 kubelet[2362]: E1216 13:09:06.834662 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Dec 16 13:09:06.834752 kubelet[2362]: I1216 13:09:06.834726 2362 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:09:06.834818 
kubelet[2362]: E1216 13:09:06.834780 2362 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:09:06.834905 kubelet[2362]: I1216 13:09:06.834880 2362 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:09:06.836079 kubelet[2362]: I1216 13:09:06.835903 2362 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:09:06.848976 kubelet[2362]: I1216 13:09:06.848950 2362 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:09:06.848976 kubelet[2362]: I1216 13:09:06.848970 2362 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:09:06.849090 kubelet[2362]: I1216 13:09:06.848995 2362 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:09:06.850250 kubelet[2362]: I1216 13:09:06.850210 2362 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:09:06.851886 kubelet[2362]: I1216 13:09:06.851848 2362 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 13:09:06.851886 kubelet[2362]: I1216 13:09:06.851875 2362 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:09:06.851979 kubelet[2362]: I1216 13:09:06.851896 2362 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:09:06.851979 kubelet[2362]: E1216 13:09:06.851934 2362 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:09:06.852504 kubelet[2362]: I1216 13:09:06.852291 2362 policy_none.go:49] "None policy: Start" Dec 16 13:09:06.852504 kubelet[2362]: I1216 13:09:06.852313 2362 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:09:06.852504 kubelet[2362]: I1216 13:09:06.852327 2362 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:09:06.853721 kubelet[2362]: I1216 13:09:06.853697 2362 policy_none.go:47] "Start" Dec 16 13:09:06.856490 kubelet[2362]: E1216 13:09:06.856425 2362 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:09:06.860337 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:09:06.871023 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:09:06.874154 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 13:09:06.894010 kubelet[2362]: E1216 13:09:06.893921 2362 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:09:06.894943 kubelet[2362]: I1216 13:09:06.894196 2362 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:09:06.894943 kubelet[2362]: I1216 13:09:06.894221 2362 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:09:06.894943 kubelet[2362]: I1216 13:09:06.894441 2362 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:09:06.895393 kubelet[2362]: E1216 13:09:06.895359 2362 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:09:06.895393 kubelet[2362]: E1216 13:09:06.895401 2362 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 13:09:06.966136 systemd[1]: Created slice kubepods-burstable-pod9b773c0a6054306257f0b21eea0de4dc.slice - libcontainer container kubepods-burstable-pod9b773c0a6054306257f0b21eea0de4dc.slice. Dec 16 13:09:06.977185 kubelet[2362]: E1216 13:09:06.977146 2362 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:09:06.980163 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. 
Dec 16 13:09:06.982475 kubelet[2362]: E1216 13:09:06.982449 2362 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:09:06.985316 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Dec 16 13:09:06.987086 kubelet[2362]: E1216 13:09:06.987038 2362 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:09:06.996100 kubelet[2362]: I1216 13:09:06.996028 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:09:06.996444 kubelet[2362]: E1216 13:09:06.996414 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Dec 16 13:09:07.035447 kubelet[2362]: E1216 13:09:07.035389 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Dec 16 13:09:07.136050 kubelet[2362]: I1216 13:09:07.135871 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 16 13:09:07.136050 kubelet[2362]: I1216 13:09:07.135911 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/9b773c0a6054306257f0b21eea0de4dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9b773c0a6054306257f0b21eea0de4dc\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:09:07.136050 kubelet[2362]: I1216 13:09:07.135932 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:07.136050 kubelet[2362]: I1216 13:09:07.135950 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:07.136050 kubelet[2362]: I1216 13:09:07.135967 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b773c0a6054306257f0b21eea0de4dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b773c0a6054306257f0b21eea0de4dc\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:09:07.136401 kubelet[2362]: I1216 13:09:07.135982 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b773c0a6054306257f0b21eea0de4dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b773c0a6054306257f0b21eea0de4dc\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:09:07.136401 kubelet[2362]: I1216 13:09:07.135996 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:07.136401 kubelet[2362]: I1216 13:09:07.136012 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:07.136401 kubelet[2362]: I1216 13:09:07.136027 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:07.198767 kubelet[2362]: I1216 13:09:07.198714 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:09:07.199233 kubelet[2362]: E1216 13:09:07.199191 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Dec 16 13:09:07.282237 containerd[1576]: time="2025-12-16T13:09:07.282171438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9b773c0a6054306257f0b21eea0de4dc,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:07.286425 containerd[1576]: time="2025-12-16T13:09:07.286381796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:07.290663 containerd[1576]: time="2025-12-16T13:09:07.290618844Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:07.436444 kubelet[2362]: E1216 13:09:07.436399 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" Dec 16 13:09:07.601044 kubelet[2362]: I1216 13:09:07.600990 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:09:07.601441 kubelet[2362]: E1216 13:09:07.601406 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Dec 16 13:09:07.768252 kubelet[2362]: E1216 13:09:07.768118 2362 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:09:07.853937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560111156.mount: Deactivated successfully. 
Dec 16 13:09:07.865514 containerd[1576]: time="2025-12-16T13:09:07.865411652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:09:07.867006 containerd[1576]: time="2025-12-16T13:09:07.866927394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 16 13:09:07.870230 containerd[1576]: time="2025-12-16T13:09:07.870172171Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:09:07.872919 containerd[1576]: time="2025-12-16T13:09:07.872832950Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:09:07.874258 containerd[1576]: time="2025-12-16T13:09:07.874181957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:09:07.875322 containerd[1576]: time="2025-12-16T13:09:07.875280183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:09:07.876326 containerd[1576]: time="2025-12-16T13:09:07.876290069Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 592.177233ms" Dec 16 13:09:07.876746 containerd[1576]: 
time="2025-12-16T13:09:07.876698221Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:09:07.878021 containerd[1576]: time="2025-12-16T13:09:07.877894371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:09:07.880540 containerd[1576]: time="2025-12-16T13:09:07.880513161Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 592.853011ms" Dec 16 13:09:07.882932 containerd[1576]: time="2025-12-16T13:09:07.882897401Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 590.65328ms" Dec 16 13:09:07.914160 containerd[1576]: time="2025-12-16T13:09:07.914099395Z" level=info msg="connecting to shim f0e560160d0478ada543a680c1bdd5749ad640cbb567c2eb126f1c4f89246ee7" address="unix:///run/containerd/s/69936fb1ef9b8a48d88bec04adab0b7fb41245d595f4a5abbc19f14980964881" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:07.915918 containerd[1576]: time="2025-12-16T13:09:07.915879975Z" level=info msg="connecting to shim 97977c609467f2c5bc2f99505ad94cb9f7d81476e2440bcadf031945111e3066" address="unix:///run/containerd/s/6471867291a96d95b87d81ab5cc97c6e360521f39556a736e59712e57d5f510b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:07.927311 containerd[1576]: time="2025-12-16T13:09:07.927249510Z" level=info msg="connecting to shim 
dd40d6b167e48e1e3f9a89bd3623d49c8ccfcd2d7f948e7db1a7099c0da4244b" address="unix:///run/containerd/s/3f64b3833c03c6c27c441fca7c3f94c6521c5ba09cd3888105382b55420cd64f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:07.945298 systemd[1]: Started cri-containerd-f0e560160d0478ada543a680c1bdd5749ad640cbb567c2eb126f1c4f89246ee7.scope - libcontainer container f0e560160d0478ada543a680c1bdd5749ad640cbb567c2eb126f1c4f89246ee7. Dec 16 13:09:07.950221 systemd[1]: Started cri-containerd-97977c609467f2c5bc2f99505ad94cb9f7d81476e2440bcadf031945111e3066.scope - libcontainer container 97977c609467f2c5bc2f99505ad94cb9f7d81476e2440bcadf031945111e3066. Dec 16 13:09:07.970221 systemd[1]: Started cri-containerd-dd40d6b167e48e1e3f9a89bd3623d49c8ccfcd2d7f948e7db1a7099c0da4244b.scope - libcontainer container dd40d6b167e48e1e3f9a89bd3623d49c8ccfcd2d7f948e7db1a7099c0da4244b. Dec 16 13:09:08.018720 containerd[1576]: time="2025-12-16T13:09:08.018404480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"97977c609467f2c5bc2f99505ad94cb9f7d81476e2440bcadf031945111e3066\"" Dec 16 13:09:08.028296 containerd[1576]: time="2025-12-16T13:09:08.028233678Z" level=info msg="CreateContainer within sandbox \"97977c609467f2c5bc2f99505ad94cb9f7d81476e2440bcadf031945111e3066\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:09:08.028653 containerd[1576]: time="2025-12-16T13:09:08.028626133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9b773c0a6054306257f0b21eea0de4dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0e560160d0478ada543a680c1bdd5749ad640cbb567c2eb126f1c4f89246ee7\"" Dec 16 13:09:08.034910 containerd[1576]: time="2025-12-16T13:09:08.034871247Z" level=info msg="CreateContainer within sandbox \"f0e560160d0478ada543a680c1bdd5749ad640cbb567c2eb126f1c4f89246ee7\" for 
container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:09:08.038831 containerd[1576]: time="2025-12-16T13:09:08.038767314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd40d6b167e48e1e3f9a89bd3623d49c8ccfcd2d7f948e7db1a7099c0da4244b\"" Dec 16 13:09:08.044000 containerd[1576]: time="2025-12-16T13:09:08.043960247Z" level=info msg="CreateContainer within sandbox \"dd40d6b167e48e1e3f9a89bd3623d49c8ccfcd2d7f948e7db1a7099c0da4244b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:09:08.046833 containerd[1576]: time="2025-12-16T13:09:08.046802581Z" level=info msg="Container bf42d3ebdcb72cc40b400df284c1abeee6a17002b4cc3339cf76e159e4601ee4: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:08.058090 containerd[1576]: time="2025-12-16T13:09:08.057750637Z" level=info msg="Container ecb27c0ef30bd9dd548d3631e3d274e622426766fd245409c2a716814129542a: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:08.058362 containerd[1576]: time="2025-12-16T13:09:08.058306361Z" level=info msg="CreateContainer within sandbox \"97977c609467f2c5bc2f99505ad94cb9f7d81476e2440bcadf031945111e3066\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf42d3ebdcb72cc40b400df284c1abeee6a17002b4cc3339cf76e159e4601ee4\"" Dec 16 13:09:08.059359 containerd[1576]: time="2025-12-16T13:09:08.059321228Z" level=info msg="StartContainer for \"bf42d3ebdcb72cc40b400df284c1abeee6a17002b4cc3339cf76e159e4601ee4\"" Dec 16 13:09:08.060783 containerd[1576]: time="2025-12-16T13:09:08.060756210Z" level=info msg="connecting to shim bf42d3ebdcb72cc40b400df284c1abeee6a17002b4cc3339cf76e159e4601ee4" address="unix:///run/containerd/s/6471867291a96d95b87d81ab5cc97c6e360521f39556a736e59712e57d5f510b" protocol=ttrpc version=3 Dec 16 13:09:08.066796 containerd[1576]: time="2025-12-16T13:09:08.066762123Z" 
level=info msg="Container a998664230a56bf0038f5c03620e6617823f453335513797997e0dfeffd5ac1e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:08.075331 containerd[1576]: time="2025-12-16T13:09:08.075283078Z" level=info msg="CreateContainer within sandbox \"f0e560160d0478ada543a680c1bdd5749ad640cbb567c2eb126f1c4f89246ee7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ecb27c0ef30bd9dd548d3631e3d274e622426766fd245409c2a716814129542a\"" Dec 16 13:09:08.075950 containerd[1576]: time="2025-12-16T13:09:08.075919275Z" level=info msg="StartContainer for \"ecb27c0ef30bd9dd548d3631e3d274e622426766fd245409c2a716814129542a\"" Dec 16 13:09:08.077289 containerd[1576]: time="2025-12-16T13:09:08.077263729Z" level=info msg="connecting to shim ecb27c0ef30bd9dd548d3631e3d274e622426766fd245409c2a716814129542a" address="unix:///run/containerd/s/69936fb1ef9b8a48d88bec04adab0b7fb41245d595f4a5abbc19f14980964881" protocol=ttrpc version=3 Dec 16 13:09:08.081874 containerd[1576]: time="2025-12-16T13:09:08.081819553Z" level=info msg="CreateContainer within sandbox \"dd40d6b167e48e1e3f9a89bd3623d49c8ccfcd2d7f948e7db1a7099c0da4244b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a998664230a56bf0038f5c03620e6617823f453335513797997e0dfeffd5ac1e\"" Dec 16 13:09:08.082495 containerd[1576]: time="2025-12-16T13:09:08.082462306Z" level=info msg="StartContainer for \"a998664230a56bf0038f5c03620e6617823f453335513797997e0dfeffd5ac1e\"" Dec 16 13:09:08.083329 systemd[1]: Started cri-containerd-bf42d3ebdcb72cc40b400df284c1abeee6a17002b4cc3339cf76e159e4601ee4.scope - libcontainer container bf42d3ebdcb72cc40b400df284c1abeee6a17002b4cc3339cf76e159e4601ee4. 
Dec 16 13:09:08.085985 containerd[1576]: time="2025-12-16T13:09:08.085954761Z" level=info msg="connecting to shim a998664230a56bf0038f5c03620e6617823f453335513797997e0dfeffd5ac1e" address="unix:///run/containerd/s/3f64b3833c03c6c27c441fca7c3f94c6521c5ba09cd3888105382b55420cd64f" protocol=ttrpc version=3 Dec 16 13:09:08.101229 systemd[1]: Started cri-containerd-ecb27c0ef30bd9dd548d3631e3d274e622426766fd245409c2a716814129542a.scope - libcontainer container ecb27c0ef30bd9dd548d3631e3d274e622426766fd245409c2a716814129542a. Dec 16 13:09:08.105574 systemd[1]: Started cri-containerd-a998664230a56bf0038f5c03620e6617823f453335513797997e0dfeffd5ac1e.scope - libcontainer container a998664230a56bf0038f5c03620e6617823f453335513797997e0dfeffd5ac1e. Dec 16 13:09:08.158625 containerd[1576]: time="2025-12-16T13:09:08.158570204Z" level=info msg="StartContainer for \"bf42d3ebdcb72cc40b400df284c1abeee6a17002b4cc3339cf76e159e4601ee4\" returns successfully" Dec 16 13:09:08.173719 kubelet[2362]: E1216 13:09:08.173669 2362 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:09:08.176080 containerd[1576]: time="2025-12-16T13:09:08.174824407Z" level=info msg="StartContainer for \"ecb27c0ef30bd9dd548d3631e3d274e622426766fd245409c2a716814129542a\" returns successfully" Dec 16 13:09:08.190642 containerd[1576]: time="2025-12-16T13:09:08.190583317Z" level=info msg="StartContainer for \"a998664230a56bf0038f5c03620e6617823f453335513797997e0dfeffd5ac1e\" returns successfully" Dec 16 13:09:08.205476 kubelet[2362]: E1216 13:09:08.205415 2362 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": 
dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:09:08.237307 kubelet[2362]: E1216 13:09:08.237252 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s" Dec 16 13:09:08.403825 kubelet[2362]: I1216 13:09:08.403788 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:09:08.874687 kubelet[2362]: E1216 13:09:08.874576 2362 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:09:08.877325 kubelet[2362]: E1216 13:09:08.877286 2362 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:09:08.882511 kubelet[2362]: E1216 13:09:08.882477 2362 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:09:09.355780 kubelet[2362]: I1216 13:09:09.355737 2362 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 13:09:09.356246 kubelet[2362]: E1216 13:09:09.355812 2362 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 13:09:09.364100 kubelet[2362]: E1216 13:09:09.364069 2362 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:09:09.395554 kubelet[2362]: E1216 13:09:09.395451 2362 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1881b418a3ed1142 default 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 13:09:06.82838253 +0000 UTC m=+1.211532659,LastTimestamp:2025-12-16 13:09:06.82838253 +0000 UTC m=+1.211532659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 13:09:09.464224 kubelet[2362]: E1216 13:09:09.464173 2362 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:09:09.564324 kubelet[2362]: E1216 13:09:09.564286 2362 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:09:09.664914 kubelet[2362]: E1216 13:09:09.664864 2362 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:09:09.765768 kubelet[2362]: E1216 13:09:09.765686 2362 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:09:09.866682 kubelet[2362]: E1216 13:09:09.866640 2362 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:09:09.883433 kubelet[2362]: E1216 13:09:09.883379 2362 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:09:09.883568 kubelet[2362]: E1216 13:09:09.883480 2362 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:09:09.883801 kubelet[2362]: E1216 13:09:09.883767 2362 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:09:09.967151 kubelet[2362]: E1216 13:09:09.966954 2362 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:09:10.134989 kubelet[2362]: I1216 13:09:10.134914 2362 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:10.140493 kubelet[2362]: E1216 13:09:10.140445 2362 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:10.140493 kubelet[2362]: I1216 13:09:10.140481 2362 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 13:09:10.142305 kubelet[2362]: E1216 13:09:10.142256 2362 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 13:09:10.142305 kubelet[2362]: I1216 13:09:10.142282 2362 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:09:10.144335 kubelet[2362]: E1216 13:09:10.144282 2362 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 13:09:10.821137 kubelet[2362]: I1216 13:09:10.821092 2362 apiserver.go:52] "Watching apiserver" Dec 16 13:09:10.834457 kubelet[2362]: I1216 13:09:10.834418 2362 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:09:11.545743 systemd[1]: Reload requested from client PID 2651 ('systemctl') (unit session-7.scope)... Dec 16 13:09:11.545760 systemd[1]: Reloading... 
Dec 16 13:09:11.630093 zram_generator::config[2697]: No configuration found. Dec 16 13:09:11.870428 systemd[1]: Reloading finished in 324 ms. Dec 16 13:09:11.904899 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:11.927707 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:09:11.928051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:09:11.928127 systemd[1]: kubelet.service: Consumed 1.237s CPU time, 125.7M memory peak. Dec 16 13:09:11.930161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:12.163201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:09:12.179522 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:09:12.219570 kubelet[2739]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:09:12.219570 kubelet[2739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 13:09:12.219955 kubelet[2739]: I1216 13:09:12.219592 2739 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:09:12.228345 kubelet[2739]: I1216 13:09:12.228300 2739 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:09:12.228345 kubelet[2739]: I1216 13:09:12.228329 2739 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:09:12.228463 kubelet[2739]: I1216 13:09:12.228368 2739 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:09:12.228463 kubelet[2739]: I1216 13:09:12.228382 2739 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:09:12.228593 kubelet[2739]: I1216 13:09:12.228570 2739 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:09:12.229768 kubelet[2739]: I1216 13:09:12.229735 2739 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:09:12.231781 kubelet[2739]: I1216 13:09:12.231743 2739 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:09:12.235128 kubelet[2739]: I1216 13:09:12.235105 2739 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:09:12.241136 kubelet[2739]: I1216 13:09:12.241067 2739 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:09:12.241363 kubelet[2739]: I1216 13:09:12.241336 2739 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:09:12.241532 kubelet[2739]: I1216 13:09:12.241356 2739 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:09:12.241627 kubelet[2739]: I1216 13:09:12.241535 2739 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:09:12.241627 
kubelet[2739]: I1216 13:09:12.241545 2739 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:09:12.241627 kubelet[2739]: I1216 13:09:12.241571 2739 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:09:12.242385 kubelet[2739]: I1216 13:09:12.242360 2739 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:09:12.242553 kubelet[2739]: I1216 13:09:12.242538 2739 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:09:12.242593 kubelet[2739]: I1216 13:09:12.242556 2739 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:09:12.242642 kubelet[2739]: I1216 13:09:12.242631 2739 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:09:12.242683 kubelet[2739]: I1216 13:09:12.242667 2739 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:09:12.243804 kubelet[2739]: I1216 13:09:12.243781 2739 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:09:12.244424 kubelet[2739]: I1216 13:09:12.244400 2739 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:09:12.244424 kubelet[2739]: I1216 13:09:12.244428 2739 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:09:12.247160 kubelet[2739]: I1216 13:09:12.247113 2739 server.go:1262] "Started kubelet" Dec 16 13:09:12.250390 kubelet[2739]: I1216 13:09:12.250352 2739 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:09:12.250729 kubelet[2739]: I1216 13:09:12.250698 2739 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:09:12.250760 kubelet[2739]: I1216 13:09:12.250743 2739 server_v1.go:49] 
"podresources" method="list" useActivePods=true Dec 16 13:09:12.251018 kubelet[2739]: I1216 13:09:12.250987 2739 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:09:12.254374 kubelet[2739]: I1216 13:09:12.254346 2739 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:09:12.255716 kubelet[2739]: I1216 13:09:12.255686 2739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:09:12.257891 kubelet[2739]: I1216 13:09:12.257346 2739 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:09:12.265089 kubelet[2739]: I1216 13:09:12.263811 2739 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:09:12.265089 kubelet[2739]: I1216 13:09:12.263903 2739 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:09:12.265378 kubelet[2739]: I1216 13:09:12.265364 2739 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:09:12.267081 kubelet[2739]: I1216 13:09:12.265936 2739 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:09:12.267081 kubelet[2739]: E1216 13:09:12.266439 2739 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:09:12.269743 kubelet[2739]: I1216 13:09:12.269225 2739 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:09:12.269743 kubelet[2739]: I1216 13:09:12.269247 2739 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:09:12.280412 kubelet[2739]: I1216 13:09:12.280238 2739 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 16 13:09:12.281701 kubelet[2739]: I1216 13:09:12.281668 2739 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 13:09:12.281701 kubelet[2739]: I1216 13:09:12.281702 2739 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:09:12.281835 kubelet[2739]: I1216 13:09:12.281729 2739 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:09:12.281835 kubelet[2739]: E1216 13:09:12.281780 2739 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:09:12.313551 kubelet[2739]: I1216 13:09:12.313519 2739 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:09:12.313551 kubelet[2739]: I1216 13:09:12.313536 2739 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:09:12.313551 kubelet[2739]: I1216 13:09:12.313556 2739 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:09:12.313740 kubelet[2739]: I1216 13:09:12.313687 2739 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:09:12.313740 kubelet[2739]: I1216 13:09:12.313699 2739 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:09:12.313740 kubelet[2739]: I1216 13:09:12.313733 2739 policy_none.go:49] "None policy: Start" Dec 16 13:09:12.313740 kubelet[2739]: I1216 13:09:12.313742 2739 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:09:12.313907 kubelet[2739]: I1216 13:09:12.313754 2739 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:09:12.313907 kubelet[2739]: I1216 13:09:12.313835 2739 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 13:09:12.313907 kubelet[2739]: I1216 13:09:12.313842 2739 policy_none.go:47] "Start" Dec 16 13:09:12.318109 kubelet[2739]: E1216 13:09:12.317955 2739 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:09:12.318204 kubelet[2739]: I1216 13:09:12.318177 2739 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:09:12.318238 kubelet[2739]: I1216 13:09:12.318203 2739 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:09:12.318449 kubelet[2739]: I1216 13:09:12.318431 2739 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:09:12.320144 kubelet[2739]: E1216 13:09:12.320109 2739 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:09:12.383195 kubelet[2739]: I1216 13:09:12.383143 2739 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:09:12.383195 kubelet[2739]: I1216 13:09:12.383193 2739 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 13:09:12.383382 kubelet[2739]: I1216 13:09:12.383236 2739 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:12.422339 kubelet[2739]: I1216 13:09:12.421520 2739 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:09:12.428368 kubelet[2739]: I1216 13:09:12.428348 2739 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 13:09:12.428442 kubelet[2739]: I1216 13:09:12.428406 2739 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 13:09:12.467281 kubelet[2739]: I1216 13:09:12.467200 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b773c0a6054306257f0b21eea0de4dc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b773c0a6054306257f0b21eea0de4dc\") " 
pod="kube-system/kube-apiserver-localhost" Dec 16 13:09:12.467281 kubelet[2739]: I1216 13:09:12.467258 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b773c0a6054306257f0b21eea0de4dc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b773c0a6054306257f0b21eea0de4dc\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:09:12.467478 kubelet[2739]: I1216 13:09:12.467301 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b773c0a6054306257f0b21eea0de4dc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9b773c0a6054306257f0b21eea0de4dc\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:09:12.467478 kubelet[2739]: I1216 13:09:12.467339 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:12.467478 kubelet[2739]: I1216 13:09:12.467364 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:12.467478 kubelet[2739]: I1216 13:09:12.467408 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:12.467478 kubelet[2739]: I1216 13:09:12.467430 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:12.467615 kubelet[2739]: I1216 13:09:12.467447 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:09:12.467615 kubelet[2739]: I1216 13:09:12.467468 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 16 13:09:12.548521 sudo[2777]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 13:09:12.548868 sudo[2777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 13:09:12.850699 sudo[2777]: pam_unix(sudo:session): session closed for user root Dec 16 13:09:13.243652 kubelet[2739]: I1216 13:09:13.243568 2739 apiserver.go:52] "Watching apiserver" Dec 16 13:09:13.264631 kubelet[2739]: I1216 13:09:13.264575 2739 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:09:13.300006 kubelet[2739]: I1216 13:09:13.299960 2739 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Dec 16 13:09:13.300538 kubelet[2739]: I1216 13:09:13.300504 2739 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:09:13.383502 kubelet[2739]: E1216 13:09:13.383449 2739 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 13:09:13.429714 kubelet[2739]: E1216 13:09:13.429509 2739 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 13:09:13.604017 kubelet[2739]: I1216 13:09:13.603445 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.603402461 podStartE2EDuration="1.603402461s" podCreationTimestamp="2025-12-16 13:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:09:13.43079221 +0000 UTC m=+1.247152880" watchObservedRunningTime="2025-12-16 13:09:13.603402461 +0000 UTC m=+1.419763099" Dec 16 13:09:13.612341 kubelet[2739]: I1216 13:09:13.612278 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.612257565 podStartE2EDuration="1.612257565s" podCreationTimestamp="2025-12-16 13:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:09:13.612077544 +0000 UTC m=+1.428438182" watchObservedRunningTime="2025-12-16 13:09:13.612257565 +0000 UTC m=+1.428618193" Dec 16 13:09:13.612472 kubelet[2739]: I1216 13:09:13.612395 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.61238989 podStartE2EDuration="1.61238989s" 
podCreationTimestamp="2025-12-16 13:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:09:13.604851229 +0000 UTC m=+1.421211867" watchObservedRunningTime="2025-12-16 13:09:13.61238989 +0000 UTC m=+1.428750528" Dec 16 13:09:14.405082 sudo[1783]: pam_unix(sudo:session): session closed for user root Dec 16 13:09:14.407362 sshd[1782]: Connection closed by 10.0.0.1 port 42520 Dec 16 13:09:14.407876 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:14.413924 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:42520.service: Deactivated successfully. Dec 16 13:09:14.416477 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:09:14.416724 systemd[1]: session-7.scope: Consumed 5.544s CPU time, 268.4M memory peak. Dec 16 13:09:14.418116 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:09:14.419597 systemd-logind[1558]: Removed session 7. Dec 16 13:09:18.893345 kubelet[2739]: I1216 13:09:18.893300 2739 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:09:18.893956 kubelet[2739]: I1216 13:09:18.893880 2739 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:09:18.893994 containerd[1576]: time="2025-12-16T13:09:18.893719922Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:09:19.662743 systemd[1]: Created slice kubepods-besteffort-pod8c5fcd76_6f47_4fe5_8b51_396b28423ca1.slice - libcontainer container kubepods-besteffort-pod8c5fcd76_6f47_4fe5_8b51_396b28423ca1.slice. Dec 16 13:09:19.677156 systemd[1]: Created slice kubepods-burstable-poddf154b5a_bfaa_4f78_a6fa_93fcd8fba501.slice - libcontainer container kubepods-burstable-poddf154b5a_bfaa_4f78_a6fa_93fcd8fba501.slice. 
Dec 16 13:09:19.713683 kubelet[2739]: I1216 13:09:19.713642 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cilium-cgroup\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.713683 kubelet[2739]: I1216 13:09:19.713677 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cni-path\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.713683 kubelet[2739]: I1216 13:09:19.713694 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-etc-cni-netd\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.713928 kubelet[2739]: I1216 13:09:19.713731 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6qjg\" (UniqueName: \"kubernetes.io/projected/8c5fcd76-6f47-4fe5-8b51-396b28423ca1-kube-api-access-s6qjg\") pod \"kube-proxy-jxd2h\" (UID: \"8c5fcd76-6f47-4fe5-8b51-396b28423ca1\") " pod="kube-system/kube-proxy-jxd2h" Dec 16 13:09:19.713928 kubelet[2739]: I1216 13:09:19.713756 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-lib-modules\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.713928 kubelet[2739]: I1216 13:09:19.713770 2739 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-clustermesh-secrets\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.713928 kubelet[2739]: I1216 13:09:19.713824 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-hubble-tls\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.713928 kubelet[2739]: I1216 13:09:19.713863 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-644hw\" (UniqueName: \"kubernetes.io/projected/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-kube-api-access-644hw\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.714043 kubelet[2739]: I1216 13:09:19.713880 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c5fcd76-6f47-4fe5-8b51-396b28423ca1-xtables-lock\") pod \"kube-proxy-jxd2h\" (UID: \"8c5fcd76-6f47-4fe5-8b51-396b28423ca1\") " pod="kube-system/kube-proxy-jxd2h" Dec 16 13:09:19.714043 kubelet[2739]: I1216 13:09:19.713895 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-xtables-lock\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.714043 kubelet[2739]: I1216 13:09:19.713934 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cilium-config-path\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.714043 kubelet[2739]: I1216 13:09:19.713950 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-host-proc-sys-net\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.714043 kubelet[2739]: I1216 13:09:19.713964 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-host-proc-sys-kernel\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.714200 kubelet[2739]: I1216 13:09:19.713978 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c5fcd76-6f47-4fe5-8b51-396b28423ca1-kube-proxy\") pod \"kube-proxy-jxd2h\" (UID: \"8c5fcd76-6f47-4fe5-8b51-396b28423ca1\") " pod="kube-system/kube-proxy-jxd2h" Dec 16 13:09:19.714200 kubelet[2739]: I1216 13:09:19.714024 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c5fcd76-6f47-4fe5-8b51-396b28423ca1-lib-modules\") pod \"kube-proxy-jxd2h\" (UID: \"8c5fcd76-6f47-4fe5-8b51-396b28423ca1\") " pod="kube-system/kube-proxy-jxd2h" Dec 16 13:09:19.714200 kubelet[2739]: I1216 13:09:19.714044 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cilium-run\") pod \"cilium-g4zmt\" (UID: 
\"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.714200 kubelet[2739]: I1216 13:09:19.714086 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-bpf-maps\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.714200 kubelet[2739]: I1216 13:09:19.714102 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-hostproc\") pod \"cilium-g4zmt\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " pod="kube-system/cilium-g4zmt" Dec 16 13:09:19.827193 kubelet[2739]: E1216 13:09:19.827134 2739 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 13:09:19.827193 kubelet[2739]: E1216 13:09:19.827173 2739 projected.go:196] Error preparing data for projected volume kube-api-access-644hw for pod kube-system/cilium-g4zmt: configmap "kube-root-ca.crt" not found Dec 16 13:09:19.827360 kubelet[2739]: E1216 13:09:19.827244 2739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-kube-api-access-644hw podName:df154b5a-bfaa-4f78-a6fa-93fcd8fba501 nodeName:}" failed. No retries permitted until 2025-12-16 13:09:20.32722148 +0000 UTC m=+8.143582118 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-644hw" (UniqueName: "kubernetes.io/projected/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-kube-api-access-644hw") pod "cilium-g4zmt" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501") : configmap "kube-root-ca.crt" not found Dec 16 13:09:19.827589 kubelet[2739]: E1216 13:09:19.827487 2739 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 13:09:19.827589 kubelet[2739]: E1216 13:09:19.827519 2739 projected.go:196] Error preparing data for projected volume kube-api-access-s6qjg for pod kube-system/kube-proxy-jxd2h: configmap "kube-root-ca.crt" not found Dec 16 13:09:19.827589 kubelet[2739]: E1216 13:09:19.827588 2739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8c5fcd76-6f47-4fe5-8b51-396b28423ca1-kube-api-access-s6qjg podName:8c5fcd76-6f47-4fe5-8b51-396b28423ca1 nodeName:}" failed. No retries permitted until 2025-12-16 13:09:20.327565629 +0000 UTC m=+8.143926267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s6qjg" (UniqueName: "kubernetes.io/projected/8c5fcd76-6f47-4fe5-8b51-396b28423ca1-kube-api-access-s6qjg") pod "kube-proxy-jxd2h" (UID: "8c5fcd76-6f47-4fe5-8b51-396b28423ca1") : configmap "kube-root-ca.crt" not found Dec 16 13:09:20.069471 systemd[1]: Created slice kubepods-besteffort-podfed41712_6b3e_4fa8_80b1_3b835a688b24.slice - libcontainer container kubepods-besteffort-podfed41712_6b3e_4fa8_80b1_3b835a688b24.slice. 
Dec 16 13:09:20.116732 kubelet[2739]: I1216 13:09:20.116675 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fed41712-6b3e-4fa8-80b1-3b835a688b24-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-kwxvx\" (UID: \"fed41712-6b3e-4fa8-80b1-3b835a688b24\") " pod="kube-system/cilium-operator-6f9c7c5859-kwxvx" Dec 16 13:09:20.117306 kubelet[2739]: I1216 13:09:20.116775 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6sb2\" (UniqueName: \"kubernetes.io/projected/fed41712-6b3e-4fa8-80b1-3b835a688b24-kube-api-access-b6sb2\") pod \"cilium-operator-6f9c7c5859-kwxvx\" (UID: \"fed41712-6b3e-4fa8-80b1-3b835a688b24\") " pod="kube-system/cilium-operator-6f9c7c5859-kwxvx" Dec 16 13:09:20.381176 containerd[1576]: time="2025-12-16T13:09:20.381005104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-kwxvx,Uid:fed41712-6b3e-4fa8-80b1-3b835a688b24,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:20.403884 containerd[1576]: time="2025-12-16T13:09:20.403825636Z" level=info msg="connecting to shim 9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5" address="unix:///run/containerd/s/64711452f44fc719bf3dad78c7587753d2b61f256a7db72d5c3a68ab227c04fb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:20.437314 systemd[1]: Started cri-containerd-9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5.scope - libcontainer container 9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5. 
Dec 16 13:09:20.486929 containerd[1576]: time="2025-12-16T13:09:20.486879099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-kwxvx,Uid:fed41712-6b3e-4fa8-80b1-3b835a688b24,Namespace:kube-system,Attempt:0,} returns sandbox id \"9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5\"" Dec 16 13:09:20.493637 containerd[1576]: time="2025-12-16T13:09:20.493605324Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 13:09:20.577530 containerd[1576]: time="2025-12-16T13:09:20.577479991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxd2h,Uid:8c5fcd76-6f47-4fe5-8b51-396b28423ca1,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:20.583910 containerd[1576]: time="2025-12-16T13:09:20.583869932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g4zmt,Uid:df154b5a-bfaa-4f78-a6fa-93fcd8fba501,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:20.597358 containerd[1576]: time="2025-12-16T13:09:20.597273279Z" level=info msg="connecting to shim 801d47714f00d47c56d020d9671e8264c80b486851173738ff99ecca97969ba2" address="unix:///run/containerd/s/238f65489da6e721e68cce3a67655c003ef798db3ebc599e011ebd64e5188772" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:20.607262 containerd[1576]: time="2025-12-16T13:09:20.607210605Z" level=info msg="connecting to shim 0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f" address="unix:///run/containerd/s/6194662de1e0db70cb8288af7c403f158e3747a6323f4349b0cc0cf88c1a6cac" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:20.625318 systemd[1]: Started cri-containerd-801d47714f00d47c56d020d9671e8264c80b486851173738ff99ecca97969ba2.scope - libcontainer container 801d47714f00d47c56d020d9671e8264c80b486851173738ff99ecca97969ba2. 
Dec 16 13:09:20.629631 systemd[1]: Started cri-containerd-0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f.scope - libcontainer container 0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f. Dec 16 13:09:20.672436 containerd[1576]: time="2025-12-16T13:09:20.672389637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxd2h,Uid:8c5fcd76-6f47-4fe5-8b51-396b28423ca1,Namespace:kube-system,Attempt:0,} returns sandbox id \"801d47714f00d47c56d020d9671e8264c80b486851173738ff99ecca97969ba2\"" Dec 16 13:09:20.674650 containerd[1576]: time="2025-12-16T13:09:20.674616855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g4zmt,Uid:df154b5a-bfaa-4f78-a6fa-93fcd8fba501,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\"" Dec 16 13:09:20.681812 containerd[1576]: time="2025-12-16T13:09:20.681773266Z" level=info msg="CreateContainer within sandbox \"801d47714f00d47c56d020d9671e8264c80b486851173738ff99ecca97969ba2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:09:20.695440 containerd[1576]: time="2025-12-16T13:09:20.695379775Z" level=info msg="Container 47ab2698934c0d592a78802e893708347d9c05e7f56ee412d0bcc3458639b3cb: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:20.703012 containerd[1576]: time="2025-12-16T13:09:20.702928447Z" level=info msg="CreateContainer within sandbox \"801d47714f00d47c56d020d9671e8264c80b486851173738ff99ecca97969ba2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"47ab2698934c0d592a78802e893708347d9c05e7f56ee412d0bcc3458639b3cb\"" Dec 16 13:09:20.703538 containerd[1576]: time="2025-12-16T13:09:20.703510919Z" level=info msg="StartContainer for \"47ab2698934c0d592a78802e893708347d9c05e7f56ee412d0bcc3458639b3cb\"" Dec 16 13:09:20.705231 containerd[1576]: time="2025-12-16T13:09:20.705198992Z" level=info msg="connecting to shim 
47ab2698934c0d592a78802e893708347d9c05e7f56ee412d0bcc3458639b3cb" address="unix:///run/containerd/s/238f65489da6e721e68cce3a67655c003ef798db3ebc599e011ebd64e5188772" protocol=ttrpc version=3 Dec 16 13:09:20.726211 systemd[1]: Started cri-containerd-47ab2698934c0d592a78802e893708347d9c05e7f56ee412d0bcc3458639b3cb.scope - libcontainer container 47ab2698934c0d592a78802e893708347d9c05e7f56ee412d0bcc3458639b3cb. Dec 16 13:09:20.852702 containerd[1576]: time="2025-12-16T13:09:20.852650123Z" level=info msg="StartContainer for \"47ab2698934c0d592a78802e893708347d9c05e7f56ee412d0bcc3458639b3cb\" returns successfully" Dec 16 13:09:21.338192 kubelet[2739]: I1216 13:09:21.338116 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jxd2h" podStartSLOduration=2.3375007500000002 podStartE2EDuration="2.33750075s" podCreationTimestamp="2025-12-16 13:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:09:21.337245054 +0000 UTC m=+9.153605692" watchObservedRunningTime="2025-12-16 13:09:21.33750075 +0000 UTC m=+9.153861389" Dec 16 13:09:21.945163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054720591.mount: Deactivated successfully. 
Dec 16 13:09:22.259931 containerd[1576]: time="2025-12-16T13:09:22.259805696Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:22.260799 containerd[1576]: time="2025-12-16T13:09:22.260729107Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Dec 16 13:09:22.262159 containerd[1576]: time="2025-12-16T13:09:22.262125964Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:22.263251 containerd[1576]: time="2025-12-16T13:09:22.263204312Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.769567156s" Dec 16 13:09:22.263292 containerd[1576]: time="2025-12-16T13:09:22.263252198Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 16 13:09:22.266728 containerd[1576]: time="2025-12-16T13:09:22.266688707Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 13:09:22.271641 containerd[1576]: time="2025-12-16T13:09:22.271599246Z" level=info msg="CreateContainer within sandbox 
\"9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 13:09:22.279438 containerd[1576]: time="2025-12-16T13:09:22.279388589Z" level=info msg="Container b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:22.287511 containerd[1576]: time="2025-12-16T13:09:22.287465123Z" level=info msg="CreateContainer within sandbox \"9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\"" Dec 16 13:09:22.288558 containerd[1576]: time="2025-12-16T13:09:22.288493990Z" level=info msg="StartContainer for \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\"" Dec 16 13:09:22.289591 containerd[1576]: time="2025-12-16T13:09:22.289536777Z" level=info msg="connecting to shim b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f" address="unix:///run/containerd/s/64711452f44fc719bf3dad78c7587753d2b61f256a7db72d5c3a68ab227c04fb" protocol=ttrpc version=3 Dec 16 13:09:22.318224 systemd[1]: Started cri-containerd-b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f.scope - libcontainer container b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f. Dec 16 13:09:22.362982 containerd[1576]: time="2025-12-16T13:09:22.362920577Z" level=info msg="StartContainer for \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\" returns successfully" Dec 16 13:09:23.316939 update_engine[1559]: I20251216 13:09:23.316867 1559 update_attempter.cc:509] Updating boot flags... 
Dec 16 13:09:25.529184 kubelet[2739]: I1216 13:09:25.529101 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-kwxvx" podStartSLOduration=3.751099601 podStartE2EDuration="5.529008311s" podCreationTimestamp="2025-12-16 13:09:20 +0000 UTC" firstStartedPulling="2025-12-16 13:09:20.488571532 +0000 UTC m=+8.304932170" lastFinishedPulling="2025-12-16 13:09:22.266480242 +0000 UTC m=+10.082840880" observedRunningTime="2025-12-16 13:09:23.348500043 +0000 UTC m=+11.164860702" watchObservedRunningTime="2025-12-16 13:09:25.529008311 +0000 UTC m=+13.345368949" Dec 16 13:09:30.304821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount599335054.mount: Deactivated successfully. Dec 16 13:09:32.772947 containerd[1576]: time="2025-12-16T13:09:32.772876473Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:32.774108 containerd[1576]: time="2025-12-16T13:09:32.774078493Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Dec 16 13:09:32.779829 containerd[1576]: time="2025-12-16T13:09:32.779790719Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:32.781467 containerd[1576]: time="2025-12-16T13:09:32.781420324Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.514699116s" Dec 
16 13:09:32.781467 containerd[1576]: time="2025-12-16T13:09:32.781458235Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 16 13:09:32.792893 containerd[1576]: time="2025-12-16T13:09:32.792837681Z" level=info msg="CreateContainer within sandbox \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:09:32.800697 containerd[1576]: time="2025-12-16T13:09:32.800651934Z" level=info msg="Container 1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:32.805683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021747180.mount: Deactivated successfully. Dec 16 13:09:32.809367 containerd[1576]: time="2025-12-16T13:09:32.809305556Z" level=info msg="CreateContainer within sandbox \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\"" Dec 16 13:09:32.809954 containerd[1576]: time="2025-12-16T13:09:32.809926207Z" level=info msg="StartContainer for \"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\"" Dec 16 13:09:32.812185 containerd[1576]: time="2025-12-16T13:09:32.812155601Z" level=info msg="connecting to shim 1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6" address="unix:///run/containerd/s/6194662de1e0db70cb8288af7c403f158e3747a6323f4349b0cc0cf88c1a6cac" protocol=ttrpc version=3 Dec 16 13:09:32.838236 systemd[1]: Started cri-containerd-1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6.scope - libcontainer container 1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6. 
Dec 16 13:09:32.878139 containerd[1576]: time="2025-12-16T13:09:32.878030106Z" level=info msg="StartContainer for \"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\" returns successfully" Dec 16 13:09:32.882984 systemd[1]: cri-containerd-1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6.scope: Deactivated successfully. Dec 16 13:09:32.884574 containerd[1576]: time="2025-12-16T13:09:32.884532999Z" level=info msg="received container exit event container_id:\"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\" id:\"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\" pid:3239 exited_at:{seconds:1765890572 nanos:883954977}" Dec 16 13:09:32.909439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6-rootfs.mount: Deactivated successfully. Dec 16 13:09:34.364081 containerd[1576]: time="2025-12-16T13:09:34.364010411Z" level=info msg="CreateContainer within sandbox \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:09:34.375942 containerd[1576]: time="2025-12-16T13:09:34.375812500Z" level=info msg="Container 24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:34.383243 containerd[1576]: time="2025-12-16T13:09:34.383198371Z" level=info msg="CreateContainer within sandbox \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\"" Dec 16 13:09:34.383760 containerd[1576]: time="2025-12-16T13:09:34.383719083Z" level=info msg="StartContainer for \"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\"" Dec 16 13:09:34.384849 containerd[1576]: time="2025-12-16T13:09:34.384800251Z" level=info 
msg="connecting to shim 24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91" address="unix:///run/containerd/s/6194662de1e0db70cb8288af7c403f158e3747a6323f4349b0cc0cf88c1a6cac" protocol=ttrpc version=3 Dec 16 13:09:34.421198 systemd[1]: Started cri-containerd-24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91.scope - libcontainer container 24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91. Dec 16 13:09:34.456576 containerd[1576]: time="2025-12-16T13:09:34.456532600Z" level=info msg="StartContainer for \"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\" returns successfully" Dec 16 13:09:34.468015 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:09:34.468270 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:09:34.468503 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:09:34.470039 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:09:34.472011 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 13:09:34.475157 systemd[1]: cri-containerd-24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91.scope: Deactivated successfully. Dec 16 13:09:34.475816 containerd[1576]: time="2025-12-16T13:09:34.475776477Z" level=info msg="received container exit event container_id:\"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\" id:\"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\" pid:3284 exited_at:{seconds:1765890574 nanos:475528452}" Dec 16 13:09:34.494581 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:09:35.373611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91-rootfs.mount: Deactivated successfully. 
Dec 16 13:09:35.375315 containerd[1576]: time="2025-12-16T13:09:35.375277374Z" level=info msg="CreateContainer within sandbox \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 13:09:35.388333 containerd[1576]: time="2025-12-16T13:09:35.388271155Z" level=info msg="Container f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:35.393530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126720327.mount: Deactivated successfully. Dec 16 13:09:35.399555 containerd[1576]: time="2025-12-16T13:09:35.399512166Z" level=info msg="CreateContainer within sandbox \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\"" Dec 16 13:09:35.399984 containerd[1576]: time="2025-12-16T13:09:35.399954973Z" level=info msg="StartContainer for \"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\"" Dec 16 13:09:35.401295 containerd[1576]: time="2025-12-16T13:09:35.401255865Z" level=info msg="connecting to shim f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d" address="unix:///run/containerd/s/6194662de1e0db70cb8288af7c403f158e3747a6323f4349b0cc0cf88c1a6cac" protocol=ttrpc version=3 Dec 16 13:09:35.426238 systemd[1]: Started cri-containerd-f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d.scope - libcontainer container f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d. 
Dec 16 13:09:35.521247 containerd[1576]: time="2025-12-16T13:09:35.521187733Z" level=info msg="StartContainer for \"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\" returns successfully"
Dec 16 13:09:35.522311 systemd[1]: cri-containerd-f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d.scope: Deactivated successfully.
Dec 16 13:09:35.523735 containerd[1576]: time="2025-12-16T13:09:35.523689291Z" level=info msg="received container exit event container_id:\"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\" id:\"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\" pid:3331 exited_at:{seconds:1765890575 nanos:523450777}"
Dec 16 13:09:35.550351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d-rootfs.mount: Deactivated successfully.
Dec 16 13:09:36.373302 containerd[1576]: time="2025-12-16T13:09:36.373250892Z" level=info msg="CreateContainer within sandbox \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 13:09:36.386950 containerd[1576]: time="2025-12-16T13:09:36.386888079Z" level=info msg="Container 4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:09:36.394435 containerd[1576]: time="2025-12-16T13:09:36.394387837Z" level=info msg="CreateContainer within sandbox \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\""
Dec 16 13:09:36.395078 containerd[1576]: time="2025-12-16T13:09:36.395014800Z" level=info msg="StartContainer for \"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\""
Dec 16 13:09:36.396128 containerd[1576]: time="2025-12-16T13:09:36.396083785Z" level=info msg="connecting to shim 4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf" address="unix:///run/containerd/s/6194662de1e0db70cb8288af7c403f158e3747a6323f4349b0cc0cf88c1a6cac" protocol=ttrpc version=3
Dec 16 13:09:36.418197 systemd[1]: Started cri-containerd-4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf.scope - libcontainer container 4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf.
Dec 16 13:09:36.448049 systemd[1]: cri-containerd-4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf.scope: Deactivated successfully.
Dec 16 13:09:36.449574 containerd[1576]: time="2025-12-16T13:09:36.449526844Z" level=info msg="received container exit event container_id:\"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\" id:\"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\" pid:3370 exited_at:{seconds:1765890576 nanos:448586936}"
Dec 16 13:09:36.458662 containerd[1576]: time="2025-12-16T13:09:36.458625130Z" level=info msg="StartContainer for \"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\" returns successfully"
Dec 16 13:09:36.474927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf-rootfs.mount: Deactivated successfully.
Dec 16 13:09:37.378775 containerd[1576]: time="2025-12-16T13:09:37.378721594Z" level=info msg="CreateContainer within sandbox \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 13:09:37.398659 containerd[1576]: time="2025-12-16T13:09:37.398602496Z" level=info msg="Container 6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:09:37.402913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount866986976.mount: Deactivated successfully.
Dec 16 13:09:37.407929 containerd[1576]: time="2025-12-16T13:09:37.407881287Z" level=info msg="CreateContainer within sandbox \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\""
Dec 16 13:09:37.409078 containerd[1576]: time="2025-12-16T13:09:37.408879629Z" level=info msg="StartContainer for \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\""
Dec 16 13:09:37.410186 containerd[1576]: time="2025-12-16T13:09:37.410138056Z" level=info msg="connecting to shim 6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6" address="unix:///run/containerd/s/6194662de1e0db70cb8288af7c403f158e3747a6323f4349b0cc0cf88c1a6cac" protocol=ttrpc version=3
Dec 16 13:09:37.438255 systemd[1]: Started cri-containerd-6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6.scope - libcontainer container 6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6.
Dec 16 13:09:37.493644 containerd[1576]: time="2025-12-16T13:09:37.493579466Z" level=info msg="StartContainer for \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\" returns successfully"
Dec 16 13:09:37.663525 kubelet[2739]: I1216 13:09:37.663474 2739 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Dec 16 13:09:37.707236 systemd[1]: Created slice kubepods-burstable-podc9a94f60_fe71_45bc_a440_de1b30c987b9.slice - libcontainer container kubepods-burstable-podc9a94f60_fe71_45bc_a440_de1b30c987b9.slice.
Dec 16 13:09:37.718130 systemd[1]: Created slice kubepods-burstable-pod90dc7b7f_e396_4b9c_ad0f_57bf8f73e3a9.slice - libcontainer container kubepods-burstable-pod90dc7b7f_e396_4b9c_ad0f_57bf8f73e3a9.slice.
Dec 16 13:09:37.759918 kubelet[2739]: I1216 13:09:37.759868 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtv5x\" (UniqueName: \"kubernetes.io/projected/90dc7b7f-e396-4b9c-ad0f-57bf8f73e3a9-kube-api-access-wtv5x\") pod \"coredns-66bc5c9577-tdkdh\" (UID: \"90dc7b7f-e396-4b9c-ad0f-57bf8f73e3a9\") " pod="kube-system/coredns-66bc5c9577-tdkdh"
Dec 16 13:09:37.759918 kubelet[2739]: I1216 13:09:37.759928 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fztrh\" (UniqueName: \"kubernetes.io/projected/c9a94f60-fe71-45bc-a440-de1b30c987b9-kube-api-access-fztrh\") pod \"coredns-66bc5c9577-tkj8t\" (UID: \"c9a94f60-fe71-45bc-a440-de1b30c987b9\") " pod="kube-system/coredns-66bc5c9577-tkj8t"
Dec 16 13:09:37.760134 kubelet[2739]: I1216 13:09:37.760016 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90dc7b7f-e396-4b9c-ad0f-57bf8f73e3a9-config-volume\") pod \"coredns-66bc5c9577-tdkdh\" (UID: \"90dc7b7f-e396-4b9c-ad0f-57bf8f73e3a9\") " pod="kube-system/coredns-66bc5c9577-tdkdh"
Dec 16 13:09:37.760134 kubelet[2739]: I1216 13:09:37.760050 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9a94f60-fe71-45bc-a440-de1b30c987b9-config-volume\") pod \"coredns-66bc5c9577-tkj8t\" (UID: \"c9a94f60-fe71-45bc-a440-de1b30c987b9\") " pod="kube-system/coredns-66bc5c9577-tkj8t"
Dec 16 13:09:38.240417 containerd[1576]: time="2025-12-16T13:09:38.240330796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tkj8t,Uid:c9a94f60-fe71-45bc-a440-de1b30c987b9,Namespace:kube-system,Attempt:0,}"
Dec 16 13:09:38.243374 containerd[1576]: time="2025-12-16T13:09:38.243322404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tdkdh,Uid:90dc7b7f-e396-4b9c-ad0f-57bf8f73e3a9,Namespace:kube-system,Attempt:0,}"
Dec 16 13:09:38.395082 kubelet[2739]: I1216 13:09:38.394986 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g4zmt" podStartSLOduration=7.287826776 podStartE2EDuration="19.394964097s" podCreationTimestamp="2025-12-16 13:09:19 +0000 UTC" firstStartedPulling="2025-12-16 13:09:20.675758868 +0000 UTC m=+8.492119506" lastFinishedPulling="2025-12-16 13:09:32.782896189 +0000 UTC m=+20.599256827" observedRunningTime="2025-12-16 13:09:38.394554648 +0000 UTC m=+26.210915286" watchObservedRunningTime="2025-12-16 13:09:38.394964097 +0000 UTC m=+26.211324735"
Dec 16 13:09:39.856588 systemd-networkd[1461]: cilium_host: Link UP
Dec 16 13:09:39.856755 systemd-networkd[1461]: cilium_net: Link UP
Dec 16 13:09:39.856937 systemd-networkd[1461]: cilium_net: Gained carrier
Dec 16 13:09:39.857133 systemd-networkd[1461]: cilium_host: Gained carrier
Dec 16 13:09:39.974669 systemd-networkd[1461]: cilium_vxlan: Link UP
Dec 16 13:09:39.974682 systemd-networkd[1461]: cilium_vxlan: Gained carrier
Dec 16 13:09:40.202103 kernel: NET: Registered PF_ALG protocol family
Dec 16 13:09:40.339291 systemd-networkd[1461]: cilium_host: Gained IPv6LL
Dec 16 13:09:40.658677 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:53122.service - OpenSSH per-connection server daemon (10.0.0.1:53122).
Dec 16 13:09:40.718651 sshd[3751]: Accepted publickey for core from 10.0.0.1 port 53122 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:09:40.720862 sshd-session[3751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:40.726330 systemd-logind[1558]: New session 8 of user core.
Dec 16 13:09:40.732212 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 13:09:40.897827 sshd[3781]: Connection closed by 10.0.0.1 port 53122
Dec 16 13:09:40.899358 systemd-networkd[1461]: cilium_net: Gained IPv6LL
Dec 16 13:09:40.900187 sshd-session[3751]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:40.905808 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit.
Dec 16 13:09:40.906256 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:53122.service: Deactivated successfully.
Dec 16 13:09:40.908664 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 13:09:40.910757 systemd-logind[1558]: Removed session 8.
Dec 16 13:09:40.972342 systemd-networkd[1461]: lxc_health: Link UP
Dec 16 13:09:40.972710 systemd-networkd[1461]: lxc_health: Gained carrier
Dec 16 13:09:41.091290 systemd-networkd[1461]: cilium_vxlan: Gained IPv6LL
Dec 16 13:09:41.304104 kernel: eth0: renamed from tmp516e5
Dec 16 13:09:41.304019 systemd-networkd[1461]: lxc2d1807691a55: Link UP
Dec 16 13:09:41.305012 systemd-networkd[1461]: lxc2d1807691a55: Gained carrier
Dec 16 13:09:41.326952 systemd-networkd[1461]: lxcafe63950c870: Link UP
Dec 16 13:09:41.329093 kernel: eth0: renamed from tmp7143e
Dec 16 13:09:41.333133 systemd-networkd[1461]: lxcafe63950c870: Gained carrier
Dec 16 13:09:42.435283 systemd-networkd[1461]: lxc_health: Gained IPv6LL
Dec 16 13:09:42.627575 systemd-networkd[1461]: lxcafe63950c870: Gained IPv6LL
Dec 16 13:09:42.691276 systemd-networkd[1461]: lxc2d1807691a55: Gained IPv6LL
Dec 16 13:09:45.087427 containerd[1576]: time="2025-12-16T13:09:45.087353177Z" level=info msg="connecting to shim 7143e01e319cd273c0c96d9f27d30d2b9511d6f427cb4c05eeb35b415ff13c98" address="unix:///run/containerd/s/914301a936c18d90730ae994a74cf494fc7e841073190fa88d8a13b55a33bd89" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:09:45.088228 containerd[1576]: time="2025-12-16T13:09:45.088179560Z" level=info msg="connecting to shim 516e5b2cb8b86814fc80c3624179b3faff3a0a54ab9d80380af72511bff986cb" address="unix:///run/containerd/s/7e3be2b081f4e4be19d5efe1f2d3dadd2fa38ba3ac4dcc3a33e41cdf9252b147" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:09:45.124344 systemd[1]: Started cri-containerd-7143e01e319cd273c0c96d9f27d30d2b9511d6f427cb4c05eeb35b415ff13c98.scope - libcontainer container 7143e01e319cd273c0c96d9f27d30d2b9511d6f427cb4c05eeb35b415ff13c98.
Dec 16 13:09:45.129762 systemd[1]: Started cri-containerd-516e5b2cb8b86814fc80c3624179b3faff3a0a54ab9d80380af72511bff986cb.scope - libcontainer container 516e5b2cb8b86814fc80c3624179b3faff3a0a54ab9d80380af72511bff986cb.
Dec 16 13:09:45.140817 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 16 13:09:45.146273 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 16 13:09:45.180038 containerd[1576]: time="2025-12-16T13:09:45.179987577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tkj8t,Uid:c9a94f60-fe71-45bc-a440-de1b30c987b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7143e01e319cd273c0c96d9f27d30d2b9511d6f427cb4c05eeb35b415ff13c98\""
Dec 16 13:09:45.183629 containerd[1576]: time="2025-12-16T13:09:45.183460724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tdkdh,Uid:90dc7b7f-e396-4b9c-ad0f-57bf8f73e3a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"516e5b2cb8b86814fc80c3624179b3faff3a0a54ab9d80380af72511bff986cb\""
Dec 16 13:09:45.186507 containerd[1576]: time="2025-12-16T13:09:45.186225226Z" level=info msg="CreateContainer within sandbox \"7143e01e319cd273c0c96d9f27d30d2b9511d6f427cb4c05eeb35b415ff13c98\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 13:09:45.190642 containerd[1576]: time="2025-12-16T13:09:45.190601551Z" level=info msg="CreateContainer within sandbox \"516e5b2cb8b86814fc80c3624179b3faff3a0a54ab9d80380af72511bff986cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 13:09:45.201602 containerd[1576]: time="2025-12-16T13:09:45.201536108Z" level=info msg="Container 7b2046386dc4c9768ac822e105b197499dde97404dd76805ebfb5961fde42931: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:09:45.210666 containerd[1576]: time="2025-12-16T13:09:45.210618270Z" level=info msg="CreateContainer within sandbox \"7143e01e319cd273c0c96d9f27d30d2b9511d6f427cb4c05eeb35b415ff13c98\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7b2046386dc4c9768ac822e105b197499dde97404dd76805ebfb5961fde42931\""
Dec 16 13:09:45.212210 containerd[1576]: time="2025-12-16T13:09:45.212141775Z" level=info msg="StartContainer for \"7b2046386dc4c9768ac822e105b197499dde97404dd76805ebfb5961fde42931\""
Dec 16 13:09:45.213075 containerd[1576]: time="2025-12-16T13:09:45.213020723Z" level=info msg="connecting to shim 7b2046386dc4c9768ac822e105b197499dde97404dd76805ebfb5961fde42931" address="unix:///run/containerd/s/914301a936c18d90730ae994a74cf494fc7e841073190fa88d8a13b55a33bd89" protocol=ttrpc version=3
Dec 16 13:09:45.220465 containerd[1576]: time="2025-12-16T13:09:45.220418125Z" level=info msg="Container a3b821ac3af2595daad7af199d686d224b14735727e29a069bc58ccbbc959105: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:09:45.229247 containerd[1576]: time="2025-12-16T13:09:45.229188419Z" level=info msg="CreateContainer within sandbox \"516e5b2cb8b86814fc80c3624179b3faff3a0a54ab9d80380af72511bff986cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3b821ac3af2595daad7af199d686d224b14735727e29a069bc58ccbbc959105\""
Dec 16 13:09:45.230559 containerd[1576]: time="2025-12-16T13:09:45.230532954Z" level=info msg="StartContainer for \"a3b821ac3af2595daad7af199d686d224b14735727e29a069bc58ccbbc959105\""
Dec 16 13:09:45.231682 containerd[1576]: time="2025-12-16T13:09:45.231649310Z" level=info msg="connecting to shim a3b821ac3af2595daad7af199d686d224b14735727e29a069bc58ccbbc959105" address="unix:///run/containerd/s/7e3be2b081f4e4be19d5efe1f2d3dadd2fa38ba3ac4dcc3a33e41cdf9252b147" protocol=ttrpc version=3
Dec 16 13:09:45.236341 systemd[1]: Started cri-containerd-7b2046386dc4c9768ac822e105b197499dde97404dd76805ebfb5961fde42931.scope - libcontainer container 7b2046386dc4c9768ac822e105b197499dde97404dd76805ebfb5961fde42931.
Dec 16 13:09:45.258320 systemd[1]: Started cri-containerd-a3b821ac3af2595daad7af199d686d224b14735727e29a069bc58ccbbc959105.scope - libcontainer container a3b821ac3af2595daad7af199d686d224b14735727e29a069bc58ccbbc959105.
Dec 16 13:09:45.285918 containerd[1576]: time="2025-12-16T13:09:45.285840191Z" level=info msg="StartContainer for \"7b2046386dc4c9768ac822e105b197499dde97404dd76805ebfb5961fde42931\" returns successfully"
Dec 16 13:09:45.313001 containerd[1576]: time="2025-12-16T13:09:45.312939840Z" level=info msg="StartContainer for \"a3b821ac3af2595daad7af199d686d224b14735727e29a069bc58ccbbc959105\" returns successfully"
Dec 16 13:09:45.443967 kubelet[2739]: I1216 13:09:45.443809 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tkj8t" podStartSLOduration=25.443785913 podStartE2EDuration="25.443785913s" podCreationTimestamp="2025-12-16 13:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:09:45.443309024 +0000 UTC m=+33.259669662" watchObservedRunningTime="2025-12-16 13:09:45.443785913 +0000 UTC m=+33.260146551"
Dec 16 13:09:45.446324 kubelet[2739]: I1216 13:09:45.446210 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tdkdh" podStartSLOduration=25.446187304 podStartE2EDuration="25.446187304s" podCreationTimestamp="2025-12-16 13:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:09:45.424437609 +0000 UTC m=+33.240798247" watchObservedRunningTime="2025-12-16 13:09:45.446187304 +0000 UTC m=+33.262547942"
Dec 16 13:09:45.914997 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:53134.service - OpenSSH per-connection server daemon (10.0.0.1:53134).
Dec 16 13:09:45.990943 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 53134 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:09:45.992648 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:45.996999 systemd-logind[1558]: New session 9 of user core.
Dec 16 13:09:46.007296 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 13:09:46.162602 sshd[4102]: Connection closed by 10.0.0.1 port 53134
Dec 16 13:09:46.163027 sshd-session[4099]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:46.167702 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:53134.service: Deactivated successfully.
Dec 16 13:09:46.171498 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 13:09:46.172531 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit.
Dec 16 13:09:46.173961 systemd-logind[1558]: Removed session 9.
Dec 16 13:09:51.180285 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:49900.service - OpenSSH per-connection server daemon (10.0.0.1:49900).
Dec 16 13:09:51.259880 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 49900 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:09:51.261406 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:51.266151 systemd-logind[1558]: New session 10 of user core.
Dec 16 13:09:51.277203 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 13:09:51.461836 sshd[4126]: Connection closed by 10.0.0.1 port 49900
Dec 16 13:09:51.462101 sshd-session[4123]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:51.467200 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:49900.service: Deactivated successfully.
Dec 16 13:09:51.469635 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 13:09:51.471514 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit.
Dec 16 13:09:51.473578 systemd-logind[1558]: Removed session 10.
Dec 16 13:09:56.475306 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:49902.service - OpenSSH per-connection server daemon (10.0.0.1:49902).
Dec 16 13:09:56.535621 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 49902 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:09:56.536999 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:56.541778 systemd-logind[1558]: New session 11 of user core.
Dec 16 13:09:56.560220 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 13:09:56.680154 sshd[4143]: Connection closed by 10.0.0.1 port 49902
Dec 16 13:09:56.680680 sshd-session[4140]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:56.689142 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:49902.service: Deactivated successfully.
Dec 16 13:09:56.691169 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 13:09:56.692154 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit.
Dec 16 13:09:56.695196 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:49904.service - OpenSSH per-connection server daemon (10.0.0.1:49904).
Dec 16 13:09:56.697040 systemd-logind[1558]: Removed session 11.
Dec 16 13:09:56.748993 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 49904 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:09:56.751031 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:56.756028 systemd-logind[1558]: New session 12 of user core.
Dec 16 13:09:56.770211 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 13:09:56.928610 sshd[4160]: Connection closed by 10.0.0.1 port 49904
Dec 16 13:09:56.929162 sshd-session[4157]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:56.943650 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:49904.service: Deactivated successfully.
Dec 16 13:09:56.946882 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 13:09:56.949141 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit.
Dec 16 13:09:56.955378 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:49918.service - OpenSSH per-connection server daemon (10.0.0.1:49918).
Dec 16 13:09:56.956435 systemd-logind[1558]: Removed session 12.
Dec 16 13:09:57.013121 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 49918 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:09:57.015095 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:09:57.019984 systemd-logind[1558]: New session 13 of user core.
Dec 16 13:09:57.035215 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 13:09:57.160606 sshd[4175]: Connection closed by 10.0.0.1 port 49918
Dec 16 13:09:57.161087 sshd-session[4172]: pam_unix(sshd:session): session closed for user core
Dec 16 13:09:57.166705 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:49918.service: Deactivated successfully.
Dec 16 13:09:57.169453 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 13:09:57.170402 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit.
Dec 16 13:09:57.171997 systemd-logind[1558]: Removed session 13.
Dec 16 13:10:02.179938 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:50636.service - OpenSSH per-connection server daemon (10.0.0.1:50636).
Dec 16 13:10:02.235467 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 50636 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:02.236914 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:02.241216 systemd-logind[1558]: New session 14 of user core.
Dec 16 13:10:02.249185 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 13:10:02.366878 sshd[4191]: Connection closed by 10.0.0.1 port 50636
Dec 16 13:10:02.367224 sshd-session[4188]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:02.371678 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:50636.service: Deactivated successfully.
Dec 16 13:10:02.374088 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 13:10:02.374943 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit.
Dec 16 13:10:02.376237 systemd-logind[1558]: Removed session 14.
Dec 16 13:10:07.380699 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:50654.service - OpenSSH per-connection server daemon (10.0.0.1:50654).
Dec 16 13:10:07.439984 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 50654 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:07.441336 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:07.445925 systemd-logind[1558]: New session 15 of user core.
Dec 16 13:10:07.461197 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 13:10:07.594340 sshd[4208]: Connection closed by 10.0.0.1 port 50654
Dec 16 13:10:07.594766 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:07.599212 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:50654.service: Deactivated successfully.
Dec 16 13:10:07.601909 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 13:10:07.605072 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit.
Dec 16 13:10:07.606323 systemd-logind[1558]: Removed session 15.
Dec 16 13:10:12.607405 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:54224.service - OpenSSH per-connection server daemon (10.0.0.1:54224).
Dec 16 13:10:12.651306 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 54224 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:12.652906 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:12.657863 systemd-logind[1558]: New session 16 of user core.
Dec 16 13:10:12.665204 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 13:10:12.781575 sshd[4227]: Connection closed by 10.0.0.1 port 54224
Dec 16 13:10:12.781942 sshd-session[4224]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:12.792962 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:54224.service: Deactivated successfully.
Dec 16 13:10:12.795013 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 13:10:12.795890 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit.
Dec 16 13:10:12.798987 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:54230.service - OpenSSH per-connection server daemon (10.0.0.1:54230).
Dec 16 13:10:12.799769 systemd-logind[1558]: Removed session 16.
Dec 16 13:10:12.853671 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 54230 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:12.854883 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:12.859347 systemd-logind[1558]: New session 17 of user core.
Dec 16 13:10:12.869180 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 13:10:13.146142 sshd[4244]: Connection closed by 10.0.0.1 port 54230
Dec 16 13:10:13.146862 sshd-session[4240]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:13.157487 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:54230.service: Deactivated successfully.
Dec 16 13:10:13.160171 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 13:10:13.161437 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit.
Dec 16 13:10:13.165343 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:54246.service - OpenSSH per-connection server daemon (10.0.0.1:54246).
Dec 16 13:10:13.166342 systemd-logind[1558]: Removed session 17.
Dec 16 13:10:13.235659 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 54246 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:13.237662 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:13.244379 systemd-logind[1558]: New session 18 of user core.
Dec 16 13:10:13.256261 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 13:10:13.852202 sshd[4258]: Connection closed by 10.0.0.1 port 54246
Dec 16 13:10:13.853007 sshd-session[4255]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:13.872244 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:54246.service: Deactivated successfully.
Dec 16 13:10:13.876535 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 13:10:13.877482 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit.
Dec 16 13:10:13.884595 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:54254.service - OpenSSH per-connection server daemon (10.0.0.1:54254).
Dec 16 13:10:13.885540 systemd-logind[1558]: Removed session 18.
Dec 16 13:10:13.929264 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 54254 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:13.930884 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:13.937224 systemd-logind[1558]: New session 19 of user core.
Dec 16 13:10:13.944303 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 13:10:14.241206 sshd[4278]: Connection closed by 10.0.0.1 port 54254
Dec 16 13:10:14.241691 sshd-session[4275]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:14.254441 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:54254.service: Deactivated successfully.
Dec 16 13:10:14.258963 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 13:10:14.259938 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit.
Dec 16 13:10:14.263358 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:54266.service - OpenSSH per-connection server daemon (10.0.0.1:54266).
Dec 16 13:10:14.264295 systemd-logind[1558]: Removed session 19.
Dec 16 13:10:14.322704 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 54266 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:14.324628 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:14.329353 systemd-logind[1558]: New session 20 of user core.
Dec 16 13:10:14.348356 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 13:10:14.465176 sshd[4293]: Connection closed by 10.0.0.1 port 54266
Dec 16 13:10:14.465603 sshd-session[4289]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:14.471472 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:54266.service: Deactivated successfully.
Dec 16 13:10:14.473527 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 13:10:14.474382 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit.
Dec 16 13:10:14.475470 systemd-logind[1558]: Removed session 20.
Dec 16 13:10:19.479003 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:54312.service - OpenSSH per-connection server daemon (10.0.0.1:54312).
Dec 16 13:10:19.532637 sshd[4308]: Accepted publickey for core from 10.0.0.1 port 54312 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:19.534034 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:19.539239 systemd-logind[1558]: New session 21 of user core.
Dec 16 13:10:19.547229 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 13:10:19.680106 sshd[4311]: Connection closed by 10.0.0.1 port 54312
Dec 16 13:10:19.680517 sshd-session[4308]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:19.685218 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:54312.service: Deactivated successfully.
Dec 16 13:10:19.687795 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 13:10:19.689008 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit.
Dec 16 13:10:19.690877 systemd-logind[1558]: Removed session 21.
Dec 16 13:10:24.692416 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:37270.service - OpenSSH per-connection server daemon (10.0.0.1:37270).
Dec 16 13:10:24.749953 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 37270 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:24.751931 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:24.756887 systemd-logind[1558]: New session 22 of user core.
Dec 16 13:10:24.765221 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 13:10:24.884764 sshd[4331]: Connection closed by 10.0.0.1 port 37270
Dec 16 13:10:24.885222 sshd-session[4328]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:24.891091 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:37270.service: Deactivated successfully.
Dec 16 13:10:24.893409 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 13:10:24.894319 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit.
Dec 16 13:10:24.895929 systemd-logind[1558]: Removed session 22.
Dec 16 13:10:29.906727 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:37360.service - OpenSSH per-connection server daemon (10.0.0.1:37360).
Dec 16 13:10:29.975078 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 37360 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:29.976874 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:29.981766 systemd-logind[1558]: New session 23 of user core.
Dec 16 13:10:29.989189 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 13:10:30.120851 sshd[4348]: Connection closed by 10.0.0.1 port 37360
Dec 16 13:10:30.121475 sshd-session[4345]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:30.139495 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:37360.service: Deactivated successfully.
Dec 16 13:10:30.142686 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 13:10:30.143873 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit.
Dec 16 13:10:30.149457 systemd[1]: Started sshd@23-10.0.0.130:22-10.0.0.1:37436.service - OpenSSH per-connection server daemon (10.0.0.1:37436).
Dec 16 13:10:30.150616 systemd-logind[1558]: Removed session 23.
Dec 16 13:10:30.212581 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 37436 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:30.215273 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:30.222408 systemd-logind[1558]: New session 24 of user core.
Dec 16 13:10:30.233531 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 13:10:31.586280 containerd[1576]: time="2025-12-16T13:10:31.586215612Z" level=info msg="StopContainer for \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\" with timeout 30 (s)"
Dec 16 13:10:31.608871 containerd[1576]: time="2025-12-16T13:10:31.608805481Z" level=info msg="Stop container \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\" with signal terminated"
Dec 16 13:10:31.674878 systemd[1]: cri-containerd-b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f.scope: Deactivated successfully.
Dec 16 13:10:31.677530 containerd[1576]: time="2025-12-16T13:10:31.677468533Z" level=info msg="received container exit event container_id:\"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\" id:\"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\" pid:3161 exited_at:{seconds:1765890631 nanos:677041002}" Dec 16 13:10:31.690345 containerd[1576]: time="2025-12-16T13:10:31.689881344Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:10:31.696004 containerd[1576]: time="2025-12-16T13:10:31.695952004Z" level=info msg="StopContainer for \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\" with timeout 2 (s)" Dec 16 13:10:31.696583 containerd[1576]: time="2025-12-16T13:10:31.696551569Z" level=info msg="Stop container \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\" with signal terminated" Dec 16 13:10:31.708035 systemd-networkd[1461]: lxc_health: Link DOWN Dec 16 13:10:31.708050 systemd-networkd[1461]: lxc_health: Lost carrier Dec 16 13:10:31.716575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f-rootfs.mount: Deactivated successfully. Dec 16 13:10:31.727160 systemd[1]: cri-containerd-6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6.scope: Deactivated successfully. 
Dec 16 13:10:31.728695 containerd[1576]: time="2025-12-16T13:10:31.728386329Z" level=info msg="received container exit event container_id:\"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\" id:\"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\" pid:3407 exited_at:{seconds:1765890631 nanos:727872776}" Dec 16 13:10:31.727678 systemd[1]: cri-containerd-6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6.scope: Consumed 7.097s CPU time, 123.5M memory peak, 441K read from disk, 13.3M written to disk. Dec 16 13:10:31.753023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6-rootfs.mount: Deactivated successfully. Dec 16 13:10:31.803728 containerd[1576]: time="2025-12-16T13:10:31.803668380Z" level=info msg="StopContainer for \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\" returns successfully" Dec 16 13:10:31.804832 containerd[1576]: time="2025-12-16T13:10:31.804793218Z" level=info msg="StopContainer for \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\" returns successfully" Dec 16 13:10:31.806810 containerd[1576]: time="2025-12-16T13:10:31.806782528Z" level=info msg="StopPodSandbox for \"9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5\"" Dec 16 13:10:31.808077 containerd[1576]: time="2025-12-16T13:10:31.808027862Z" level=info msg="StopPodSandbox for \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\"" Dec 16 13:10:31.811427 containerd[1576]: time="2025-12-16T13:10:31.811386839Z" level=info msg="Container to stop \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:10:31.817291 containerd[1576]: time="2025-12-16T13:10:31.817251273Z" level=info msg="Container to stop \"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Dec 16 13:10:31.817291 containerd[1576]: time="2025-12-16T13:10:31.817279766Z" level=info msg="Container to stop \"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:10:31.817291 containerd[1576]: time="2025-12-16T13:10:31.817293392Z" level=info msg="Container to stop \"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:10:31.817291 containerd[1576]: time="2025-12-16T13:10:31.817303981Z" level=info msg="Container to stop \"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:10:31.817291 containerd[1576]: time="2025-12-16T13:10:31.817313950Z" level=info msg="Container to stop \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:10:31.820030 systemd[1]: cri-containerd-9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5.scope: Deactivated successfully. Dec 16 13:10:31.825631 systemd[1]: cri-containerd-0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f.scope: Deactivated successfully. 
Dec 16 13:10:31.827042 containerd[1576]: time="2025-12-16T13:10:31.826988287Z" level=info msg="received sandbox exit event container_id:\"9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5\" id:\"9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5\" exit_status:137 exited_at:{seconds:1765890631 nanos:826514008}" monitor_name=podsandbox Dec 16 13:10:31.828846 containerd[1576]: time="2025-12-16T13:10:31.828762603Z" level=info msg="received sandbox exit event container_id:\"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" id:\"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" exit_status:137 exited_at:{seconds:1765890631 nanos:828568910}" monitor_name=podsandbox Dec 16 13:10:31.853929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f-rootfs.mount: Deactivated successfully. Dec 16 13:10:31.854171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5-rootfs.mount: Deactivated successfully. 
Dec 16 13:10:31.858793 containerd[1576]: time="2025-12-16T13:10:31.858754279Z" level=info msg="shim disconnected" id=0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f namespace=k8s.io Dec 16 13:10:31.858909 containerd[1576]: time="2025-12-16T13:10:31.858793272Z" level=warning msg="cleaning up after shim disconnected" id=0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f namespace=k8s.io Dec 16 13:10:31.874914 containerd[1576]: time="2025-12-16T13:10:31.858803030Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 13:10:31.875041 containerd[1576]: time="2025-12-16T13:10:31.861140682Z" level=info msg="shim disconnected" id=9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5 namespace=k8s.io Dec 16 13:10:31.875041 containerd[1576]: time="2025-12-16T13:10:31.874981470Z" level=warning msg="cleaning up after shim disconnected" id=9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5 namespace=k8s.io Dec 16 13:10:31.875041 containerd[1576]: time="2025-12-16T13:10:31.874990256Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 13:10:31.893116 containerd[1576]: time="2025-12-16T13:10:31.892884432Z" level=info msg="TearDown network for sandbox \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" successfully" Dec 16 13:10:31.893116 containerd[1576]: time="2025-12-16T13:10:31.892931831Z" level=info msg="StopPodSandbox for \"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" returns successfully" Dec 16 13:10:31.893962 containerd[1576]: time="2025-12-16T13:10:31.893854881Z" level=info msg="TearDown network for sandbox \"9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5\" successfully" Dec 16 13:10:31.893962 containerd[1576]: time="2025-12-16T13:10:31.893872975Z" level=info msg="StopPodSandbox for \"9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5\" returns successfully" Dec 16 13:10:31.895355 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f-shm.mount: Deactivated successfully. Dec 16 13:10:31.895490 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5-shm.mount: Deactivated successfully. Dec 16 13:10:31.904043 containerd[1576]: time="2025-12-16T13:10:31.903976076Z" level=info msg="received sandbox container exit event sandbox_id:\"0d9ee6ba94d8812d81dba576d41181931c607d4d1640f34f327514ffdf86656f\" exit_status:137 exited_at:{seconds:1765890631 nanos:828568910}" monitor_name=criService Dec 16 13:10:31.904376 containerd[1576]: time="2025-12-16T13:10:31.904129784Z" level=info msg="received sandbox container exit event sandbox_id:\"9495191e9805c600bfa90108de05de196889b3368aa07f96145493adf16761d5\" exit_status:137 exited_at:{seconds:1765890631 nanos:826514008}" monitor_name=criService Dec 16 13:10:32.109754 kubelet[2739]: I1216 13:10:32.109603 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-clustermesh-secrets\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.109754 kubelet[2739]: I1216 13:10:32.109646 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-644hw\" (UniqueName: \"kubernetes.io/projected/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-kube-api-access-644hw\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.109754 kubelet[2739]: I1216 13:10:32.109666 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-host-proc-sys-kernel\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: 
\"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.109754 kubelet[2739]: I1216 13:10:32.109682 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-etc-cni-netd\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.109754 kubelet[2739]: I1216 13:10:32.109700 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-hubble-tls\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.109754 kubelet[2739]: I1216 13:10:32.109713 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-host-proc-sys-net\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.110392 kubelet[2739]: I1216 13:10:32.109726 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-hostproc\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.110392 kubelet[2739]: I1216 13:10:32.109772 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6sb2\" (UniqueName: \"kubernetes.io/projected/fed41712-6b3e-4fa8-80b1-3b835a688b24-kube-api-access-b6sb2\") pod \"fed41712-6b3e-4fa8-80b1-3b835a688b24\" (UID: \"fed41712-6b3e-4fa8-80b1-3b835a688b24\") " Dec 16 13:10:32.110392 kubelet[2739]: I1216 13:10:32.109792 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-lib-modules\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.110392 kubelet[2739]: I1216 13:10:32.109805 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-xtables-lock\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.110392 kubelet[2739]: I1216 13:10:32.109820 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cilium-cgroup\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.110392 kubelet[2739]: I1216 13:10:32.109838 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cni-path\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.110540 kubelet[2739]: I1216 13:10:32.109851 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-bpf-maps\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.110540 kubelet[2739]: I1216 13:10:32.109865 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fed41712-6b3e-4fa8-80b1-3b835a688b24-cilium-config-path\") pod \"fed41712-6b3e-4fa8-80b1-3b835a688b24\" (UID: \"fed41712-6b3e-4fa8-80b1-3b835a688b24\") " Dec 16 13:10:32.110540 kubelet[2739]: I1216 13:10:32.109887 2739 reconciler_common.go:163] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cilium-config-path\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.110540 kubelet[2739]: I1216 13:10:32.109901 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cilium-run\") pod \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\" (UID: \"df154b5a-bfaa-4f78-a6fa-93fcd8fba501\") " Dec 16 13:10:32.110540 kubelet[2739]: I1216 13:10:32.109986 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:10:32.110895 kubelet[2739]: I1216 13:10:32.110779 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:10:32.110895 kubelet[2739]: I1216 13:10:32.110818 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cni-path" (OuterVolumeSpecName: "cni-path") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:10:32.110895 kubelet[2739]: I1216 13:10:32.110894 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:10:32.111104 kubelet[2739]: I1216 13:10:32.110915 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:10:32.111104 kubelet[2739]: I1216 13:10:32.110935 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:10:32.111104 kubelet[2739]: I1216 13:10:32.110958 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:10:32.111832 kubelet[2739]: I1216 13:10:32.111814 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:10:32.112244 kubelet[2739]: I1216 13:10:32.112181 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-hostproc" (OuterVolumeSpecName: "hostproc") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:10:32.112374 kubelet[2739]: I1216 13:10:32.112275 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:10:32.116323 kubelet[2739]: I1216 13:10:32.116301 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed41712-6b3e-4fa8-80b1-3b835a688b24-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fed41712-6b3e-4fa8-80b1-3b835a688b24" (UID: "fed41712-6b3e-4fa8-80b1-3b835a688b24"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:10:32.116439 kubelet[2739]: I1216 13:10:32.116361 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:10:32.117126 kubelet[2739]: I1216 13:10:32.117035 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-kube-api-access-644hw" (OuterVolumeSpecName: "kube-api-access-644hw") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "kube-api-access-644hw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:10:32.119112 kubelet[2739]: I1216 13:10:32.119073 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:10:32.119703 kubelet[2739]: I1216 13:10:32.119673 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "df154b5a-bfaa-4f78-a6fa-93fcd8fba501" (UID: "df154b5a-bfaa-4f78-a6fa-93fcd8fba501"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:10:32.120367 kubelet[2739]: I1216 13:10:32.120345 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed41712-6b3e-4fa8-80b1-3b835a688b24-kube-api-access-b6sb2" (OuterVolumeSpecName: "kube-api-access-b6sb2") pod "fed41712-6b3e-4fa8-80b1-3b835a688b24" (UID: "fed41712-6b3e-4fa8-80b1-3b835a688b24"). InnerVolumeSpecName "kube-api-access-b6sb2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:10:32.210821 kubelet[2739]: I1216 13:10:32.210769 2739 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.210821 kubelet[2739]: I1216 13:10:32.210803 2739 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.210821 kubelet[2739]: I1216 13:10:32.210812 2739 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.210821 kubelet[2739]: I1216 13:10:32.210823 2739 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.210821 kubelet[2739]: I1216 13:10:32.210831 2739 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b6sb2\" (UniqueName: \"kubernetes.io/projected/fed41712-6b3e-4fa8-80b1-3b835a688b24-kube-api-access-b6sb2\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.210821 kubelet[2739]: I1216 13:10:32.210839 2739 reconciler_common.go:299] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.211132 kubelet[2739]: I1216 13:10:32.210847 2739 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.211132 kubelet[2739]: I1216 13:10:32.210854 2739 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.211132 kubelet[2739]: I1216 13:10:32.210861 2739 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.211132 kubelet[2739]: I1216 13:10:32.210869 2739 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.211132 kubelet[2739]: I1216 13:10:32.210877 2739 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fed41712-6b3e-4fa8-80b1-3b835a688b24-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.211132 kubelet[2739]: I1216 13:10:32.210892 2739 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.211132 kubelet[2739]: I1216 13:10:32.210901 2739 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-cilium-run\") on node 
\"localhost\" DevicePath \"\"" Dec 16 13:10:32.211132 kubelet[2739]: I1216 13:10:32.210908 2739 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.211302 kubelet[2739]: I1216 13:10:32.210916 2739 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-644hw\" (UniqueName: \"kubernetes.io/projected/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-kube-api-access-644hw\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.211302 kubelet[2739]: I1216 13:10:32.210923 2739 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df154b5a-bfaa-4f78-a6fa-93fcd8fba501-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 16 13:10:32.293307 systemd[1]: Removed slice kubepods-besteffort-podfed41712_6b3e_4fa8_80b1_3b835a688b24.slice - libcontainer container kubepods-besteffort-podfed41712_6b3e_4fa8_80b1_3b835a688b24.slice. Dec 16 13:10:32.295210 systemd[1]: Removed slice kubepods-burstable-poddf154b5a_bfaa_4f78_a6fa_93fcd8fba501.slice - libcontainer container kubepods-burstable-poddf154b5a_bfaa_4f78_a6fa_93fcd8fba501.slice. Dec 16 13:10:32.295435 systemd[1]: kubepods-burstable-poddf154b5a_bfaa_4f78_a6fa_93fcd8fba501.slice: Consumed 7.219s CPU time, 123.9M memory peak, 449K read from disk, 13.3M written to disk. 
Dec 16 13:10:32.340328 kubelet[2739]: E1216 13:10:32.340264 2739 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 13:10:32.548079 kubelet[2739]: I1216 13:10:32.547911 2739 scope.go:117] "RemoveContainer" containerID="b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f" Dec 16 13:10:32.550649 containerd[1576]: time="2025-12-16T13:10:32.550583929Z" level=info msg="RemoveContainer for \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\"" Dec 16 13:10:32.589682 containerd[1576]: time="2025-12-16T13:10:32.589618559Z" level=info msg="RemoveContainer for \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\" returns successfully" Dec 16 13:10:32.590168 kubelet[2739]: I1216 13:10:32.589995 2739 scope.go:117] "RemoveContainer" containerID="b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f" Dec 16 13:10:32.590469 containerd[1576]: time="2025-12-16T13:10:32.590403512Z" level=error msg="ContainerStatus for \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\": not found" Dec 16 13:10:32.590689 kubelet[2739]: E1216 13:10:32.590652 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\": not found" containerID="b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f" Dec 16 13:10:32.590755 kubelet[2739]: I1216 13:10:32.590703 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f"} err="failed to get container status 
\"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0fbbdec33ac4e6779f312bfdc2f41af27d9e23170dc1ee79f2b545f937c568f\": not found"
Dec 16 13:10:32.590786 kubelet[2739]: I1216 13:10:32.590759 2739 scope.go:117] "RemoveContainer" containerID="6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6"
Dec 16 13:10:32.592701 containerd[1576]: time="2025-12-16T13:10:32.592660809Z" level=info msg="RemoveContainer for \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\""
Dec 16 13:10:32.597690 containerd[1576]: time="2025-12-16T13:10:32.597645433Z" level=info msg="RemoveContainer for \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\" returns successfully"
Dec 16 13:10:32.597858 kubelet[2739]: I1216 13:10:32.597816 2739 scope.go:117] "RemoveContainer" containerID="4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf"
Dec 16 13:10:32.599304 containerd[1576]: time="2025-12-16T13:10:32.599273027Z" level=info msg="RemoveContainer for \"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\""
Dec 16 13:10:32.603566 containerd[1576]: time="2025-12-16T13:10:32.603527631Z" level=info msg="RemoveContainer for \"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\" returns successfully"
Dec 16 13:10:32.603757 kubelet[2739]: I1216 13:10:32.603682 2739 scope.go:117] "RemoveContainer" containerID="f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d"
Dec 16 13:10:32.606253 containerd[1576]: time="2025-12-16T13:10:32.606209363Z" level=info msg="RemoveContainer for \"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\""
Dec 16 13:10:32.613940 containerd[1576]: time="2025-12-16T13:10:32.613886319Z" level=info msg="RemoveContainer for \"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\" returns successfully"
Dec 16 13:10:32.614201 kubelet[2739]: I1216 13:10:32.614169 2739 scope.go:117] "RemoveContainer" containerID="24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91"
Dec 16 13:10:32.616002 containerd[1576]: time="2025-12-16T13:10:32.615949822Z" level=info msg="RemoveContainer for \"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\""
Dec 16 13:10:32.627452 containerd[1576]: time="2025-12-16T13:10:32.627389689Z" level=info msg="RemoveContainer for \"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\" returns successfully"
Dec 16 13:10:32.627663 kubelet[2739]: I1216 13:10:32.627633 2739 scope.go:117] "RemoveContainer" containerID="1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6"
Dec 16 13:10:32.630290 containerd[1576]: time="2025-12-16T13:10:32.630241942Z" level=info msg="RemoveContainer for \"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\""
Dec 16 13:10:32.635823 containerd[1576]: time="2025-12-16T13:10:32.635654829Z" level=info msg="RemoveContainer for \"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\" returns successfully"
Dec 16 13:10:32.636629 kubelet[2739]: I1216 13:10:32.636405 2739 scope.go:117] "RemoveContainer" containerID="6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6"
Dec 16 13:10:32.637117 containerd[1576]: time="2025-12-16T13:10:32.637032265Z" level=error msg="ContainerStatus for \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\": not found"
Dec 16 13:10:32.637340 kubelet[2739]: E1216 13:10:32.637316 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\": not found" containerID="6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6"
Dec 16 13:10:32.637402 kubelet[2739]: I1216 13:10:32.637348 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6"} err="failed to get container status \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f3f0f3321bd90fc1d9dbb637763c8333d5b43f06c3f160fa8b2d39aeac6ebc6\": not found"
Dec 16 13:10:32.637402 kubelet[2739]: I1216 13:10:32.637385 2739 scope.go:117] "RemoveContainer" containerID="4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf"
Dec 16 13:10:32.637661 containerd[1576]: time="2025-12-16T13:10:32.637624166Z" level=error msg="ContainerStatus for \"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\": not found"
Dec 16 13:10:32.637779 kubelet[2739]: E1216 13:10:32.637755 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\": not found" containerID="4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf"
Dec 16 13:10:32.637832 kubelet[2739]: I1216 13:10:32.637777 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf"} err="failed to get container status \"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e8d81fcad46b3b47750f14709d29b212a504ef44ac6226c0309aff7fce45fdf\": not found"
Dec 16 13:10:32.637832 kubelet[2739]: I1216 13:10:32.637791 2739 scope.go:117] "RemoveContainer" containerID="f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d"
Dec 16 13:10:32.638029 containerd[1576]: time="2025-12-16T13:10:32.637953473Z" level=error msg="ContainerStatus for \"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\": not found"
Dec 16 13:10:32.638130 kubelet[2739]: E1216 13:10:32.638091 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\": not found" containerID="f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d"
Dec 16 13:10:32.638202 kubelet[2739]: I1216 13:10:32.638130 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d"} err="failed to get container status \"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9c0f21bf359074add6b2d7232152e96cae3ef0593e3152c8d77376513d6039d\": not found"
Dec 16 13:10:32.638202 kubelet[2739]: I1216 13:10:32.638151 2739 scope.go:117] "RemoveContainer" containerID="24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91"
Dec 16 13:10:32.638497 containerd[1576]: time="2025-12-16T13:10:32.638446138Z" level=error msg="ContainerStatus for \"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\": not found"
Dec 16 13:10:32.638766 kubelet[2739]: E1216 13:10:32.638733 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\": not found" containerID="24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91"
Dec 16 13:10:32.638821 kubelet[2739]: I1216 13:10:32.638780 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91"} err="failed to get container status \"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\": rpc error: code = NotFound desc = an error occurred when try to find container \"24479656a9b470d1527df8a945a340360e2a89247ea156b86d41c85cd8623c91\": not found"
Dec 16 13:10:32.638821 kubelet[2739]: I1216 13:10:32.638814 2739 scope.go:117] "RemoveContainer" containerID="1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6"
Dec 16 13:10:32.639087 containerd[1576]: time="2025-12-16T13:10:32.639033991Z" level=error msg="ContainerStatus for \"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\": not found"
Dec 16 13:10:32.639338 kubelet[2739]: E1216 13:10:32.639253 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\": not found" containerID="1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6"
Dec 16 13:10:32.639338 kubelet[2739]: I1216 13:10:32.639293 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6"} err="failed to get container status \"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a708eeefd4d8a9b5ab6ab4e817a81a98c5e3c0a7131f948a2f2cb1daf8d53a6\": not found"
Dec 16 13:10:32.716340 systemd[1]: var-lib-kubelet-pods-df154b5a\x2dbfaa\x2d4f78\x2da6fa\x2d93fcd8fba501-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d644hw.mount: Deactivated successfully.
Dec 16 13:10:32.716492 systemd[1]: var-lib-kubelet-pods-fed41712\x2d6b3e\x2d4fa8\x2d80b1\x2d3b835a688b24-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db6sb2.mount: Deactivated successfully.
Dec 16 13:10:32.716583 systemd[1]: var-lib-kubelet-pods-df154b5a\x2dbfaa\x2d4f78\x2da6fa\x2d93fcd8fba501-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 16 13:10:32.716675 systemd[1]: var-lib-kubelet-pods-df154b5a\x2dbfaa\x2d4f78\x2da6fa\x2d93fcd8fba501-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 16 13:10:33.549038 sshd[4364]: Connection closed by 10.0.0.1 port 37436
Dec 16 13:10:33.550202 sshd-session[4361]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:33.561177 systemd[1]: sshd@23-10.0.0.130:22-10.0.0.1:37436.service: Deactivated successfully.
Dec 16 13:10:33.563903 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 13:10:33.565044 systemd-logind[1558]: Session 24 logged out. Waiting for processes to exit.
Dec 16 13:10:33.569272 systemd[1]: Started sshd@24-10.0.0.130:22-10.0.0.1:44772.service - OpenSSH per-connection server daemon (10.0.0.1:44772).
Dec 16 13:10:33.570424 systemd-logind[1558]: Removed session 24.
Dec 16 13:10:33.637007 sshd[4512]: Accepted publickey for core from 10.0.0.1 port 44772 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:33.638905 sshd-session[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:33.644590 systemd-logind[1558]: New session 25 of user core.
Dec 16 13:10:33.655390 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 16 13:10:34.286762 kubelet[2739]: I1216 13:10:34.286706 2739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df154b5a-bfaa-4f78-a6fa-93fcd8fba501" path="/var/lib/kubelet/pods/df154b5a-bfaa-4f78-a6fa-93fcd8fba501/volumes"
Dec 16 13:10:34.287747 kubelet[2739]: I1216 13:10:34.287711 2739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fed41712-6b3e-4fa8-80b1-3b835a688b24" path="/var/lib/kubelet/pods/fed41712-6b3e-4fa8-80b1-3b835a688b24/volumes"
Dec 16 13:10:34.374204 kubelet[2739]: I1216 13:10:34.373719 2739 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T13:10:34Z","lastTransitionTime":"2025-12-16T13:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 16 13:10:34.614684 sshd[4515]: Connection closed by 10.0.0.1 port 44772
Dec 16 13:10:34.615017 sshd-session[4512]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:34.630133 systemd[1]: sshd@24-10.0.0.130:22-10.0.0.1:44772.service: Deactivated successfully.
Dec 16 13:10:34.636627 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 13:10:34.638300 systemd-logind[1558]: Session 25 logged out. Waiting for processes to exit.
Dec 16 13:10:34.645011 systemd[1]: Started sshd@25-10.0.0.130:22-10.0.0.1:44774.service - OpenSSH per-connection server daemon (10.0.0.1:44774).
Dec 16 13:10:34.646851 systemd-logind[1558]: Removed session 25.
Dec 16 13:10:34.661796 systemd[1]: Created slice kubepods-burstable-podabfc82d2_b5b2_478d_9a32_de61a253518d.slice - libcontainer container kubepods-burstable-podabfc82d2_b5b2_478d_9a32_de61a253518d.slice.
Dec 16 13:10:34.712093 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 44774 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:34.714375 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:34.721170 systemd-logind[1558]: New session 26 of user core.
Dec 16 13:10:34.726713 kubelet[2739]: I1216 13:10:34.726662 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abfc82d2-b5b2-478d-9a32-de61a253518d-cilium-run\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.726713 kubelet[2739]: I1216 13:10:34.726696 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abfc82d2-b5b2-478d-9a32-de61a253518d-hubble-tls\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.726713 kubelet[2739]: I1216 13:10:34.726711 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/abfc82d2-b5b2-478d-9a32-de61a253518d-cilium-ipsec-secrets\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.726713 kubelet[2739]: I1216 13:10:34.726727 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abfc82d2-b5b2-478d-9a32-de61a253518d-xtables-lock\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.726983 kubelet[2739]: I1216 13:10:34.726744 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abfc82d2-b5b2-478d-9a32-de61a253518d-clustermesh-secrets\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.726983 kubelet[2739]: I1216 13:10:34.726758 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abfc82d2-b5b2-478d-9a32-de61a253518d-cilium-config-path\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.726983 kubelet[2739]: I1216 13:10:34.726887 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abfc82d2-b5b2-478d-9a32-de61a253518d-host-proc-sys-kernel\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.726983 kubelet[2739]: I1216 13:10:34.726934 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abfc82d2-b5b2-478d-9a32-de61a253518d-etc-cni-netd\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.727100 kubelet[2739]: I1216 13:10:34.726988 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abfc82d2-b5b2-478d-9a32-de61a253518d-lib-modules\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.727100 kubelet[2739]: I1216 13:10:34.727008 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abfc82d2-b5b2-478d-9a32-de61a253518d-bpf-maps\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.727100 kubelet[2739]: I1216 13:10:34.727027 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abfc82d2-b5b2-478d-9a32-de61a253518d-cilium-cgroup\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.727100 kubelet[2739]: I1216 13:10:34.727041 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abfc82d2-b5b2-478d-9a32-de61a253518d-host-proc-sys-net\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.727100 kubelet[2739]: I1216 13:10:34.727076 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b42mk\" (UniqueName: \"kubernetes.io/projected/abfc82d2-b5b2-478d-9a32-de61a253518d-kube-api-access-b42mk\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.727251 kubelet[2739]: I1216 13:10:34.727103 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abfc82d2-b5b2-478d-9a32-de61a253518d-hostproc\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.727251 kubelet[2739]: I1216 13:10:34.727131 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abfc82d2-b5b2-478d-9a32-de61a253518d-cni-path\") pod \"cilium-jj9c8\" (UID: \"abfc82d2-b5b2-478d-9a32-de61a253518d\") " pod="kube-system/cilium-jj9c8"
Dec 16 13:10:34.729318 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 13:10:34.784028 sshd[4530]: Connection closed by 10.0.0.1 port 44774
Dec 16 13:10:34.784680 sshd-session[4527]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:34.802769 systemd[1]: sshd@25-10.0.0.130:22-10.0.0.1:44774.service: Deactivated successfully.
Dec 16 13:10:34.805402 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 13:10:34.806510 systemd-logind[1558]: Session 26 logged out. Waiting for processes to exit.
Dec 16 13:10:34.809959 systemd[1]: Started sshd@26-10.0.0.130:22-10.0.0.1:44786.service - OpenSSH per-connection server daemon (10.0.0.1:44786).
Dec 16 13:10:34.811484 systemd-logind[1558]: Removed session 26.
Dec 16 13:10:34.877959 sshd[4537]: Accepted publickey for core from 10.0.0.1 port 44786 ssh2: RSA SHA256:cDcH/+jjHLxoF3s01JIELyBK+nBySby6n6uc9s4z+Lg
Dec 16 13:10:34.880311 sshd-session[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:10:34.885539 systemd-logind[1558]: New session 27 of user core.
Dec 16 13:10:34.900381 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 16 13:10:35.093496 containerd[1576]: time="2025-12-16T13:10:35.093437125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jj9c8,Uid:abfc82d2-b5b2-478d-9a32-de61a253518d,Namespace:kube-system,Attempt:0,}"
Dec 16 13:10:35.597876 containerd[1576]: time="2025-12-16T13:10:35.597824305Z" level=info msg="connecting to shim f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061" address="unix:///run/containerd/s/4a32a32819c78601c6df5f087803374b221c7d302d0ae4392c5c4948386998d4" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:10:35.624233 systemd[1]: Started cri-containerd-f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061.scope - libcontainer container f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061.
Dec 16 13:10:35.678803 containerd[1576]: time="2025-12-16T13:10:35.678738875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jj9c8,Uid:abfc82d2-b5b2-478d-9a32-de61a253518d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061\""
Dec 16 13:10:35.743781 containerd[1576]: time="2025-12-16T13:10:35.743728146Z" level=info msg="CreateContainer within sandbox \"f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 13:10:35.940482 containerd[1576]: time="2025-12-16T13:10:35.940417439Z" level=info msg="Container 9c258efbca57423c49cfa92080a0449bafffc7b5616321edbfb16763363491fa: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:10:36.004710 containerd[1576]: time="2025-12-16T13:10:36.004646795Z" level=info msg="CreateContainer within sandbox \"f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c258efbca57423c49cfa92080a0449bafffc7b5616321edbfb16763363491fa\""
Dec 16 13:10:36.005343 containerd[1576]: time="2025-12-16T13:10:36.005298694Z" level=info msg="StartContainer for \"9c258efbca57423c49cfa92080a0449bafffc7b5616321edbfb16763363491fa\""
Dec 16 13:10:36.006348 containerd[1576]: time="2025-12-16T13:10:36.006316120Z" level=info msg="connecting to shim 9c258efbca57423c49cfa92080a0449bafffc7b5616321edbfb16763363491fa" address="unix:///run/containerd/s/4a32a32819c78601c6df5f087803374b221c7d302d0ae4392c5c4948386998d4" protocol=ttrpc version=3
Dec 16 13:10:36.031339 systemd[1]: Started cri-containerd-9c258efbca57423c49cfa92080a0449bafffc7b5616321edbfb16763363491fa.scope - libcontainer container 9c258efbca57423c49cfa92080a0449bafffc7b5616321edbfb16763363491fa.
Dec 16 13:10:36.071490 containerd[1576]: time="2025-12-16T13:10:36.071429703Z" level=info msg="StartContainer for \"9c258efbca57423c49cfa92080a0449bafffc7b5616321edbfb16763363491fa\" returns successfully"
Dec 16 13:10:36.081763 systemd[1]: cri-containerd-9c258efbca57423c49cfa92080a0449bafffc7b5616321edbfb16763363491fa.scope: Deactivated successfully.
Dec 16 13:10:36.084033 containerd[1576]: time="2025-12-16T13:10:36.083981284Z" level=info msg="received container exit event container_id:\"9c258efbca57423c49cfa92080a0449bafffc7b5616321edbfb16763363491fa\" id:\"9c258efbca57423c49cfa92080a0449bafffc7b5616321edbfb16763363491fa\" pid:4609 exited_at:{seconds:1765890636 nanos:83542377}"
Dec 16 13:10:36.113372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c258efbca57423c49cfa92080a0449bafffc7b5616321edbfb16763363491fa-rootfs.mount: Deactivated successfully.
Dec 16 13:10:36.576276 containerd[1576]: time="2025-12-16T13:10:36.576211079Z" level=info msg="CreateContainer within sandbox \"f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 13:10:36.584071 containerd[1576]: time="2025-12-16T13:10:36.584002243Z" level=info msg="Container b969fbe9b42363961731a845a07be848dc50181c2dbd348663b3e1c5b9554a37: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:10:36.592980 containerd[1576]: time="2025-12-16T13:10:36.592914400Z" level=info msg="CreateContainer within sandbox \"f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b969fbe9b42363961731a845a07be848dc50181c2dbd348663b3e1c5b9554a37\""
Dec 16 13:10:36.593578 containerd[1576]: time="2025-12-16T13:10:36.593540479Z" level=info msg="StartContainer for \"b969fbe9b42363961731a845a07be848dc50181c2dbd348663b3e1c5b9554a37\""
Dec 16 13:10:36.594705 containerd[1576]: time="2025-12-16T13:10:36.594670868Z" level=info msg="connecting to shim b969fbe9b42363961731a845a07be848dc50181c2dbd348663b3e1c5b9554a37" address="unix:///run/containerd/s/4a32a32819c78601c6df5f087803374b221c7d302d0ae4392c5c4948386998d4" protocol=ttrpc version=3
Dec 16 13:10:36.620190 systemd[1]: Started cri-containerd-b969fbe9b42363961731a845a07be848dc50181c2dbd348663b3e1c5b9554a37.scope - libcontainer container b969fbe9b42363961731a845a07be848dc50181c2dbd348663b3e1c5b9554a37.
Dec 16 13:10:36.663323 containerd[1576]: time="2025-12-16T13:10:36.663276729Z" level=info msg="StartContainer for \"b969fbe9b42363961731a845a07be848dc50181c2dbd348663b3e1c5b9554a37\" returns successfully"
Dec 16 13:10:36.671317 systemd[1]: cri-containerd-b969fbe9b42363961731a845a07be848dc50181c2dbd348663b3e1c5b9554a37.scope: Deactivated successfully.
Dec 16 13:10:36.672871 containerd[1576]: time="2025-12-16T13:10:36.672822167Z" level=info msg="received container exit event container_id:\"b969fbe9b42363961731a845a07be848dc50181c2dbd348663b3e1c5b9554a37\" id:\"b969fbe9b42363961731a845a07be848dc50181c2dbd348663b3e1c5b9554a37\" pid:4655 exited_at:{seconds:1765890636 nanos:672511773}"
Dec 16 13:10:37.341941 kubelet[2739]: E1216 13:10:37.341816 2739 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 16 13:10:37.959141 containerd[1576]: time="2025-12-16T13:10:37.959041966Z" level=info msg="CreateContainer within sandbox \"f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 13:10:38.320214 containerd[1576]: time="2025-12-16T13:10:38.320044596Z" level=info msg="Container 7c92f5275728f36ee6e8e7c258d9e8c872346e8765ad358d422f3ada15f96a33: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:10:38.324230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731632169.mount: Deactivated successfully.
Dec 16 13:10:38.600018 containerd[1576]: time="2025-12-16T13:10:38.599881958Z" level=info msg="CreateContainer within sandbox \"f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c92f5275728f36ee6e8e7c258d9e8c872346e8765ad358d422f3ada15f96a33\""
Dec 16 13:10:38.600675 containerd[1576]: time="2025-12-16T13:10:38.600623638Z" level=info msg="StartContainer for \"7c92f5275728f36ee6e8e7c258d9e8c872346e8765ad358d422f3ada15f96a33\""
Dec 16 13:10:38.602041 containerd[1576]: time="2025-12-16T13:10:38.602019121Z" level=info msg="connecting to shim 7c92f5275728f36ee6e8e7c258d9e8c872346e8765ad358d422f3ada15f96a33" address="unix:///run/containerd/s/4a32a32819c78601c6df5f087803374b221c7d302d0ae4392c5c4948386998d4" protocol=ttrpc version=3
Dec 16 13:10:38.621357 systemd[1]: Started cri-containerd-7c92f5275728f36ee6e8e7c258d9e8c872346e8765ad358d422f3ada15f96a33.scope - libcontainer container 7c92f5275728f36ee6e8e7c258d9e8c872346e8765ad358d422f3ada15f96a33.
Dec 16 13:10:38.707226 systemd[1]: cri-containerd-7c92f5275728f36ee6e8e7c258d9e8c872346e8765ad358d422f3ada15f96a33.scope: Deactivated successfully.
Dec 16 13:10:38.962627 containerd[1576]: time="2025-12-16T13:10:38.962559123Z" level=info msg="received container exit event container_id:\"7c92f5275728f36ee6e8e7c258d9e8c872346e8765ad358d422f3ada15f96a33\" id:\"7c92f5275728f36ee6e8e7c258d9e8c872346e8765ad358d422f3ada15f96a33\" pid:4701 exited_at:{seconds:1765890638 nanos:708465801}"
Dec 16 13:10:38.964424 containerd[1576]: time="2025-12-16T13:10:38.964395919Z" level=info msg="StartContainer for \"7c92f5275728f36ee6e8e7c258d9e8c872346e8765ad358d422f3ada15f96a33\" returns successfully"
Dec 16 13:10:38.989463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c92f5275728f36ee6e8e7c258d9e8c872346e8765ad358d422f3ada15f96a33-rootfs.mount: Deactivated successfully.
Dec 16 13:10:40.595780 containerd[1576]: time="2025-12-16T13:10:40.595723318Z" level=info msg="CreateContainer within sandbox \"f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 13:10:40.604331 containerd[1576]: time="2025-12-16T13:10:40.604282701Z" level=info msg="Container eef7db72d8ff50794303d13a3ac8d22b1200723286e9a5f09f6346581cd6a15b: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:10:40.615966 containerd[1576]: time="2025-12-16T13:10:40.615905625Z" level=info msg="CreateContainer within sandbox \"f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eef7db72d8ff50794303d13a3ac8d22b1200723286e9a5f09f6346581cd6a15b\""
Dec 16 13:10:40.616940 containerd[1576]: time="2025-12-16T13:10:40.616839088Z" level=info msg="StartContainer for \"eef7db72d8ff50794303d13a3ac8d22b1200723286e9a5f09f6346581cd6a15b\""
Dec 16 13:10:40.617949 containerd[1576]: time="2025-12-16T13:10:40.617918217Z" level=info msg="connecting to shim eef7db72d8ff50794303d13a3ac8d22b1200723286e9a5f09f6346581cd6a15b" address="unix:///run/containerd/s/4a32a32819c78601c6df5f087803374b221c7d302d0ae4392c5c4948386998d4" protocol=ttrpc version=3
Dec 16 13:10:40.645384 systemd[1]: Started cri-containerd-eef7db72d8ff50794303d13a3ac8d22b1200723286e9a5f09f6346581cd6a15b.scope - libcontainer container eef7db72d8ff50794303d13a3ac8d22b1200723286e9a5f09f6346581cd6a15b.
Dec 16 13:10:40.680764 systemd[1]: cri-containerd-eef7db72d8ff50794303d13a3ac8d22b1200723286e9a5f09f6346581cd6a15b.scope: Deactivated successfully.
Dec 16 13:10:40.682508 containerd[1576]: time="2025-12-16T13:10:40.682468615Z" level=info msg="received container exit event container_id:\"eef7db72d8ff50794303d13a3ac8d22b1200723286e9a5f09f6346581cd6a15b\" id:\"eef7db72d8ff50794303d13a3ac8d22b1200723286e9a5f09f6346581cd6a15b\" pid:4740 exited_at:{seconds:1765890640 nanos:680967568}"
Dec 16 13:10:40.692800 containerd[1576]: time="2025-12-16T13:10:40.692754652Z" level=info msg="StartContainer for \"eef7db72d8ff50794303d13a3ac8d22b1200723286e9a5f09f6346581cd6a15b\" returns successfully"
Dec 16 13:10:40.709943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eef7db72d8ff50794303d13a3ac8d22b1200723286e9a5f09f6346581cd6a15b-rootfs.mount: Deactivated successfully.
Dec 16 13:10:41.736400 containerd[1576]: time="2025-12-16T13:10:41.736337476Z" level=info msg="CreateContainer within sandbox \"f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 13:10:41.753318 containerd[1576]: time="2025-12-16T13:10:41.753238607Z" level=info msg="Container acee2c0c8c7a1022c9e8187c01da1bc2c9c233bd9671847be11282463feaf8c5: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:10:41.768480 containerd[1576]: time="2025-12-16T13:10:41.768405448Z" level=info msg="CreateContainer within sandbox \"f9109c096722b5f6ce5e8e50342cd29f4b57fc77a25986d08e57ce175179e061\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"acee2c0c8c7a1022c9e8187c01da1bc2c9c233bd9671847be11282463feaf8c5\""
Dec 16 13:10:41.769177 containerd[1576]: time="2025-12-16T13:10:41.769155366Z" level=info msg="StartContainer for \"acee2c0c8c7a1022c9e8187c01da1bc2c9c233bd9671847be11282463feaf8c5\""
Dec 16 13:10:41.770589 containerd[1576]: time="2025-12-16T13:10:41.770537109Z" level=info msg="connecting to shim acee2c0c8c7a1022c9e8187c01da1bc2c9c233bd9671847be11282463feaf8c5" address="unix:///run/containerd/s/4a32a32819c78601c6df5f087803374b221c7d302d0ae4392c5c4948386998d4" protocol=ttrpc version=3
Dec 16 13:10:41.796775 systemd[1]: Started cri-containerd-acee2c0c8c7a1022c9e8187c01da1bc2c9c233bd9671847be11282463feaf8c5.scope - libcontainer container acee2c0c8c7a1022c9e8187c01da1bc2c9c233bd9671847be11282463feaf8c5.
Dec 16 13:10:41.865348 containerd[1576]: time="2025-12-16T13:10:41.865296404Z" level=info msg="StartContainer for \"acee2c0c8c7a1022c9e8187c01da1bc2c9c233bd9671847be11282463feaf8c5\" returns successfully"
Dec 16 13:10:42.394518 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Dec 16 13:10:42.688533 kubelet[2739]: I1216 13:10:42.688423 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jj9c8" podStartSLOduration=8.688381182 podStartE2EDuration="8.688381182s" podCreationTimestamp="2025-12-16 13:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:10:42.688035488 +0000 UTC m=+90.504396136" watchObservedRunningTime="2025-12-16 13:10:42.688381182 +0000 UTC m=+90.504741820"
Dec 16 13:10:45.758201 systemd-networkd[1461]: lxc_health: Link UP
Dec 16 13:10:45.758678 systemd-networkd[1461]: lxc_health: Gained carrier
Dec 16 13:10:47.523275 systemd-networkd[1461]: lxc_health: Gained IPv6LL
Dec 16 13:10:52.077944 sshd[4544]: Connection closed by 10.0.0.1 port 44786
Dec 16 13:10:52.078488 sshd-session[4537]: pam_unix(sshd:session): session closed for user core
Dec 16 13:10:52.082993 systemd[1]: sshd@26-10.0.0.130:22-10.0.0.1:44786.service: Deactivated successfully.
Dec 16 13:10:52.085399 systemd[1]: session-27.scope: Deactivated successfully.
Dec 16 13:10:52.086391 systemd-logind[1558]: Session 27 logged out. Waiting for processes to exit.
Dec 16 13:10:52.087690 systemd-logind[1558]: Removed session 27.