Aug 13 00:46:49.847244 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 00:46:49.847270 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:46:49.847279 kernel: BIOS-provided physical RAM map:
Aug 13 00:46:49.847285 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 13 00:46:49.847292 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 13 00:46:49.847298 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 00:46:49.847306 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Aug 13 00:46:49.847315 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Aug 13 00:46:49.847324 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 00:46:49.847331 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 00:46:49.847338 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 00:46:49.847344 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 00:46:49.847351 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 00:46:49.847357 kernel: NX (Execute Disable) protection: active
Aug 13 00:46:49.847368 kernel: APIC: Static calls initialized
Aug 13 00:46:49.847375 kernel: SMBIOS 2.8 present.
Aug 13 00:46:49.847385 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Aug 13 00:46:49.847392 kernel: DMI: Memory slots populated: 1/1
Aug 13 00:46:49.847399 kernel: Hypervisor detected: KVM
Aug 13 00:46:49.847406 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:46:49.847413 kernel: kvm-clock: using sched offset of 4917086216 cycles
Aug 13 00:46:49.847421 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:46:49.847428 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 00:46:49.847436 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:46:49.847446 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:46:49.847453 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Aug 13 00:46:49.847461 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 00:46:49.847468 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:46:49.847476 kernel: Using GB pages for direct mapping
Aug 13 00:46:49.847483 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:46:49.847490 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Aug 13 00:46:49.847497 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:49.847507 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:49.847514 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:49.847521 kernel: ACPI: FACS 0x000000009CFE0000 000040
Aug 13 00:46:49.847529 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:49.847536 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:49.847543 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:49.847550 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:49.847558 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Aug 13 00:46:49.847584 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Aug 13 00:46:49.847607 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Aug 13 00:46:49.847625 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Aug 13 00:46:49.847633 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Aug 13 00:46:49.847641 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Aug 13 00:46:49.847648 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Aug 13 00:46:49.847659 kernel: No NUMA configuration found
Aug 13 00:46:49.847666 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Aug 13 00:46:49.847674 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Aug 13 00:46:49.847681 kernel: Zone ranges:
Aug 13 00:46:49.847689 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:46:49.847697 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Aug 13 00:46:49.847710 kernel: Normal empty
Aug 13 00:46:49.847717 kernel: Device empty
Aug 13 00:46:49.847724 kernel: Movable zone start for each node
Aug 13 00:46:49.847732 kernel: Early memory node ranges
Aug 13 00:46:49.847743 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 00:46:49.847750 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Aug 13 00:46:49.847758 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Aug 13 00:46:49.847765 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:46:49.847772 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 00:46:49.847783 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Aug 13 00:46:49.847790 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:46:49.847800 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:46:49.847807 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:46:49.847818 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:46:49.847825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:46:49.847835 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:46:49.847842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:46:49.847850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:46:49.847857 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:46:49.847864 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:46:49.847893 kernel: TSC deadline timer available
Aug 13 00:46:49.847901 kernel: CPU topo: Max. logical packages: 1
Aug 13 00:46:49.847912 kernel: CPU topo: Max. logical dies: 1
Aug 13 00:46:49.847919 kernel: CPU topo: Max. dies per package: 1
Aug 13 00:46:49.847926 kernel: CPU topo: Max. threads per core: 1
Aug 13 00:46:49.847933 kernel: CPU topo: Num. cores per package: 4
Aug 13 00:46:49.847941 kernel: CPU topo: Num. threads per package: 4
Aug 13 00:46:49.847948 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Aug 13 00:46:49.847956 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 00:46:49.847963 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 00:46:49.847971 kernel: kvm-guest: setup PV sched yield
Aug 13 00:46:49.847978 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 00:46:49.847988 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:46:49.847996 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:46:49.848004 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 13 00:46:49.848012 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Aug 13 00:46:49.848019 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Aug 13 00:46:49.848027 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 00:46:49.848034 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:46:49.848041 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:46:49.848050 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:46:49.848060 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:46:49.848068 kernel: random: crng init done
Aug 13 00:46:49.848075 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:46:49.848083 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:46:49.848090 kernel: Fallback order for Node 0: 0
Aug 13 00:46:49.848098 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Aug 13 00:46:49.848105 kernel: Policy zone: DMA32
Aug 13 00:46:49.848112 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:46:49.848122 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:46:49.848130 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 00:46:49.848137 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 00:46:49.848145 kernel: Dynamic Preempt: voluntary
Aug 13 00:46:49.848152 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:46:49.848160 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:46:49.848168 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:46:49.848175 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:46:49.848185 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:46:49.848195 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:46:49.848203 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:46:49.848211 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:46:49.848218 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:46:49.848226 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:46:49.848233 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:46:49.848241 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 00:46:49.848249 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:46:49.848266 kernel: Console: colour VGA+ 80x25
Aug 13 00:46:49.848274 kernel: printk: legacy console [ttyS0] enabled
Aug 13 00:46:49.848281 kernel: ACPI: Core revision 20240827
Aug 13 00:46:49.848289 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:46:49.848299 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:46:49.848307 kernel: x2apic enabled
Aug 13 00:46:49.848317 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 00:46:49.848325 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 00:46:49.848333 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 00:46:49.848343 kernel: kvm-guest: setup PV IPIs
Aug 13 00:46:49.848351 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:46:49.848359 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Aug 13 00:46:49.848367 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 00:46:49.848374 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 00:46:49.848382 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 00:46:49.848390 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 00:46:49.848398 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:46:49.848406 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:46:49.848416 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:46:49.848423 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 00:46:49.848431 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 00:46:49.848439 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:46:49.848447 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 00:46:49.848455 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 00:46:49.848463 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 00:46:49.848471 kernel: x86/bugs: return thunk changed
Aug 13 00:46:49.848481 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 00:46:49.848489 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:46:49.848497 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:46:49.848505 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:46:49.848512 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:46:49.848520 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 13 00:46:49.848537 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:46:49.848555 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:46:49.848572 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 00:46:49.848585 kernel: landlock: Up and running.
Aug 13 00:46:49.848593 kernel: SELinux: Initializing.
Aug 13 00:46:49.848607 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:46:49.848617 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:46:49.848625 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 00:46:49.848633 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 00:46:49.848646 kernel: ... version: 0
Aug 13 00:46:49.848655 kernel: ... bit width: 48
Aug 13 00:46:49.848662 kernel: ... generic registers: 6
Aug 13 00:46:49.848674 kernel: ... value mask: 0000ffffffffffff
Aug 13 00:46:49.848682 kernel: ... max period: 00007fffffffffff
Aug 13 00:46:49.848690 kernel: ... fixed-purpose events: 0
Aug 13 00:46:49.848698 kernel: ... event mask: 000000000000003f
Aug 13 00:46:49.848705 kernel: signal: max sigframe size: 1776
Aug 13 00:46:49.848713 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:46:49.848721 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:46:49.848729 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 00:46:49.848737 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:46:49.848748 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 00:46:49.848756 kernel: .... node #0, CPUs: #1 #2 #3
Aug 13 00:46:49.848763 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:46:49.848771 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 00:46:49.848779 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 136904K reserved, 0K cma-reserved)
Aug 13 00:46:49.848787 kernel: devtmpfs: initialized
Aug 13 00:46:49.848795 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:46:49.848803 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:46:49.848811 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:46:49.848821 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:46:49.848832 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:46:49.848840 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:46:49.848847 kernel: audit: type=2000 audit(1755046006.692:1): state=initialized audit_enabled=0 res=1
Aug 13 00:46:49.848855 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:46:49.848863 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:46:49.848889 kernel: cpuidle: using governor menu
Aug 13 00:46:49.848897 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:46:49.848904 kernel: dca service started, version 1.12.1
Aug 13 00:46:49.848915 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Aug 13 00:46:49.848923 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 00:46:49.848931 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:46:49.848939 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:46:49.848947 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:46:49.848954 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:46:49.848962 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:46:49.848970 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:46:49.848978 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:46:49.848988 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:46:49.848995 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:46:49.849003 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:46:49.849011 kernel: ACPI: Interpreter enabled
Aug 13 00:46:49.849021 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:46:49.849029 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:46:49.849037 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:46:49.849044 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 00:46:49.849052 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 00:46:49.849062 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:46:49.849281 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:46:49.849408 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 00:46:49.849528 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 00:46:49.849538 kernel: PCI host bridge to bus 0000:00
Aug 13 00:46:49.849685 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:46:49.849796 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:46:49.849961 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:46:49.850075 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 00:46:49.850183 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 00:46:49.850291 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Aug 13 00:46:49.850398 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:46:49.850563 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 00:46:49.850719 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 00:46:49.850840 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 00:46:49.850984 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 00:46:49.851105 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 00:46:49.851223 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:46:49.851363 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Aug 13 00:46:49.851485 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Aug 13 00:46:49.851618 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 00:46:49.851739 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 00:46:49.851895 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Aug 13 00:46:49.852019 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Aug 13 00:46:49.852140 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 00:46:49.852259 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 00:46:49.852400 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 00:46:49.852526 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Aug 13 00:46:49.852656 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Aug 13 00:46:49.852784 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Aug 13 00:46:49.852931 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 00:46:49.853068 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 00:46:49.853200 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 00:46:49.853338 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 00:46:49.853458 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Aug 13 00:46:49.853576 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Aug 13 00:46:49.853722 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 00:46:49.853843 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 00:46:49.853853 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:46:49.853862 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:46:49.853888 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:46:49.853896 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:46:49.853906 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 00:46:49.853914 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 00:46:49.853922 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 00:46:49.853930 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 00:46:49.853938 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 00:46:49.853945 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 00:46:49.853953 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 00:46:49.853963 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 00:46:49.853971 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 00:46:49.853979 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 00:46:49.853987 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 00:46:49.853994 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 00:46:49.854002 kernel: iommu: Default domain type: Translated
Aug 13 00:46:49.854010 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:46:49.854018 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:46:49.854025 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:46:49.854033 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 13 00:46:49.854043 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Aug 13 00:46:49.854165 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 00:46:49.854284 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 00:46:49.854402 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:46:49.854412 kernel: vgaarb: loaded
Aug 13 00:46:49.854420 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:46:49.854428 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:46:49.854436 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:46:49.854447 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:46:49.854455 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:46:49.854463 kernel: pnp: PnP ACPI init
Aug 13 00:46:49.854624 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 00:46:49.854636 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 00:46:49.854644 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:46:49.854652 kernel: NET: Registered PF_INET protocol family
Aug 13 00:46:49.854660 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:46:49.854672 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:46:49.854680 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:46:49.854688 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:46:49.854696 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:46:49.854704 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:46:49.854711 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:46:49.854719 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:46:49.854727 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:46:49.854735 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:46:49.854847 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:46:49.854973 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:46:49.855082 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:46:49.855190 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 00:46:49.855298 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 00:46:49.855406 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Aug 13 00:46:49.855416 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:46:49.855424 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Aug 13 00:46:49.855436 kernel: Initialise system trusted keyrings
Aug 13 00:46:49.855444 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:46:49.855452 kernel: Key type asymmetric registered
Aug 13 00:46:49.855459 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:46:49.855467 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:46:49.855475 kernel: io scheduler mq-deadline registered
Aug 13 00:46:49.855483 kernel: io scheduler kyber registered
Aug 13 00:46:49.855491 kernel: io scheduler bfq registered
Aug 13 00:46:49.855498 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:46:49.855510 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 00:46:49.855517 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 00:46:49.855525 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 00:46:49.855533 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:46:49.855541 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:46:49.855549 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:46:49.855557 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:46:49.855565 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:46:49.855573 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:46:49.855731 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 00:46:49.855847 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 00:46:49.856016 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T00:46:49 UTC (1755046009)
Aug 13 00:46:49.856132 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 00:46:49.856142 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 00:46:49.856150 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:46:49.856158 kernel: Segment Routing with IPv6
Aug 13 00:46:49.856165 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:46:49.856178 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:46:49.856186 kernel: Key type dns_resolver registered
Aug 13 00:46:49.856193 kernel: IPI shorthand broadcast: enabled
Aug 13 00:46:49.856201 kernel: sched_clock: Marking stable (3426003084, 144609443)->(3600919212, -30306685)
Aug 13 00:46:49.856209 kernel: registered taskstats version 1
Aug 13 00:46:49.856217 kernel: Loading compiled-in X.509 certificates
Aug 13 00:46:49.856225 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 00:46:49.856232 kernel: Demotion targets for Node 0: null
Aug 13 00:46:49.856240 kernel: Key type .fscrypt registered
Aug 13 00:46:49.856250 kernel: Key type fscrypt-provisioning registered
Aug 13 00:46:49.856258 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:46:49.856266 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:46:49.856274 kernel: ima: No architecture policies found
Aug 13 00:46:49.856282 kernel: clk: Disabling unused clocks
Aug 13 00:46:49.856290 kernel: Warning: unable to open an initial console.
Aug 13 00:46:49.856298 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 00:46:49.856305 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 00:46:49.856314 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 00:46:49.856324 kernel: Run /init as init process
Aug 13 00:46:49.856332 kernel: with arguments:
Aug 13 00:46:49.856340 kernel: /init
Aug 13 00:46:49.856347 kernel: with environment:
Aug 13 00:46:49.856355 kernel: HOME=/
Aug 13 00:46:49.856363 kernel: TERM=linux
Aug 13 00:46:49.856370 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:46:49.856382 systemd[1]: Successfully made /usr/ read-only.
Aug 13 00:46:49.856396 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:46:49.856418 systemd[1]: Detected virtualization kvm.
Aug 13 00:46:49.856426 systemd[1]: Detected architecture x86-64.
Aug 13 00:46:49.856435 systemd[1]: Running in initrd.
Aug 13 00:46:49.856443 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:46:49.856454 systemd[1]: Hostname set to .
Aug 13 00:46:49.856462 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:46:49.856471 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:46:49.856480 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:49.856488 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:49.856497 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:46:49.856506 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:46:49.856515 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:46:49.856527 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:46:49.856537 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:46:49.856546 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:46:49.856554 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:49.856565 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:49.856573 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:46:49.856582 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:46:49.856590 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:46:49.856609 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:46:49.856618 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:46:49.856626 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:46:49.856635 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:46:49.856644 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 00:46:49.856653 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:49.856661 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:49.856670 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:49.856681 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:46:49.856689 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:46:49.856698 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:46:49.856706 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:46:49.856715 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 13 00:46:49.856728 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:46:49.856737 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:46:49.856745 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:46:49.856754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:49.856763 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:46:49.856772 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:49.856783 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:46:49.856792 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:46:49.856821 systemd-journald[220]: Collecting audit messages is disabled.
Aug 13 00:46:49.856843 systemd-journald[220]: Journal started
Aug 13 00:46:49.856864 systemd-journald[220]: Runtime Journal (/run/log/journal/12e01e5dfa104d408f9fb9d866c2994c) is 6M, max 48.6M, 42.5M free.
Aug 13 00:46:49.846435 systemd-modules-load[222]: Inserted module 'overlay'
Aug 13 00:46:49.893020 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:46:49.893050 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:46:49.893073 kernel: Bridge firewalling registered
Aug 13 00:46:49.875075 systemd-modules-load[222]: Inserted module 'br_netfilter'
Aug 13 00:46:49.894171 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:49.896737 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:49.899347 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:46:49.906945 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:46:49.910353 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:46:49.915549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:46:49.917770 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:46:49.926711 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:49.929514 systemd-tmpfiles[244]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 13 00:46:49.931251 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:49.935071 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:46:49.937967 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:46:49.941629 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:46:49.944421 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:46:49.968030 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:46:49.988282 systemd-resolved[259]: Positive Trust Anchors:
Aug 13 00:46:49.988297 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:46:49.988329 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:46:49.991238 systemd-resolved[259]: Defaulting to hostname 'linux'.
Aug 13 00:46:49.992749 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:46:49.997240 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:46:50.083920 kernel: SCSI subsystem initialized
Aug 13 00:46:50.094911 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:46:50.107905 kernel: iscsi: registered transport (tcp)
Aug 13 00:46:50.140927 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:46:50.141048 kernel: QLogic iSCSI HBA Driver
Aug 13 00:46:50.163861 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:46:50.192898 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:46:50.193827 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:46:50.256616 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:46:50.258664 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:46:50.313913 kernel: raid6: avx2x4 gen() 29837 MB/s
Aug 13 00:46:50.330895 kernel: raid6: avx2x2 gen() 29340 MB/s
Aug 13 00:46:50.347986 kernel: raid6: avx2x1 gen() 25040 MB/s
Aug 13 00:46:50.348015 kernel: raid6: using algorithm avx2x4 gen() 29837 MB/s
Aug 13 00:46:50.366034 kernel: raid6: .... xor() 7868 MB/s, rmw enabled
Aug 13 00:46:50.366078 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 00:46:50.386897 kernel: xor: automatically using best checksumming function avx
Aug 13 00:46:50.559912 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:46:50.569596 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:46:50.588513 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:46:50.637884 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Aug 13 00:46:50.644558 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:46:50.648795 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:46:50.712557 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation
Aug 13 00:46:50.752247 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:46:50.756845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:46:50.837946 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:50.842953 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:46:50.876918 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:46:50.889933 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Aug 13 00:46:50.889981 kernel: AES CTR mode by8 optimization enabled
Aug 13 00:46:50.902903 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 13 00:46:50.906420 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 00:46:50.914651 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:46:50.914708 kernel: GPT:9289727 != 19775487
Aug 13 00:46:50.914720 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:46:50.914730 kernel: GPT:9289727 != 19775487
Aug 13 00:46:50.914740 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:46:50.914750 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:46:50.926377 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:46:50.927895 kernel: libata version 3.00 loaded.
Aug 13 00:46:50.927991 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:50.931382 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:50.935788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:50.937815 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 00:46:50.938118 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 00:46:50.941182 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Aug 13 00:46:50.941406 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Aug 13 00:46:50.941617 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 00:46:50.946957 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:46:50.949431 kernel: scsi host0: ahci
Aug 13 00:46:50.958140 kernel: scsi host1: ahci
Aug 13 00:46:50.964886 kernel: scsi host2: ahci
Aug 13 00:46:50.967138 kernel: scsi host3: ahci
Aug 13 00:46:50.967392 kernel: scsi host4: ahci
Aug 13 00:46:50.968018 kernel: scsi host5: ahci
Aug 13 00:46:50.969236 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Aug 13 00:46:50.969258 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Aug 13 00:46:50.970943 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Aug 13 00:46:50.971920 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Aug 13 00:46:50.971943 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Aug 13 00:46:50.973739 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Aug 13 00:46:50.976994 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 00:46:50.998617 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 00:46:51.008561 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 00:46:51.016891 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 00:46:51.017335 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 00:46:51.023165 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:46:51.076553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:51.267214 disk-uuid[636]: Primary Header is updated.
Aug 13 00:46:51.267214 disk-uuid[636]: Secondary Entries is updated.
Aug 13 00:46:51.267214 disk-uuid[636]: Secondary Header is updated.
Aug 13 00:46:51.271227 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:46:51.275912 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:46:51.282793 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:51.282832 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:51.282847 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 13 00:46:51.282862 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:51.284926 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:51.285023 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 13 00:46:51.285910 kernel: ata3.00: applying bridge limits
Aug 13 00:46:51.285949 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:51.288015 kernel: ata3.00: configured for UDMA/100
Aug 13 00:46:51.289479 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 00:46:51.342906 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 13 00:46:51.343234 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 00:46:51.476035 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 00:46:51.929905 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:46:51.930765 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:46:51.932344 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:51.936471 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:46:51.938489 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:46:51.977597 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:46:52.307923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:46:52.308392 disk-uuid[638]: The operation has completed successfully.
Aug 13 00:46:52.343511 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:46:52.344505 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:46:52.382764 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:46:52.409041 sh[666]: Success
Aug 13 00:46:52.429919 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:46:52.430004 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:46:52.431896 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 13 00:46:52.443604 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Aug 13 00:46:52.489562 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:46:52.494470 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:46:52.522057 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:46:52.531721 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Aug 13 00:46:52.531783 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (678)
Aug 13 00:46:52.534266 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4
Aug 13 00:46:52.534298 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:52.534312 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 13 00:46:52.542543 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:46:52.543818 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 00:46:52.544965 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:46:52.546274 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:46:52.547866 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:46:52.580910 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711)
Aug 13 00:46:52.584077 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:52.584105 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:52.584116 kernel: BTRFS info (device vda6): using free-space-tree
Aug 13 00:46:52.592912 kernel: BTRFS info (device vda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:52.593848 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:46:52.598013 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:46:52.715051 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:46:52.717797 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:46:52.746970 ignition[758]: Ignition 2.21.0
Aug 13 00:46:52.747583 ignition[758]: Stage: fetch-offline
Aug 13 00:46:52.747688 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:52.747715 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:46:52.747864 ignition[758]: parsed url from cmdline: ""
Aug 13 00:46:52.747887 ignition[758]: no config URL provided
Aug 13 00:46:52.747897 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:46:52.747921 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:46:52.747979 ignition[758]: op(1): [started] loading QEMU firmware config module
Aug 13 00:46:52.747986 ignition[758]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 00:46:52.759712 ignition[758]: op(1): [finished] loading QEMU firmware config module
Aug 13 00:46:52.759756 ignition[758]: QEMU firmware config was not found. Ignoring...
Aug 13 00:46:52.787055 systemd-networkd[853]: lo: Link UP
Aug 13 00:46:52.787064 systemd-networkd[853]: lo: Gained carrier
Aug 13 00:46:52.789324 systemd-networkd[853]: Enumeration completed
Aug 13 00:46:52.789855 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:52.789860 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:46:52.789957 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:46:52.792278 systemd-networkd[853]: eth0: Link UP
Aug 13 00:46:52.792379 systemd[1]: Reached target network.target - Network.
Aug 13 00:46:52.792481 systemd-networkd[853]: eth0: Gained carrier
Aug 13 00:46:52.792492 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:52.814945 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:46:52.820734 ignition[758]: parsing config with SHA512: 14c7577d07fff97c33de54a4f7c115af7817e9de3f3a6af2aa12d76df3cdb6ef35e3fdc24e3130871935051e7a08b0a68cd54c7dd5362f8233142f17fe44cf78
Aug 13 00:46:52.827540 unknown[758]: fetched base config from "system"
Aug 13 00:46:52.827553 unknown[758]: fetched user config from "qemu"
Aug 13 00:46:52.827963 ignition[758]: fetch-offline: fetch-offline passed
Aug 13 00:46:52.828027 ignition[758]: Ignition finished successfully
Aug 13 00:46:52.831976 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:46:52.834910 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 00:46:52.836261 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:46:52.896459 ignition[860]: Ignition 2.21.0
Aug 13 00:46:52.896478 ignition[860]: Stage: kargs
Aug 13 00:46:52.896798 ignition[860]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:52.896822 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:46:52.901279 ignition[860]: kargs: kargs passed
Aug 13 00:46:52.901349 ignition[860]: Ignition finished successfully
Aug 13 00:46:52.908976 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:46:52.912151 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:46:52.964940 ignition[868]: Ignition 2.21.0
Aug 13 00:46:52.964955 ignition[868]: Stage: disks
Aug 13 00:46:52.965144 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:52.965156 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:46:52.966393 ignition[868]: disks: disks passed
Aug 13 00:46:52.966444 ignition[868]: Ignition finished successfully
Aug 13 00:46:52.974141 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:46:52.975504 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:46:52.975767 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:46:52.976358 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:46:52.976728 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:46:52.977282 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:46:52.978856 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:46:53.020916 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 13 00:46:53.028992 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:46:53.031020 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:46:53.160902 kernel: EXT4-fs (vda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none.
Aug 13 00:46:53.161270 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:46:53.162839 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:46:53.165998 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:46:53.168282 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:46:53.169660 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:46:53.169713 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:46:53.169743 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:46:53.190095 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:46:53.192251 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:46:53.197294 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886)
Aug 13 00:46:53.197331 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:53.197347 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:53.198190 kernel: BTRFS info (device vda6): using free-space-tree
Aug 13 00:46:53.204361 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:46:53.236288 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:46:53.242102 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:46:53.247721 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:46:53.254136 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:46:53.367580 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:46:53.370402 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:46:53.372236 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:46:53.395921 kernel: BTRFS info (device vda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:53.410792 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:46:53.436020 ignition[1000]: INFO : Ignition 2.21.0
Aug 13 00:46:53.436020 ignition[1000]: INFO : Stage: mount
Aug 13 00:46:53.438027 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:53.438027 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:46:53.440298 ignition[1000]: INFO : mount: mount passed
Aug 13 00:46:53.440298 ignition[1000]: INFO : Ignition finished successfully
Aug 13 00:46:53.444406 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:46:53.446653 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:46:53.531199 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:46:53.533150 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:46:53.567587 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012)
Aug 13 00:46:53.567649 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:53.567665 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:53.569095 kernel: BTRFS info (device vda6): using free-space-tree
Aug 13 00:46:53.573463 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:46:53.630303 ignition[1029]: INFO : Ignition 2.21.0
Aug 13 00:46:53.630303 ignition[1029]: INFO : Stage: files
Aug 13 00:46:53.632862 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:53.632862 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:46:53.635512 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:46:53.635512 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:46:53.635512 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:46:53.640513 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:46:53.640513 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:46:53.640513 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:46:53.638990 unknown[1029]: wrote ssh authorized keys file for user: core
Aug 13 00:46:53.646947 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 00:46:53.646947 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 00:46:53.693445 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:46:53.832707 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 00:46:53.834739 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:46:53.834739 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 00:46:53.926580 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 00:46:54.117911 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:46:54.117911 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:46:54.125563 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:46:54.125563 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:46:54.125563 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:46:54.125563 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:46:54.125563 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:46:54.125563 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:46:54.125563 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:46:54.205735 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:46:54.208617 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:46:54.208617 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:46:54.215968 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:46:54.215968 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:46:54.221341 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 00:46:54.504172 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 00:46:54.793117 systemd-networkd[853]: eth0: Gained IPv6LL
Aug 13 00:46:55.165416 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:46:55.165416 ignition[1029]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 00:46:55.170340 ignition[1029]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:46:55.191274 ignition[1029]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:46:55.191274 ignition[1029]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 00:46:55.191274 ignition[1029]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 13 00:46:55.191274 ignition[1029]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:46:55.199277 ignition[1029]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:46:55.199277 ignition[1029]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 13 00:46:55.199277 ignition[1029]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:46:55.217243 ignition[1029]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:46:55.227403 ignition[1029]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:46:55.229082 ignition[1029]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:46:55.229082 ignition[1029]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:46:55.231863 ignition[1029]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:46:55.233292 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:46:55.235008 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:46:55.236724 ignition[1029]: INFO : files: files passed
Aug 13 00:46:55.237492 ignition[1029]: INFO : Ignition finished successfully
Aug 13 00:46:55.241267 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:46:55.243398 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:46:55.246007 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:46:55.258002 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:46:55.258184 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:46:55.261981 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 13 00:46:55.265745 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:46:55.265745 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:46:55.269166 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:46:55.272133 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:46:55.272738 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:46:55.276585 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:46:55.313560 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:46:55.313691 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:46:55.314800 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:46:55.317133 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:46:55.318853 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:46:55.321930 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:46:55.360962 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:46:55.365932 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:46:55.405508 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:46:55.406034 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:55.406414 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:46:55.406831 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:46:55.407031 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:46:55.415740 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:46:55.416311 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:46:55.416697 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:46:55.417187 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:46:55.417559 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:46:55.417896 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 00:46:55.418351 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:46:55.418681 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:46:55.419176 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:46:55.419535 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:46:55.419860 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:46:55.420302 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:46:55.420505 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:46:55.439647 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:55.440294 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:55.444025 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:46:55.444343 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:55.447519 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:46:55.447739 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:46:55.450424 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:46:55.450624 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:46:55.451376 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:46:55.455154 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:46:55.455612 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:55.456794 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:46:55.460693 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:46:55.461275 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:46:55.461412 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:46:55.463203 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:46:55.463324 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:46:55.464912 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:46:55.465085 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:46:55.467517 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:46:55.467699 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:46:55.473836 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:46:55.475735 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:46:55.475863 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:55.477383 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:46:55.480211 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:46:55.480345 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:55.481833 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:46:55.481956 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:46:55.489639 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:46:55.489787 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:46:55.510691 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:46:55.546796 ignition[1084]: INFO : Ignition 2.21.0
Aug 13 00:46:55.546796 ignition[1084]: INFO : Stage: umount
Aug 13 00:46:55.549039 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:55.549039 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:46:55.549039 ignition[1084]: INFO : umount: umount passed
Aug 13 00:46:55.549039 ignition[1084]: INFO : Ignition finished successfully
Aug 13 00:46:55.553453 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:46:55.553646 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:46:55.554713 systemd[1]: Stopped target network.target - Network.
Aug 13 00:46:55.556156 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:46:55.556219 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:46:55.557840 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:46:55.557907 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:46:55.559806 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:46:55.559892 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:46:55.560283 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:46:55.560335 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:46:55.560743 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:46:55.565260 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:46:55.573067 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:46:55.573248 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:46:55.579072 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 00:46:55.580063 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:46:55.580239 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:46:55.585587 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:46:55.585987 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:46:55.586138 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:46:55.590385 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 00:46:55.591297 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 13 00:46:55.591830 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:46:55.591966 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:55.594011 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:46:55.594480 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:46:55.594542 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:46:55.595820 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:46:55.595911 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:55.598598 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:46:55.598649 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:55.599610 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:46:55.602486 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:46:55.629804 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:46:55.632075 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:46:55.632640 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:46:55.632689 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:55.635289 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:46:55.635329 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:55.635652 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:46:55.635701 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:46:55.641708 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:46:55.641765 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:46:55.644989 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:46:55.645040 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:46:55.649304 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 00:46:55.649858 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 13 00:46:55.649960 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:46:55.654623 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:46:55.654671 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:55.658724 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:46:55.658806 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:55.663779 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:46:55.668349 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 00:46:55.680378 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:46:55.680544 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 00:46:55.709117 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:46:55.709311 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 00:46:55.710455 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 00:46:55.714486 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:46:55.714575 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 00:46:55.716189 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 00:46:55.752209 systemd[1]: Switching root.
Aug 13 00:46:55.797420 systemd-journald[220]: Journal stopped
Aug 13 00:46:57.606036 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:46:57.606114 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:46:57.606137 kernel: SELinux: policy capability open_perms=1
Aug 13 00:46:57.606152 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:46:57.606170 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:46:57.606184 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:46:57.606199 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:46:57.606213 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:46:57.606227 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:46:57.606242 kernel: SELinux: policy capability userspace_initial_context=0
Aug 13 00:46:57.606257 kernel: audit: type=1403 audit(1755046016.451:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:46:57.606284 systemd[1]: Successfully loaded SELinux policy in 55.276ms.
Aug 13 00:46:57.606312 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.410ms.
Aug 13 00:46:57.606332 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:46:57.606349 systemd[1]: Detected virtualization kvm.
Aug 13 00:46:57.606366 systemd[1]: Detected architecture x86-64.
Aug 13 00:46:57.606392 systemd[1]: Detected first boot.
Aug 13 00:46:57.606408 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:46:57.606424 zram_generator::config[1130]: No configuration found.
Aug 13 00:46:57.606441 kernel: Guest personality initialized and is inactive
Aug 13 00:46:57.606462 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 00:46:57.606480 kernel: Initialized host personality
Aug 13 00:46:57.606498 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 00:46:57.606516 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:46:57.606537 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 00:46:57.606557 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:46:57.606577 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:46:57.606597 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:46:57.606613 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:46:57.606630 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:46:57.606648 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:46:57.606663 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:46:57.606679 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:46:57.606695 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:46:57.606711 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:46:57.606732 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:46:57.606765 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:57.606781 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:57.606797 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:46:57.606816 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:46:57.606833 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:46:57.606850 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:46:57.606865 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 00:46:57.606900 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:57.606917 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:57.606934 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:46:57.606949 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:46:57.606969 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:46:57.606985 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:46:57.607001 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:57.607017 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:46:57.607033 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:46:57.607048 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:46:57.607063 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:46:57.607079 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:46:57.607095 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 00:46:57.607113 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:57.607140 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:57.607155 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:57.607171 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:46:57.607187 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:46:57.607203 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:46:57.607219 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:46:57.607235 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:57.607250 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:46:57.607269 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:46:57.607285 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:46:57.607301 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:46:57.607317 systemd[1]: Reached target machines.target - Containers.
Aug 13 00:46:57.607337 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 00:46:57.607353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:46:57.607370 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:46:57.607397 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 00:46:57.607418 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:46:57.607435 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:46:57.607451 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:46:57.607467 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 00:46:57.607482 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:46:57.607508 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:46:57.607524 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:46:57.607543 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 00:46:57.607569 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:46:57.607590 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:46:57.607612 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:46:57.607633 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:46:57.607654 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:46:57.607674 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:46:57.607694 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 00:46:57.607715 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 00:46:57.607735 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:46:57.607759 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:46:57.607777 systemd[1]: Stopped verity-setup.service.
Aug 13 00:46:57.607823 systemd-journald[1194]: Collecting audit messages is disabled.
Aug 13 00:46:57.607862 kernel: loop: module loaded
Aug 13 00:46:57.607903 kernel: fuse: init (API version 7.41)
Aug 13 00:46:57.607921 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:57.607939 systemd-journald[1194]: Journal started
Aug 13 00:46:57.607969 systemd-journald[1194]: Runtime Journal (/run/log/journal/12e01e5dfa104d408f9fb9d866c2994c) is 6M, max 48.6M, 42.5M free.
Aug 13 00:46:57.267655 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:46:57.291043 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 00:46:57.291523 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:46:57.613745 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:46:57.613473 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 00:46:57.616284 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 00:46:57.619084 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 00:46:57.620311 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 00:46:57.621575 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 00:46:57.624170 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 00:46:57.628350 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:57.630122 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:46:57.630340 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 00:46:57.632188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:46:57.632410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:46:57.633992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:46:57.634201 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:46:57.635821 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:46:57.636038 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 00:46:57.637462 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:46:57.641042 kernel: ACPI: bus type drm_connector registered
Aug 13 00:46:57.637675 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:46:57.639912 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:57.642154 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:46:57.643850 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 00:46:57.645811 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:46:57.646117 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:46:57.648944 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 00:46:57.667312 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:46:57.670512 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 00:46:57.673963 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 00:46:57.675305 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:46:57.675345 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:46:57.677607 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 00:46:57.684037 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 00:46:57.685352 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:46:57.687214 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 00:46:57.690677 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 00:46:57.692102 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:46:57.694095 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 00:46:57.695332 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:46:57.697018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:46:57.701048 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 00:46:57.707089 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 00:46:57.708925 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 00:46:57.710450 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 00:46:57.719049 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 00:46:57.720526 systemd-journald[1194]: Time spent on flushing to /var/log/journal/12e01e5dfa104d408f9fb9d866c2994c is 22.300ms for 981 entries.
Aug 13 00:46:57.720526 systemd-journald[1194]: System Journal (/var/log/journal/12e01e5dfa104d408f9fb9d866c2994c) is 8M, max 195.6M, 187.6M free.
Aug 13 00:46:57.753401 systemd-journald[1194]: Received client request to flush runtime journal.
Aug 13 00:46:57.753450 kernel: loop0: detected capacity change from 0 to 221472
Aug 13 00:46:57.720688 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 00:46:57.726278 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 00:46:57.729819 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 00:46:57.751084 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:57.757272 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:57.759080 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 00:46:57.783907 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:46:57.794302 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 00:46:57.807825 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 00:46:57.811905 kernel: loop1: detected capacity change from 0 to 113872
Aug 13 00:46:57.812475 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:46:57.844980 kernel: loop2: detected capacity change from 0 to 146240
Aug 13 00:46:57.869135 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Aug 13 00:46:57.869152 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Aug 13 00:46:57.876499 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:57.882921 kernel: loop3: detected capacity change from 0 to 221472
Aug 13 00:46:57.897932 kernel: loop4: detected capacity change from 0 to 113872
Aug 13 00:46:57.906924 kernel: loop5: detected capacity change from 0 to 146240
Aug 13 00:46:57.941595 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 13 00:46:57.942621 (sd-merge)[1273]: Merged extensions into '/usr'.
Aug 13 00:46:57.949175 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 00:46:57.949311 systemd[1]: Reloading...
Aug 13 00:46:58.011973 zram_generator::config[1302]: No configuration found.
Aug 13 00:46:58.225197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:46:58.273116 ldconfig[1237]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 00:46:58.311547 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 00:46:58.312042 systemd[1]: Reloading finished in 362 ms.
Aug 13 00:46:58.341294 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 00:46:58.342864 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 00:46:58.372909 systemd[1]: Starting ensure-sysext.service...
Aug 13 00:46:58.375079 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:46:58.396341 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
Aug 13 00:46:58.396369 systemd[1]: Reloading...
Aug 13 00:46:58.441402 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 13 00:46:58.441445 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 13 00:46:58.441800 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 00:46:58.444130 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 00:46:58.445246 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 00:46:58.445614 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Aug 13 00:46:58.445742 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Aug 13 00:46:58.451924 zram_generator::config[1364]: No configuration found. Aug 13 00:46:58.486293 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:46:58.486310 systemd-tmpfiles[1337]: Skipping /boot Aug 13 00:46:58.500800 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:46:58.500817 systemd-tmpfiles[1337]: Skipping /boot Aug 13 00:46:58.552918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:46:58.640750 systemd[1]: Reloading finished in 244 ms. Aug 13 00:46:58.681201 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:46:58.691526 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:46:58.694995 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:46:58.712149 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:46:58.716998 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:46:58.720120 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:46:58.725101 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:46:58.725280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:46:58.727238 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:46:58.731225 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Aug 13 00:46:58.735589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:46:58.736936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:46:58.737060 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:46:58.737155 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:46:58.741708 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:46:58.742521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:46:58.743177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:46:58.743269 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:46:58.743398 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:46:58.747002 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:46:58.747311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:46:58.752070 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Aug 13 00:46:58.753439 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:46:58.753545 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:46:58.753685 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:46:58.763122 systemd[1]: Finished ensure-sysext.service. Aug 13 00:46:58.768403 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:46:58.772116 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:46:58.773720 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:46:58.774954 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:46:58.776650 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:46:58.776990 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:46:58.778930 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:46:58.792206 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:46:58.824713 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:46:58.830811 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:46:58.833086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:46:58.833333 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:46:58.838759 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Aug 13 00:46:58.839131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:46:58.841204 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:46:58.844092 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 00:46:58.845640 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 00:46:58.858269 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 00:46:58.859863 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 00:46:58.867517 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 00:46:58.871654 augenrules[1443]: No rules
Aug 13 00:46:58.873116 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:46:58.873469 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 00:46:58.886366 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 00:46:58.910157 systemd-udevd[1436]: Using default interface naming scheme 'v255'.
Aug 13 00:46:58.949906 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:46:58.959504 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:46:58.986373 systemd-resolved[1405]: Positive Trust Anchors:
Aug 13 00:46:58.986782 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:46:58.986900 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:46:58.989765 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 00:46:58.993110 systemd-resolved[1405]: Defaulting to hostname 'linux'.
Aug 13 00:46:58.995262 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 00:46:59.001733 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:46:59.004841 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:46:59.011646 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:46:59.013145 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 00:46:59.017045 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 00:46:59.019938 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Aug 13 00:46:59.021586 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 00:46:59.023101 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 00:46:59.027737 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 00:46:59.031126 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 00:46:59.031172 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:46:59.033250 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:46:59.036127 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 00:46:59.043538 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 00:46:59.050886 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 13 00:46:59.055311 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 13 00:46:59.056806 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 13 00:46:59.062079 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 00:46:59.064112 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 13 00:46:59.068775 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 00:46:59.078106 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:46:59.079514 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:46:59.081744 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:46:59.081786 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:46:59.085485 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 00:46:59.093253 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 00:46:59.117086 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 00:46:59.154740 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 00:46:59.156272 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 00:46:59.159931 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Aug 13 00:46:59.202288 jq[1493]: false
Aug 13 00:46:59.210067 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 00:46:59.223122 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 00:46:59.227650 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 00:46:59.237362 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 00:46:59.245211 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 00:46:59.252613 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 00:46:59.253494 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 00:46:59.256579 systemd-networkd[1461]: lo: Link UP
Aug 13 00:46:59.256587 systemd-networkd[1461]: lo: Gained carrier
Aug 13 00:46:59.259718 systemd-networkd[1461]: Enumeration completed
Aug 13 00:46:59.263156 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 00:46:59.264513 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:59.264519 systemd-networkd[1461]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:46:59.266182 systemd-networkd[1461]: eth0: Link UP
Aug 13 00:46:59.266995 systemd-networkd[1461]: eth0: Gained carrier
Aug 13 00:46:59.267023 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:59.270358 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Refreshing passwd entry cache
Aug 13 00:46:59.270382 oslogin_cache_refresh[1499]: Refreshing passwd entry cache
Aug 13 00:46:59.278614 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 00:46:59.281578 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:46:59.283650 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 00:46:59.285494 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 00:46:59.285810 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 00:46:59.286220 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 00:46:59.286517 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 00:46:59.292826 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Failure getting users, quitting
Aug 13 00:46:59.292826 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 00:46:59.292813 oslogin_cache_refresh[1499]: Failure getting users, quitting
Aug 13 00:46:59.292838 oslogin_cache_refresh[1499]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 00:46:59.302980 systemd-networkd[1461]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:46:59.304042 systemd-timesyncd[1428]: Network configuration changed, trying to establish connection.
Aug 13 00:46:59.977230 systemd-timesyncd[1428]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 13 00:46:59.977311 systemd-timesyncd[1428]: Initial clock synchronization to Wed 2025-08-13 00:46:59.976689 UTC.
Aug 13 00:46:59.981520 systemd-resolved[1405]: Clock change detected. Flushing caches.
Aug 13 00:46:59.988141 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Refreshing group entry cache
Aug 13 00:46:59.986583 oslogin_cache_refresh[1499]: Refreshing group entry cache
Aug 13 00:46:59.988729 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 00:46:59.988850 systemd[1]: Reached target network.target - Network.
Aug 13 00:46:59.990833 update_engine[1511]: I20250813 00:46:59.990743 1511 main.cc:92] Flatcar Update Engine starting
Aug 13 00:46:59.992869 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 00:46:59.998691 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 13 00:47:00.003956 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Failure getting groups, quitting
Aug 13 00:47:00.003956 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Aug 13 00:47:00.003193 oslogin_cache_refresh[1499]: Failure getting groups, quitting
Aug 13 00:47:00.003212 oslogin_cache_refresh[1499]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Aug 13 00:47:00.008858 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 00:47:00.012286 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Aug 13 00:47:00.012636 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Aug 13 00:47:00.052555 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 00:47:00.056213 dbus-daemon[1489]: [system] SELinux support is enabled
Aug 13 00:47:00.059337 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 00:47:00.070192 jq[1513]: true
Aug 13 00:47:00.071552 extend-filesystems[1498]: Found /dev/vda6
Aug 13 00:47:00.073020 update_engine[1511]: I20250813 00:47:00.071035 1511 update_check_scheduler.cc:74] Next update check in 9m27s
Aug 13 00:47:00.076479 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 13 00:47:00.081391 (ntainerd)[1532]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 00:47:00.082423 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 00:47:00.082891 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 00:47:00.092121 extend-filesystems[1498]: Found /dev/vda9
Aug 13 00:47:00.092417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 00:47:00.102367 extend-filesystems[1498]: Checking size of /dev/vda9
Aug 13 00:47:00.103352 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 00:47:00.103391 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 00:47:00.105478 kernel: ACPI: button: Power Button [PWRF]
Aug 13 00:47:00.114271 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 00:47:00.117328 jq[1537]: true
Aug 13 00:47:00.116425 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 00:47:00.116506 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 00:47:00.120656 tar[1519]: linux-amd64/helm
Aug 13 00:47:00.123321 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 00:47:00.147821 extend-filesystems[1498]: Resized partition /dev/vda9
Aug 13 00:47:00.148020 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 00:47:00.158512 extend-filesystems[1549]: resize2fs 1.47.2 (1-Jan-2025)
Aug 13 00:47:00.194500 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 13 00:47:00.196025 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 13 00:47:00.226164 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 13 00:47:00.257007 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 00:47:00.303183 extend-filesystems[1549]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 00:47:00.303183 extend-filesystems[1549]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 00:47:00.303183 extend-filesystems[1549]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 13 00:47:00.313300 extend-filesystems[1498]: Resized filesystem in /dev/vda9
Aug 13 00:47:00.305023 systemd-logind[1509]: New seat seat0.
Aug 13 00:47:00.307996 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 00:47:00.308374 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 00:47:00.320674 sshd_keygen[1534]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 00:47:00.321299 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 00:47:00.332393 bash[1575]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:47:00.333810 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 00:47:00.346807 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 13 00:47:00.364574 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 00:47:00.370629 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 00:47:00.432747 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:47:00.434646 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 00:47:00.514791 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 00:47:00.547493 locksmithd[1546]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:47:00.566384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:47:00.567041 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 00:47:00.567749 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 00:47:00.594108 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 00:47:00.602084 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 00:47:00.634020 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 00:47:00.637068 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 00:47:00.685614 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 00:47:00.700036 systemd-logind[1509]: Watching system buttons on /dev/input/event2 (Power Button)
Aug 13 00:47:00.705774 kernel: kvm_amd: TSC scaling supported
Aug 13 00:47:00.705906 kernel: kvm_amd: Nested Virtualization enabled
Aug 13 00:47:00.705928 kernel: kvm_amd: Nested Paging enabled
Aug 13 00:47:00.706975 kernel: kvm_amd: LBR virtualization supported
Aug 13 00:47:00.707018 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Aug 13 00:47:00.707678 kernel: kvm_amd: Virtual GIF supported
Aug 13 00:47:00.849245 containerd[1532]: time="2025-08-13T00:47:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Aug 13 00:47:00.854098 containerd[1532]: time="2025-08-13T00:47:00.853911315Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Aug 13 00:47:00.879218 containerd[1532]: time="2025-08-13T00:47:00.879001990Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.895µs"
Aug 13 00:47:00.879218 containerd[1532]: time="2025-08-13T00:47:00.879197627Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Aug 13 00:47:00.879370 containerd[1532]: time="2025-08-13T00:47:00.879240327Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Aug 13 00:47:00.880597 containerd[1532]: time="2025-08-13T00:47:00.880559801Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Aug 13 00:47:00.880597 containerd[1532]: time="2025-08-13T00:47:00.880597622Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Aug 13 00:47:00.880672 containerd[1532]: time="2025-08-13T00:47:00.880638539Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 13 00:47:00.880799 containerd[1532]: time="2025-08-13T00:47:00.880755818Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 13 00:47:00.880799 containerd[1532]: time="2025-08-13T00:47:00.880790353Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 13 00:47:00.881262 containerd[1532]: time="2025-08-13T00:47:00.881221431Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 13 00:47:00.881262 containerd[1532]: time="2025-08-13T00:47:00.881249534Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 13 00:47:00.881330 containerd[1532]: time="2025-08-13T00:47:00.881264132Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 13 00:47:00.881330 containerd[1532]: time="2025-08-13T00:47:00.881275773Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Aug 13 00:47:00.881434 containerd[1532]: time="2025-08-13T00:47:00.881406929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Aug 13 00:47:00.881789 containerd[1532]: time="2025-08-13T00:47:00.881754611Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 00:47:00.881842 containerd[1532]: time="2025-08-13T00:47:00.881802361Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 00:47:00.881842 containerd[1532]: time="2025-08-13T00:47:00.881815716Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Aug 13 00:47:00.881902 containerd[1532]: time="2025-08-13T00:47:00.881862083Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Aug 13 00:47:00.882177 containerd[1532]: time="2025-08-13T00:47:00.882146276Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Aug 13 00:47:00.882291 containerd[1532]: time="2025-08-13T00:47:00.882267473Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:47:00.896326 containerd[1532]: time="2025-08-13T00:47:00.896253196Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Aug 13 00:47:00.896507 containerd[1532]: time="2025-08-13T00:47:00.896371307Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Aug 13 00:47:00.896507 containerd[1532]: time="2025-08-13T00:47:00.896393128Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Aug 13 00:47:00.896507 containerd[1532]: time="2025-08-13T00:47:00.896492384Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Aug 13 00:47:00.896584 containerd[1532]: time="2025-08-13T00:47:00.896512392Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 13 00:47:00.896584 containerd[1532]: time="2025-08-13T00:47:00.896526609Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Aug 13 00:47:00.896584 containerd[1532]: time="2025-08-13T00:47:00.896548079Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Aug 13 00:47:00.896584 containerd[1532]: time="2025-08-13T00:47:00.896563708Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Aug 13 00:47:00.896687 containerd[1532]: time="2025-08-13T00:47:00.896588314Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Aug 13 00:47:00.896687 containerd[1532]: time="2025-08-13T00:47:00.896609544Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Aug 13 00:47:00.896687 containerd[1532]: time="2025-08-13T00:47:00.896625995Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Aug 13 00:47:00.896687 containerd[1532]: time="2025-08-13T00:47:00.896642416Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Aug 13 00:47:00.896864 containerd[1532]: time="2025-08-13T00:47:00.896833925Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Aug 13 00:47:00.896908 containerd[1532]: time="2025-08-13T00:47:00.896868720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Aug 13 00:47:00.896949 containerd[1532]: time="2025-08-13T00:47:00.896928703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Aug 13 00:47:00.896977 containerd[1532]: time="2025-08-13T00:47:00.896947808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 13 00:47:00.896977 containerd[1532]: time="2025-08-13T00:47:00.896963538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Aug 13 00:47:00.897031 containerd[1532]: time="2025-08-13T00:47:00.896977163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Aug 13 00:47:00.897031 containerd[1532]: time="2025-08-13T00:47:00.896993163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Aug 13 00:47:00.897031 containerd[1532]: time="2025-08-13T00:47:00.897008923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Aug 13 00:47:00.897031 containerd[1532]: time="2025-08-13T00:47:00.897023450Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Aug 13 00:47:00.897136 containerd[1532]: time="2025-08-13T00:47:00.897038238Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Aug 13 00:47:00.897136 containerd[1532]: time="2025-08-13T00:47:00.897053146Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Aug 13 00:47:00.897294 containerd[1532]: time="2025-08-13T00:47:00.897152021Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Aug 13 00:47:00.897322 containerd[1532]: time="2025-08-13T00:47:00.897295180Z" level=info msg="Start snapshots syncer"
Aug 13 00:47:00.899609 containerd[1532]: time="2025-08-13T00:47:00.899563904Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Aug 13 00:47:00.899951 containerd[1532]: time="2025-08-13T00:47:00.899895756Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Aug 13 00:47:00.900153 containerd[1532]: time="2025-08-13T00:47:00.899964004Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Aug 13 00:47:00.900940 containerd[1532]: time="2025-08-13T00:47:00.900906731Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Aug 13 00:47:00.901158 containerd[1532]: time="2025-08-13T00:47:00.901122105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Aug 13 00:47:00.901209 containerd[1532]: time="2025-08-13T00:47:00.901166118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Aug 13 00:47:00.901209 containerd[1532]: time="2025-08-13T00:47:00.901183080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Aug 13 00:47:00.901209 containerd[1532]: time="2025-08-13T00:47:00.901196675Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Aug 13 00:47:00.901284 containerd[1532]: time="2025-08-13T00:47:00.901213456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Aug 13 00:47:00.901284 containerd[1532]: time="2025-08-13T00:47:00.901228074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Aug 13 00:47:00.901284 containerd[1532]: time="2025-08-13T00:47:00.901242371Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Aug 13 00:47:00.901284 containerd[1532]: time="2025-08-13T00:47:00.901282556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Aug 13 00:47:00.901388 containerd[1532]: time="2025-08-13T00:47:00.901298676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Aug 13 00:47:00.901388 containerd[1532]: time="2025-08-13T00:47:00.901312903Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Aug 13 00:47:00.903261 containerd[1532]: time="2025-08-13T00:47:00.903221291Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 13 00:47:00.903359 containerd[1532]: time="2025-08-13T00:47:00.903328562Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 13 00:47:00.903359 containerd[1532]: time="2025-08-13T00:47:00.903352988Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 13 00:47:00.903421 containerd[1532]: time="2025-08-13T00:47:00.903369449Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 13 00:47:00.903421 containerd[1532]: time="2025-08-13T00:47:00.903380660Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Aug 13 00:47:00.903421 containerd[1532]: time="2025-08-13T00:47:00.903393654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Aug 13 00:47:00.903421 containerd[1532]: time="2025-08-13T00:47:00.903408422Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Aug 13 00:47:00.903545 containerd[1532]: time="2025-08-13T00:47:00.903431285Z" level=info msg="runtime interface created"
Aug 13 00:47:00.903545 containerd[1532]: time="2025-08-13T00:47:00.903439450Z" level=info msg="created NRI interface"
Aug 13 00:47:00.903545 containerd[1532]: time="2025-08-13T00:47:00.903469998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Aug 13 00:47:00.903545 containerd[1532]: time="2025-08-13T00:47:00.903490235Z" level=info msg="Connect containerd service"
Aug 13 00:47:00.903545 containerd[1532]: time="2025-08-13T00:47:00.903523879Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 00:47:00.906244
containerd[1532]: time="2025-08-13T00:47:00.906022323Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:47:00.959491 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:47:01.071108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:47:01.141349 containerd[1532]: time="2025-08-13T00:47:01.139840743Z" level=info msg="Start subscribing containerd event" Aug 13 00:47:01.141349 containerd[1532]: time="2025-08-13T00:47:01.139931413Z" level=info msg="Start recovering state" Aug 13 00:47:01.141349 containerd[1532]: time="2025-08-13T00:47:01.140130275Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:47:01.141349 containerd[1532]: time="2025-08-13T00:47:01.140221106Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:47:01.141349 containerd[1532]: time="2025-08-13T00:47:01.140272883Z" level=info msg="Start event monitor" Aug 13 00:47:01.141349 containerd[1532]: time="2025-08-13T00:47:01.140298832Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:47:01.141349 containerd[1532]: time="2025-08-13T00:47:01.140313679Z" level=info msg="Start streaming server" Aug 13 00:47:01.141349 containerd[1532]: time="2025-08-13T00:47:01.140341562Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 00:47:01.141349 containerd[1532]: time="2025-08-13T00:47:01.140351520Z" level=info msg="runtime interface starting up..." Aug 13 00:47:01.141349 containerd[1532]: time="2025-08-13T00:47:01.140359205Z" level=info msg="starting plugins..." 
Aug 13 00:47:01.141349 containerd[1532]: time="2025-08-13T00:47:01.140384352Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Aug 13 00:47:01.141349 containerd[1532]: time="2025-08-13T00:47:01.140623620Z" level=info msg="containerd successfully booted in 0.292135s"
Aug 13 00:47:01.140951 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 00:47:01.192655 tar[1519]: linux-amd64/LICENSE
Aug 13 00:47:01.192655 tar[1519]: linux-amd64/README.md
Aug 13 00:47:01.235250 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 00:47:01.477927 systemd-networkd[1461]: eth0: Gained IPv6LL
Aug 13 00:47:01.488608 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 00:47:01.500130 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 00:47:01.510259 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 13 00:47:01.530304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:47:01.558286 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 00:47:01.618820 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 13 00:47:01.628061 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 13 00:47:01.631389 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 00:47:01.676900 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 00:47:03.349752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:47:03.351720 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 00:47:03.353070 systemd[1]: Startup finished in 3.505s (kernel) + 6.805s (initrd) + 6.286s (userspace) = 16.596s.
Aug 13 00:47:03.376896 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:47:03.914579 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 00:47:03.916374 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:51910.service - OpenSSH per-connection server daemon (10.0.0.1:51910).
Aug 13 00:47:03.990198 kubelet[1661]: E0813 00:47:03.990126 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:47:03.990551 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 51910 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:47:03.992576 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:03.994210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:47:03.994413 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:47:03.994826 systemd[1]: kubelet.service: Consumed 2.075s CPU time, 264.4M memory peak.
Aug 13 00:47:04.000714 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 00:47:04.002139 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 00:47:04.008942 systemd-logind[1509]: New session 1 of user core.
Aug 13 00:47:04.082067 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 00:47:04.085530 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 00:47:04.107655 (systemd)[1679]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:47:04.111055 systemd-logind[1509]: New session c1 of user core.
Aug 13 00:47:04.322752 systemd[1679]: Queued start job for default target default.target.
Aug 13 00:47:04.333894 systemd[1679]: Created slice app.slice - User Application Slice.
Aug 13 00:47:04.333922 systemd[1679]: Reached target paths.target - Paths.
Aug 13 00:47:04.333971 systemd[1679]: Reached target timers.target - Timers.
Aug 13 00:47:04.335811 systemd[1679]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 00:47:04.348898 systemd[1679]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 00:47:04.349087 systemd[1679]: Reached target sockets.target - Sockets.
Aug 13 00:47:04.349153 systemd[1679]: Reached target basic.target - Basic System.
Aug 13 00:47:04.349204 systemd[1679]: Reached target default.target - Main User Target.
Aug 13 00:47:04.349249 systemd[1679]: Startup finished in 230ms.
Aug 13 00:47:04.349764 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 00:47:04.351798 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 00:47:04.425370 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:51918.service - OpenSSH per-connection server daemon (10.0.0.1:51918).
Aug 13 00:47:04.494675 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 51918 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:47:04.496439 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:04.501348 systemd-logind[1509]: New session 2 of user core.
Aug 13 00:47:04.514660 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 00:47:04.570665 sshd[1692]: Connection closed by 10.0.0.1 port 51918
Aug 13 00:47:04.570948 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
Aug 13 00:47:04.593678 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:51918.service: Deactivated successfully.
Aug 13 00:47:04.595887 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 00:47:04.596709 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit.
Aug 13 00:47:04.600194 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:51922.service - OpenSSH per-connection server daemon (10.0.0.1:51922).
Aug 13 00:47:04.601214 systemd-logind[1509]: Removed session 2.
Aug 13 00:47:04.657295 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 51922 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:47:04.659572 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:04.664587 systemd-logind[1509]: New session 3 of user core.
Aug 13 00:47:04.674661 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 00:47:04.724479 sshd[1701]: Connection closed by 10.0.0.1 port 51922
Aug 13 00:47:04.724813 sshd-session[1698]: pam_unix(sshd:session): session closed for user core
Aug 13 00:47:04.742316 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:51922.service: Deactivated successfully.
Aug 13 00:47:04.744061 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 00:47:04.745965 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit.
Aug 13 00:47:04.750188 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:51926.service - OpenSSH per-connection server daemon (10.0.0.1:51926).
Aug 13 00:47:04.751225 systemd-logind[1509]: Removed session 3.
Aug 13 00:47:04.802963 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 51926 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:47:04.804803 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:04.809995 systemd-logind[1509]: New session 4 of user core.
Aug 13 00:47:04.820644 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 00:47:04.875916 sshd[1709]: Connection closed by 10.0.0.1 port 51926
Aug 13 00:47:04.876440 sshd-session[1707]: pam_unix(sshd:session): session closed for user core
Aug 13 00:47:04.889158 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:51926.service: Deactivated successfully.
Aug 13 00:47:04.891017 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 00:47:04.891807 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit.
Aug 13 00:47:04.894542 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:51928.service - OpenSSH per-connection server daemon (10.0.0.1:51928).
Aug 13 00:47:04.895303 systemd-logind[1509]: Removed session 4.
Aug 13 00:47:04.949254 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 51928 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:47:04.950644 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:04.956054 systemd-logind[1509]: New session 5 of user core.
Aug 13 00:47:04.969573 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 00:47:05.034112 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 00:47:05.034435 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:47:05.051138 sudo[1718]: pam_unix(sudo:session): session closed for user root
Aug 13 00:47:05.053225 sshd[1717]: Connection closed by 10.0.0.1 port 51928
Aug 13 00:47:05.053634 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
Aug 13 00:47:05.072421 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:51928.service: Deactivated successfully.
Aug 13 00:47:05.074651 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 00:47:05.075581 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit.
Aug 13 00:47:05.078926 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:51932.service - OpenSSH per-connection server daemon (10.0.0.1:51932).
Aug 13 00:47:05.079800 systemd-logind[1509]: Removed session 5.
Aug 13 00:47:05.129577 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 51932 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:47:05.131296 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:05.136365 systemd-logind[1509]: New session 6 of user core.
Aug 13 00:47:05.146614 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 00:47:05.203746 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 00:47:05.204096 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:47:06.138834 sudo[1728]: pam_unix(sudo:session): session closed for user root
Aug 13 00:47:06.147171 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 13 00:47:06.147618 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:47:06.159762 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 00:47:06.220742 augenrules[1750]: No rules
Aug 13 00:47:06.222928 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:47:06.223311 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 00:47:06.224680 sudo[1727]: pam_unix(sudo:session): session closed for user root
Aug 13 00:47:06.226239 sshd[1726]: Connection closed by 10.0.0.1 port 51932
Aug 13 00:47:06.226574 sshd-session[1724]: pam_unix(sshd:session): session closed for user core
Aug 13 00:47:06.239296 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:51932.service: Deactivated successfully.
Aug 13 00:47:06.241228 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 00:47:06.241994 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit.
Aug 13 00:47:06.244886 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:51946.service - OpenSSH per-connection server daemon (10.0.0.1:51946).
Aug 13 00:47:06.245528 systemd-logind[1509]: Removed session 6.
Aug 13 00:47:06.293899 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 51946 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:47:06.295419 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:06.300426 systemd-logind[1509]: New session 7 of user core.
Aug 13 00:47:06.308586 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 00:47:06.364320 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:47:06.364699 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:47:07.019681 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 00:47:07.040864 (dockerd)[1783]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 00:47:07.432342 dockerd[1783]: time="2025-08-13T00:47:07.432148478Z" level=info msg="Starting up"
Aug 13 00:47:07.434807 dockerd[1783]: time="2025-08-13T00:47:07.434776856Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Aug 13 00:47:08.320991 dockerd[1783]: time="2025-08-13T00:47:08.320899816Z" level=info msg="Loading containers: start."
Aug 13 00:47:08.332496 kernel: Initializing XFRM netlink socket
Aug 13 00:47:08.651709 systemd-networkd[1461]: docker0: Link UP
Aug 13 00:47:08.658373 dockerd[1783]: time="2025-08-13T00:47:08.658285715Z" level=info msg="Loading containers: done."
Aug 13 00:47:08.678236 dockerd[1783]: time="2025-08-13T00:47:08.678159839Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:47:08.678418 dockerd[1783]: time="2025-08-13T00:47:08.678290133Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Aug 13 00:47:08.678418 dockerd[1783]: time="2025-08-13T00:47:08.678414015Z" level=info msg="Initializing buildkit"
Aug 13 00:47:08.715248 dockerd[1783]: time="2025-08-13T00:47:08.715189700Z" level=info msg="Completed buildkit initialization"
Aug 13 00:47:08.719511 dockerd[1783]: time="2025-08-13T00:47:08.719479844Z" level=info msg="Daemon has completed initialization"
Aug 13 00:47:08.719598 dockerd[1783]: time="2025-08-13T00:47:08.719555305Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:47:08.720252 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 00:47:09.856933 containerd[1532]: time="2025-08-13T00:47:09.856862436Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 00:47:11.312506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597254504.mount: Deactivated successfully.
Aug 13 00:47:14.156191 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:47:14.158497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:47:14.641754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:47:14.647143 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:47:15.624028 kubelet[2054]: E0813 00:47:15.623933 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:47:15.631617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:47:15.631825 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:47:15.632334 systemd[1]: kubelet.service: Consumed 348ms CPU time, 111.2M memory peak.
Aug 13 00:47:17.704384 containerd[1532]: time="2025-08-13T00:47:17.704277367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:17.850102 containerd[1532]: time="2025-08-13T00:47:17.850002491Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759"
Aug 13 00:47:17.942412 containerd[1532]: time="2025-08-13T00:47:17.942284671Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:18.020072 containerd[1532]: time="2025-08-13T00:47:18.019873480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:18.021372 containerd[1532]: time="2025-08-13T00:47:18.021313580Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 8.164386132s"
Aug 13 00:47:18.021491 containerd[1532]: time="2025-08-13T00:47:18.021394902Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 00:47:18.022327 containerd[1532]: time="2025-08-13T00:47:18.022278299Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 00:47:23.600781 containerd[1532]: time="2025-08-13T00:47:23.600711539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:23.632441 containerd[1532]: time="2025-08-13T00:47:23.632403452Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245"
Aug 13 00:47:23.673973 containerd[1532]: time="2025-08-13T00:47:23.673881423Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:23.723084 containerd[1532]: time="2025-08-13T00:47:23.723048631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:23.724942 containerd[1532]: time="2025-08-13T00:47:23.724880075Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 5.702569406s"
Aug 13 00:47:23.725023 containerd[1532]: time="2025-08-13T00:47:23.724947241Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 00:47:23.725786 containerd[1532]: time="2025-08-13T00:47:23.725758382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 00:47:25.656254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 00:47:25.658211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:47:26.624631 containerd[1532]: time="2025-08-13T00:47:26.624493119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:26.648905 containerd[1532]: time="2025-08-13T00:47:26.648836984Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700"
Aug 13 00:47:26.669806 containerd[1532]: time="2025-08-13T00:47:26.669745688Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:26.684589 containerd[1532]: time="2025-08-13T00:47:26.684440040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:26.686013 containerd[1532]: time="2025-08-13T00:47:26.685935924Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 2.960136404s"
Aug 13 00:47:26.686013 containerd[1532]: time="2025-08-13T00:47:26.686014261Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 00:47:26.692420 containerd[1532]: time="2025-08-13T00:47:26.692341365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 00:47:26.779732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:47:26.801320 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:47:27.589560 kubelet[2079]: E0813 00:47:27.589432 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:47:27.593805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:47:27.594029 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:47:27.594542 systemd[1]: kubelet.service: Consumed 373ms CPU time, 111.4M memory peak.
Aug 13 00:47:29.398938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253252849.mount: Deactivated successfully.
Aug 13 00:47:30.684241 containerd[1532]: time="2025-08-13T00:47:30.684129396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:30.732403 containerd[1532]: time="2025-08-13T00:47:30.732310125Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612"
Aug 13 00:47:30.905561 containerd[1532]: time="2025-08-13T00:47:30.905482503Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:30.913625 containerd[1532]: time="2025-08-13T00:47:30.913546122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:30.914076 containerd[1532]: time="2025-08-13T00:47:30.914020141Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 4.221610629s"
Aug 13 00:47:30.914142 containerd[1532]: time="2025-08-13T00:47:30.914080665Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 00:47:30.914946 containerd[1532]: time="2025-08-13T00:47:30.914683866Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:47:31.469869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388276588.mount: Deactivated successfully.
Aug 13 00:47:32.794228 containerd[1532]: time="2025-08-13T00:47:32.794153821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:32.802574 containerd[1532]: time="2025-08-13T00:47:32.802486405Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Aug 13 00:47:32.805884 containerd[1532]: time="2025-08-13T00:47:32.805833681Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:32.808739 containerd[1532]: time="2025-08-13T00:47:32.808697731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:32.809673 containerd[1532]: time="2025-08-13T00:47:32.809640238Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.894920596s"
Aug 13 00:47:32.809724 containerd[1532]: time="2025-08-13T00:47:32.809673891Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 00:47:32.810260 containerd[1532]: time="2025-08-13T00:47:32.810168789Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:47:33.593841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657152785.mount: Deactivated successfully.
Aug 13 00:47:33.600594 containerd[1532]: time="2025-08-13T00:47:33.600542092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:47:33.601536 containerd[1532]: time="2025-08-13T00:47:33.601412259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Aug 13 00:47:33.602501 containerd[1532]: time="2025-08-13T00:47:33.602456348Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:47:33.604956 containerd[1532]: time="2025-08-13T00:47:33.604885850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:47:33.605467 containerd[1532]: time="2025-08-13T00:47:33.605423958Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 795.205347ms"
Aug 13 00:47:33.605532 containerd[1532]: time="2025-08-13T00:47:33.605477958Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 00:47:33.606021 containerd[1532]: time="2025-08-13T00:47:33.605961426Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 00:47:34.759217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542083201.mount: Deactivated successfully.
Aug 13 00:47:37.657715 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 13 00:47:37.660873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:47:38.184410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:47:38.209795 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:47:38.258192 kubelet[2168]: E0813 00:47:38.258110 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:47:38.262365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:47:38.262578 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:47:38.262962 systemd[1]: kubelet.service: Consumed 265ms CPU time, 110.6M memory peak.
Aug 13 00:47:41.100766 containerd[1532]: time="2025-08-13T00:47:41.100688145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:41.101830 containerd[1532]: time="2025-08-13T00:47:41.101773375Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Aug 13 00:47:41.103157 containerd[1532]: time="2025-08-13T00:47:41.103123085Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:41.105871 containerd[1532]: time="2025-08-13T00:47:41.105816454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:41.106944 containerd[1532]: time="2025-08-13T00:47:41.106902986Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 7.500912326s"
Aug 13 00:47:41.106944 containerd[1532]: time="2025-08-13T00:47:41.106934786Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 00:47:43.923939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:47:43.924171 systemd[1]: kubelet.service: Consumed 265ms CPU time, 110.6M memory peak.
Aug 13 00:47:43.927172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:47:43.952628 systemd[1]: Reload requested from client PID 2252 ('systemctl') (unit session-7.scope)...
Aug 13 00:47:43.952656 systemd[1]: Reloading...
Aug 13 00:47:44.048505 zram_generator::config[2299]: No configuration found.
Aug 13 00:47:44.549318 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:47:44.672104 systemd[1]: Reloading finished in 719 ms.
Aug 13 00:47:44.747491 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 00:47:44.747605 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 00:47:44.747949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:47:44.748002 systemd[1]: kubelet.service: Consumed 162ms CPU time, 98.3M memory peak.
Aug 13 00:47:44.749914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:47:44.928681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:47:44.944845 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 00:47:44.996708 kubelet[2343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:47:44.996708 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:47:44.996708 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:47:44.997121 kubelet[2343]: I0813 00:47:44.996780 2343 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:47:45.175687 kubelet[2343]: I0813 00:47:45.175611 2343 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 00:47:45.175687 kubelet[2343]: I0813 00:47:45.175661 2343 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:47:45.176000 kubelet[2343]: I0813 00:47:45.175974 2343 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 00:47:45.319849 kubelet[2343]: E0813 00:47:45.319704 2343 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:45.320861 kubelet[2343]: I0813 00:47:45.320828 2343 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:47:45.332980 kubelet[2343]: I0813 00:47:45.332945 2343 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 13 00:47:45.342411 kubelet[2343]: I0813 00:47:45.342361 2343 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:47:45.342571 kubelet[2343]: I0813 00:47:45.342547 2343 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 00:47:45.342761 kubelet[2343]: I0813 00:47:45.342708 2343 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:47:45.342960 kubelet[2343]: I0813 00:47:45.342756 2343 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:47:45.343119 kubelet[2343]: I0813 00:47:45.342978 2343 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:47:45.343119 kubelet[2343]: I0813 00:47:45.342988 2343 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 00:47:45.343168 kubelet[2343]: I0813 00:47:45.343157 2343 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:47:45.349594 kubelet[2343]: I0813 00:47:45.349530 2343 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 00:47:45.349659 kubelet[2343]: I0813 00:47:45.349602 2343 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:47:45.349689 kubelet[2343]: I0813 00:47:45.349661 2343 kubelet.go:314] "Adding apiserver pod source"
Aug 13 00:47:45.349729 kubelet[2343]: I0813 00:47:45.349692 2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:47:45.352810 kubelet[2343]: I0813 00:47:45.352778 2343 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Aug 13 00:47:45.353229 kubelet[2343]: I0813 00:47:45.353194 2343 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:47:45.353317 kubelet[2343]: W0813 00:47:45.353285 2343 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 00:47:45.354056 kubelet[2343]: W0813 00:47:45.353990 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused
Aug 13 00:47:45.354168 kubelet[2343]: E0813 00:47:45.354142 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:45.354310 kubelet[2343]: W0813 00:47:45.354207 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused
Aug 13 00:47:45.354410 kubelet[2343]: E0813 00:47:45.354372 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:45.356148 kubelet[2343]: I0813 00:47:45.356118 2343 server.go:1274] "Started kubelet"
Aug 13 00:47:45.356769 kubelet[2343]: I0813 00:47:45.356680 2343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:47:45.358497 kubelet[2343]: I0813 00:47:45.358473 2343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:47:45.361993 kubelet[2343]: I0813 00:47:45.360999 2343 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:47:45.361993 kubelet[2343]: I0813 00:47:45.361170 2343 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:47:45.361993 kubelet[2343]: E0813 00:47:45.360684 2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.114:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.114:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2d1909ff1905 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:47:45.356085509 +0000 UTC m=+0.406531066,LastTimestamp:2025-08-13 00:47:45.356085509 +0000 UTC m=+0.406531066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 13 00:47:45.362529 kubelet[2343]: I0813 00:47:45.362496 2343 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 00:47:45.362966 kubelet[2343]: I0813 00:47:45.362931 2343 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:47:45.364543 kubelet[2343]: I0813 00:47:45.364520 2343 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 00:47:45.364698 kubelet[2343]: I0813 00:47:45.364683 2343 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 00:47:45.364853 kubelet[2343]: I0813 00:47:45.364840 2343 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:47:45.365054 kubelet[2343]: W0813 00:47:45.364986 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused
Aug 13 00:47:45.365054 kubelet[2343]: E0813 00:47:45.365043 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:45.365192 kubelet[2343]: E0813 00:47:45.365139 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:47:45.365729 kubelet[2343]: I0813 00:47:45.365690 2343 factory.go:221] Registration of the systemd container factory successfully
Aug 13 00:47:45.365832 kubelet[2343]: I0813 00:47:45.365806 2343 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:47:45.366189 kubelet[2343]: E0813 00:47:45.366160 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="200ms"
Aug 13 00:47:45.366670 kubelet[2343]: E0813 00:47:45.366644 2343 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:47:45.367201 kubelet[2343]: I0813 00:47:45.367177 2343 factory.go:221] Registration of the containerd container factory successfully
Aug 13 00:47:45.382345 kubelet[2343]: I0813 00:47:45.382246 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:47:45.384428 kubelet[2343]: I0813 00:47:45.384359 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:47:45.384428 kubelet[2343]: I0813 00:47:45.384402 2343 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 00:47:45.384756 kubelet[2343]: I0813 00:47:45.384731 2343 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 00:47:45.384756 kubelet[2343]: I0813 00:47:45.384753 2343 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 00:47:45.384844 kubelet[2343]: I0813 00:47:45.384776 2343 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:47:45.385737 kubelet[2343]: I0813 00:47:45.384439 2343 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 13 00:47:45.386266 kubelet[2343]: E0813 00:47:45.385767 2343 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 00:47:45.387326 kubelet[2343]: W0813 00:47:45.387234 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused
Aug 13 00:47:45.387821 kubelet[2343]: E0813 00:47:45.387768 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:45.465967 kubelet[2343]: E0813 00:47:45.465907 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:47:45.486329 kubelet[2343]: E0813 00:47:45.486242 2343 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 13 00:47:45.566715 kubelet[2343]: E0813 00:47:45.566654 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:47:45.567123 kubelet[2343]: E0813 00:47:45.567084 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="400ms"
Aug 13 00:47:45.667726 kubelet[2343]: E0813 00:47:45.667653 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:47:45.686973 kubelet[2343]: E0813 00:47:45.686880 2343 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 13 00:47:45.768525 kubelet[2343]: E0813 00:47:45.768436 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:47:45.794346 update_engine[1511]: I20250813 00:47:45.794107 1511 update_attempter.cc:509] Updating boot flags...
Aug 13 00:47:45.868706 kubelet[2343]: E0813 00:47:45.868623 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:47:45.968764 kubelet[2343]: E0813 00:47:45.968588 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="800ms"
Aug 13 00:47:45.969657 kubelet[2343]: E0813 00:47:45.969599 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:47:46.070208 kubelet[2343]: E0813 00:47:46.070131 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:47:46.087376 kubelet[2343]: E0813 00:47:46.087319 2343 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 13 00:47:46.171102 kubelet[2343]: E0813 00:47:46.171038 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:47:46.236157 kubelet[2343]: I0813 00:47:46.235960 2343 policy_none.go:49] "None policy: Start"
Aug 13 00:47:46.237319 kubelet[2343]: I0813 00:47:46.237283 2343 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 00:47:46.237402 kubelet[2343]: I0813 00:47:46.237328 2343 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:47:46.257041 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 13 00:47:46.284026 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 13 00:47:46.291059 kubelet[2343]: E0813 00:47:46.285895 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:47:46.298296 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 13 00:47:46.313305 kubelet[2343]: I0813 00:47:46.313257 2343 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 00:47:46.313615 kubelet[2343]: I0813 00:47:46.313590 2343 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 00:47:46.313704 kubelet[2343]: I0813 00:47:46.313657 2343 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 00:47:46.316463 kubelet[2343]: I0813 00:47:46.316407 2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 00:47:46.318573 kubelet[2343]: E0813 00:47:46.318542 2343 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 13 00:47:46.386119 kubelet[2343]: W0813 00:47:46.385883 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused
Aug 13 00:47:46.386119 kubelet[2343]: E0813 00:47:46.386014 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:46.415107 kubelet[2343]: W0813 00:47:46.415022 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused
Aug 13 00:47:46.415213 kubelet[2343]: E0813 00:47:46.415119 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:46.419665 kubelet[2343]: I0813 00:47:46.419514 2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 13 00:47:46.421638 kubelet[2343]: E0813 00:47:46.421588 2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost"
Aug 13 00:47:46.559322 kubelet[2343]: W0813 00:47:46.559093 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused
Aug 13 00:47:46.559322 kubelet[2343]: E0813 00:47:46.559178 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:46.623659 kubelet[2343]: I0813 00:47:46.623601 2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 13 00:47:46.624064 kubelet[2343]: E0813 00:47:46.624034 2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost"
Aug 13 00:47:46.769547 kubelet[2343]: E0813 00:47:46.769480 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="1.6s"
Aug 13 00:47:46.799504 kubelet[2343]: W0813 00:47:46.799417 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused
Aug 13 00:47:46.799504 kubelet[2343]: E0813 00:47:46.799498 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:46.898201 systemd[1]: Created slice kubepods-burstable-pod3a5c073415bbeb3d44fb35062450ef59.slice - libcontainer container kubepods-burstable-pod3a5c073415bbeb3d44fb35062450ef59.slice.
Aug 13 00:47:46.918368 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice - libcontainer container kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice.
Aug 13 00:47:46.938572 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice - libcontainer container kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice.
Aug 13 00:47:46.975675 kubelet[2343]: I0813 00:47:46.975579 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a5c073415bbeb3d44fb35062450ef59-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3a5c073415bbeb3d44fb35062450ef59\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:47:46.975675 kubelet[2343]: I0813 00:47:46.975640 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:47:46.975675 kubelet[2343]: I0813 00:47:46.975659 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:47:46.975675 kubelet[2343]: I0813 00:47:46.975675 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost"
Aug 13 00:47:46.975675 kubelet[2343]: I0813 00:47:46.975688 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a5c073415bbeb3d44fb35062450ef59-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3a5c073415bbeb3d44fb35062450ef59\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:47:46.976039 kubelet[2343]: I0813 00:47:46.975702 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a5c073415bbeb3d44fb35062450ef59-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3a5c073415bbeb3d44fb35062450ef59\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:47:46.976039 kubelet[2343]: I0813 00:47:46.975720 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:47:46.976039 kubelet[2343]: I0813 00:47:46.975771 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:47:46.976039 kubelet[2343]: I0813 00:47:46.975824 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:47:47.026347 kubelet[2343]: I0813 00:47:47.026303 2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 13 00:47:47.026768 kubelet[2343]: E0813 00:47:47.026722 2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost"
Aug 13 00:47:47.216472 containerd[1532]: time="2025-08-13T00:47:47.216383812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3a5c073415bbeb3d44fb35062450ef59,Namespace:kube-system,Attempt:0,}"
Aug 13 00:47:47.222254 containerd[1532]: time="2025-08-13T00:47:47.222192794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}"
Aug 13 00:47:47.242461 containerd[1532]: time="2025-08-13T00:47:47.242379192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}"
Aug 13 00:47:47.257828 containerd[1532]: time="2025-08-13T00:47:47.257730889Z" level=info msg="connecting to shim 02052396899fcc992b1f52d7c96aec904698865e230d915c4491292343b0b73f" address="unix:///run/containerd/s/21e951241f7c3a0c335e20584c67e89b99f6b7ce2b4ac2bc5c55245e039ff31e" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:47:47.310684 containerd[1532]: time="2025-08-13T00:47:47.310627774Z" level=info msg="connecting to shim c1c2a236085958467a70faf8c135dab0cff2b70312c6b347a0cb9d3ec180d43c" address="unix:///run/containerd/s/4269e57ad775dfb6121c53e8202714f5696634643381c25925d6788de3c0c982" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:47:47.321897 containerd[1532]: time="2025-08-13T00:47:47.321303657Z" level=info msg="connecting to shim f3dc48c7b35d9b79c1066e6050da1c0bc543dafcfb8f0af1def111b945d44078" address="unix:///run/containerd/s/4eceb6919a6de988eca3dfba017217288aa29cf280e7a136c5d838fb02eb78a8" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:47:47.344794 systemd[1]: Started cri-containerd-02052396899fcc992b1f52d7c96aec904698865e230d915c4491292343b0b73f.scope - libcontainer container 02052396899fcc992b1f52d7c96aec904698865e230d915c4491292343b0b73f.
Aug 13 00:47:47.374760 systemd[1]: Started cri-containerd-c1c2a236085958467a70faf8c135dab0cff2b70312c6b347a0cb9d3ec180d43c.scope - libcontainer container c1c2a236085958467a70faf8c135dab0cff2b70312c6b347a0cb9d3ec180d43c. Aug 13 00:47:47.379047 systemd[1]: Started cri-containerd-f3dc48c7b35d9b79c1066e6050da1c0bc543dafcfb8f0af1def111b945d44078.scope - libcontainer container f3dc48c7b35d9b79c1066e6050da1c0bc543dafcfb8f0af1def111b945d44078. Aug 13 00:47:47.520144 containerd[1532]: time="2025-08-13T00:47:47.520004764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3a5c073415bbeb3d44fb35062450ef59,Namespace:kube-system,Attempt:0,} returns sandbox id \"02052396899fcc992b1f52d7c96aec904698865e230d915c4491292343b0b73f\"" Aug 13 00:47:47.521479 kubelet[2343]: E0813 00:47:47.521423 2343 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:47.524340 containerd[1532]: time="2025-08-13T00:47:47.524299881Z" level=info msg="CreateContainer within sandbox \"02052396899fcc992b1f52d7c96aec904698865e230d915c4491292343b0b73f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:47:47.695075 containerd[1532]: time="2025-08-13T00:47:47.695013403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1c2a236085958467a70faf8c135dab0cff2b70312c6b347a0cb9d3ec180d43c\"" Aug 13 00:47:47.697610 containerd[1532]: time="2025-08-13T00:47:47.697525064Z" level=info msg="CreateContainer within sandbox \"c1c2a236085958467a70faf8c135dab0cff2b70312c6b347a0cb9d3ec180d43c\" for 
container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:47:47.739431 containerd[1532]: time="2025-08-13T00:47:47.739372712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3dc48c7b35d9b79c1066e6050da1c0bc543dafcfb8f0af1def111b945d44078\"" Aug 13 00:47:47.741855 containerd[1532]: time="2025-08-13T00:47:47.741823329Z" level=info msg="CreateContainer within sandbox \"f3dc48c7b35d9b79c1066e6050da1c0bc543dafcfb8f0af1def111b945d44078\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:47:47.829009 kubelet[2343]: I0813 00:47:47.828872 2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:47:47.829418 kubelet[2343]: E0813 00:47:47.829379 2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Aug 13 00:47:48.370579 kubelet[2343]: E0813 00:47:48.370514 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="3.2s" Aug 13 00:47:48.392622 kubelet[2343]: W0813 00:47:48.392551 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Aug 13 00:47:48.392805 kubelet[2343]: E0813 00:47:48.392639 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:48.619982 containerd[1532]: time="2025-08-13T00:47:48.619929506Z" level=info msg="Container fe05918da581f4a8845ff51decc010259010e13a6a69358da159b42c3e0cc13f: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:48.822428 kubelet[2343]: W0813 00:47:48.822342 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Aug 13 00:47:48.822428 kubelet[2343]: E0813 00:47:48.822434 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:48.824000 containerd[1532]: time="2025-08-13T00:47:48.823958907Z" level=info msg="Container 1b8ddc99ac25a53cf6147530b17c04344a27d3c9bd364b9b580d258dbb894a75: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:48.830946 containerd[1532]: time="2025-08-13T00:47:48.830887188Z" level=info msg="Container 48ac2cd3fd9824d6dd2618850916c7c501887d91846693ed00af00803a515c3a: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:48.843658 containerd[1532]: time="2025-08-13T00:47:48.843600130Z" level=info msg="CreateContainer within sandbox \"c1c2a236085958467a70faf8c135dab0cff2b70312c6b347a0cb9d3ec180d43c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1b8ddc99ac25a53cf6147530b17c04344a27d3c9bd364b9b580d258dbb894a75\"" Aug 13 00:47:48.844374 containerd[1532]: time="2025-08-13T00:47:48.844342771Z" level=info msg="StartContainer for 
\"1b8ddc99ac25a53cf6147530b17c04344a27d3c9bd364b9b580d258dbb894a75\"" Aug 13 00:47:48.845546 containerd[1532]: time="2025-08-13T00:47:48.845485087Z" level=info msg="connecting to shim 1b8ddc99ac25a53cf6147530b17c04344a27d3c9bd364b9b580d258dbb894a75" address="unix:///run/containerd/s/4269e57ad775dfb6121c53e8202714f5696634643381c25925d6788de3c0c982" protocol=ttrpc version=3 Aug 13 00:47:48.849475 containerd[1532]: time="2025-08-13T00:47:48.848410700Z" level=info msg="CreateContainer within sandbox \"02052396899fcc992b1f52d7c96aec904698865e230d915c4491292343b0b73f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fe05918da581f4a8845ff51decc010259010e13a6a69358da159b42c3e0cc13f\"" Aug 13 00:47:48.850040 containerd[1532]: time="2025-08-13T00:47:48.850003293Z" level=info msg="StartContainer for \"fe05918da581f4a8845ff51decc010259010e13a6a69358da159b42c3e0cc13f\"" Aug 13 00:47:48.851746 containerd[1532]: time="2025-08-13T00:47:48.851697957Z" level=info msg="connecting to shim fe05918da581f4a8845ff51decc010259010e13a6a69358da159b42c3e0cc13f" address="unix:///run/containerd/s/21e951241f7c3a0c335e20584c67e89b99f6b7ce2b4ac2bc5c55245e039ff31e" protocol=ttrpc version=3 Aug 13 00:47:48.853201 containerd[1532]: time="2025-08-13T00:47:48.853150008Z" level=info msg="CreateContainer within sandbox \"f3dc48c7b35d9b79c1066e6050da1c0bc543dafcfb8f0af1def111b945d44078\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"48ac2cd3fd9824d6dd2618850916c7c501887d91846693ed00af00803a515c3a\"" Aug 13 00:47:48.853685 containerd[1532]: time="2025-08-13T00:47:48.853590417Z" level=info msg="StartContainer for \"48ac2cd3fd9824d6dd2618850916c7c501887d91846693ed00af00803a515c3a\"" Aug 13 00:47:48.854466 containerd[1532]: time="2025-08-13T00:47:48.854425721Z" level=info msg="connecting to shim 48ac2cd3fd9824d6dd2618850916c7c501887d91846693ed00af00803a515c3a" 
address="unix:///run/containerd/s/4eceb6919a6de988eca3dfba017217288aa29cf280e7a136c5d838fb02eb78a8" protocol=ttrpc version=3 Aug 13 00:47:48.870685 systemd[1]: Started cri-containerd-1b8ddc99ac25a53cf6147530b17c04344a27d3c9bd364b9b580d258dbb894a75.scope - libcontainer container 1b8ddc99ac25a53cf6147530b17c04344a27d3c9bd364b9b580d258dbb894a75. Aug 13 00:47:48.884706 systemd[1]: Started cri-containerd-48ac2cd3fd9824d6dd2618850916c7c501887d91846693ed00af00803a515c3a.scope - libcontainer container 48ac2cd3fd9824d6dd2618850916c7c501887d91846693ed00af00803a515c3a. Aug 13 00:47:48.889581 systemd[1]: Started cri-containerd-fe05918da581f4a8845ff51decc010259010e13a6a69358da159b42c3e0cc13f.scope - libcontainer container fe05918da581f4a8845ff51decc010259010e13a6a69358da159b42c3e0cc13f. Aug 13 00:47:48.961017 containerd[1532]: time="2025-08-13T00:47:48.960800535Z" level=info msg="StartContainer for \"1b8ddc99ac25a53cf6147530b17c04344a27d3c9bd364b9b580d258dbb894a75\" returns successfully" Aug 13 00:47:48.972481 containerd[1532]: time="2025-08-13T00:47:48.972417318Z" level=info msg="StartContainer for \"fe05918da581f4a8845ff51decc010259010e13a6a69358da159b42c3e0cc13f\" returns successfully" Aug 13 00:47:48.989926 kubelet[2343]: W0813 00:47:48.989831 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Aug 13 00:47:48.989926 kubelet[2343]: E0813 00:47:48.989929 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:49.004632 containerd[1532]: time="2025-08-13T00:47:49.004561587Z" 
level=info msg="StartContainer for \"48ac2cd3fd9824d6dd2618850916c7c501887d91846693ed00af00803a515c3a\" returns successfully" Aug 13 00:47:49.438475 kubelet[2343]: I0813 00:47:49.438347 2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:47:50.486066 kubelet[2343]: I0813 00:47:50.486007 2343 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:47:50.486066 kubelet[2343]: E0813 00:47:50.486053 2343 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 00:47:50.494705 kubelet[2343]: E0813 00:47:50.494671 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:50.595494 kubelet[2343]: E0813 00:47:50.595210 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:50.696375 kubelet[2343]: E0813 00:47:50.696307 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:50.797232 kubelet[2343]: E0813 00:47:50.797071 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:50.898115 kubelet[2343]: E0813 00:47:50.898063 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:50.998950 kubelet[2343]: E0813 00:47:50.998895 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:51.099604 kubelet[2343]: E0813 00:47:51.099441 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:51.200360 kubelet[2343]: E0813 00:47:51.200269 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 
00:47:51.301402 kubelet[2343]: E0813 00:47:51.301306 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:51.402127 kubelet[2343]: E0813 00:47:51.401989 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:51.502809 kubelet[2343]: E0813 00:47:51.502750 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:51.603561 kubelet[2343]: E0813 00:47:51.603512 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:51.704163 kubelet[2343]: E0813 00:47:51.704113 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:51.804837 kubelet[2343]: E0813 00:47:51.804777 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:52.356472 kubelet[2343]: I0813 00:47:52.356410 2343 apiserver.go:52] "Watching apiserver" Aug 13 00:47:52.365385 kubelet[2343]: I0813 00:47:52.365358 2343 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:47:52.403790 systemd[1]: Reload requested from client PID 2633 ('systemctl') (unit session-7.scope)... Aug 13 00:47:52.403806 systemd[1]: Reloading... Aug 13 00:47:52.515491 zram_generator::config[2682]: No configuration found. Aug 13 00:47:52.711331 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:47:52.869715 systemd[1]: Reloading finished in 465 ms. Aug 13 00:47:52.898421 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 00:47:52.922964 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:47:52.923327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:52.923388 systemd[1]: kubelet.service: Consumed 848ms CPU time, 131.5M memory peak. Aug 13 00:47:52.926403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:53.147628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:53.169007 (kubelet)[2721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:47:53.218398 kubelet[2721]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:47:53.218398 kubelet[2721]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:47:53.218398 kubelet[2721]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:47:53.218879 kubelet[2721]: I0813 00:47:53.218606 2721 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:47:53.226694 kubelet[2721]: I0813 00:47:53.226650 2721 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:47:53.226694 kubelet[2721]: I0813 00:47:53.226679 2721 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:47:53.227070 kubelet[2721]: I0813 00:47:53.227035 2721 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:47:53.228529 kubelet[2721]: I0813 00:47:53.228498 2721 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:47:53.230892 kubelet[2721]: I0813 00:47:53.230825 2721 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:47:53.234884 kubelet[2721]: I0813 00:47:53.234844 2721 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:47:53.240307 kubelet[2721]: I0813 00:47:53.240257 2721 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:47:53.240409 kubelet[2721]: I0813 00:47:53.240386 2721 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:47:53.240601 kubelet[2721]: I0813 00:47:53.240565 2721 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:47:53.240763 kubelet[2721]: I0813 00:47:53.240589 2721 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Aug 13 00:47:53.240842 kubelet[2721]: I0813 00:47:53.240770 2721 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:47:53.240842 kubelet[2721]: I0813 00:47:53.240780 2721 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:47:53.240842 kubelet[2721]: I0813 00:47:53.240807 2721 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:47:53.241131 kubelet[2721]: I0813 00:47:53.240910 2721 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:47:53.241131 kubelet[2721]: I0813 00:47:53.240923 2721 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:47:53.241131 kubelet[2721]: I0813 00:47:53.240956 2721 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:47:53.241131 kubelet[2721]: I0813 00:47:53.240982 2721 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:47:53.241928 kubelet[2721]: I0813 00:47:53.241908 2721 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:47:53.242590 kubelet[2721]: I0813 00:47:53.242572 2721 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:47:53.243085 kubelet[2721]: I0813 00:47:53.243069 2721 server.go:1274] "Started kubelet" Aug 13 00:47:53.243749 kubelet[2721]: I0813 00:47:53.243719 2721 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:47:53.244540 kubelet[2721]: I0813 00:47:53.244498 2721 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:47:53.244845 kubelet[2721]: I0813 00:47:53.244814 2721 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:47:53.244845 kubelet[2721]: I0813 00:47:53.244832 2721 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 
00:47:53.244945 kubelet[2721]: I0813 00:47:53.244829 2721 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:47:53.251194 kubelet[2721]: I0813 00:47:53.251148 2721 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:47:53.251936 kubelet[2721]: I0813 00:47:53.251905 2721 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:47:53.253176 kubelet[2721]: I0813 00:47:53.251996 2721 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:47:53.253176 kubelet[2721]: I0813 00:47:53.252172 2721 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:47:53.253904 kubelet[2721]: I0813 00:47:53.253872 2721 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:47:53.254081 kubelet[2721]: I0813 00:47:53.254012 2721 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:47:53.256325 kubelet[2721]: I0813 00:47:53.256302 2721 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:47:53.256679 kubelet[2721]: E0813 00:47:53.256643 2721 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:47:53.267329 kubelet[2721]: I0813 00:47:53.267209 2721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:47:53.269488 kubelet[2721]: I0813 00:47:53.269418 2721 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:47:53.269488 kubelet[2721]: I0813 00:47:53.269476 2721 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:47:53.269488 kubelet[2721]: I0813 00:47:53.269496 2721 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:47:53.269724 kubelet[2721]: E0813 00:47:53.269559 2721 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:47:53.302496 kubelet[2721]: I0813 00:47:53.302435 2721 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:47:53.302496 kubelet[2721]: I0813 00:47:53.302478 2721 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:47:53.302496 kubelet[2721]: I0813 00:47:53.302512 2721 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:47:53.302711 kubelet[2721]: I0813 00:47:53.302692 2721 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:47:53.302739 kubelet[2721]: I0813 00:47:53.302705 2721 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:47:53.302739 kubelet[2721]: I0813 00:47:53.302724 2721 policy_none.go:49] "None policy: Start" Aug 13 00:47:53.303588 kubelet[2721]: I0813 00:47:53.303558 2721 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:47:53.303639 kubelet[2721]: I0813 00:47:53.303594 2721 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:47:53.303818 kubelet[2721]: I0813 00:47:53.303796 2721 state_mem.go:75] "Updated machine memory state" Aug 13 00:47:53.309159 kubelet[2721]: I0813 00:47:53.309133 2721 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:47:53.309381 kubelet[2721]: I0813 00:47:53.309348 2721 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:47:53.309708 kubelet[2721]: I0813 00:47:53.309381 2721 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:47:53.309708 kubelet[2721]: I0813 00:47:53.309644 2721 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:47:53.414021 kubelet[2721]: I0813 00:47:53.413944 2721 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:47:53.453619 kubelet[2721]: I0813 00:47:53.453573 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:47:53.453619 kubelet[2721]: I0813 00:47:53.453614 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a5c073415bbeb3d44fb35062450ef59-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3a5c073415bbeb3d44fb35062450ef59\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:47:53.453823 kubelet[2721]: I0813 00:47:53.453636 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:47:53.453823 kubelet[2721]: I0813 00:47:53.453652 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:47:53.453823 
kubelet[2721]: I0813 00:47:53.453668 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:47:53.453823 kubelet[2721]: I0813 00:47:53.453687 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:47:53.453823 kubelet[2721]: I0813 00:47:53.453702 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a5c073415bbeb3d44fb35062450ef59-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3a5c073415bbeb3d44fb35062450ef59\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:47:53.453968 kubelet[2721]: I0813 00:47:53.453725 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a5c073415bbeb3d44fb35062450ef59-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3a5c073415bbeb3d44fb35062450ef59\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:47:53.453968 kubelet[2721]: I0813 00:47:53.453741 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:47:53.471679 kubelet[2721]: E0813 00:47:53.471595 2721 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:47:53.472288 kubelet[2721]: E0813 00:47:53.472249 2721 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 00:47:53.487339 kubelet[2721]: I0813 00:47:53.487296 2721 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 00:47:53.487413 kubelet[2721]: I0813 00:47:53.487388 2721 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:47:53.631545 sudo[2756]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:47:53.631929 sudo[2756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 00:47:54.169508 sudo[2756]: pam_unix(sudo:session): session closed for user root Aug 13 00:47:54.242441 kubelet[2721]: I0813 00:47:54.242173 2721 apiserver.go:52] "Watching apiserver" Aug 13 00:47:54.252192 kubelet[2721]: I0813 00:47:54.252139 2721 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:47:54.290031 kubelet[2721]: E0813 00:47:54.289985 2721 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:47:54.334477 kubelet[2721]: I0813 00:47:54.331867 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.3318365 podStartE2EDuration="2.3318365s" podCreationTimestamp="2025-08-13 00:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:54.320830171 +0000 UTC m=+1.147837586" watchObservedRunningTime="2025-08-13 
00:47:54.3318365 +0000 UTC m=+1.158843905" Aug 13 00:47:54.334477 kubelet[2721]: I0813 00:47:54.332020 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.33201378 podStartE2EDuration="2.33201378s" podCreationTimestamp="2025-08-13 00:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:54.33173496 +0000 UTC m=+1.158742375" watchObservedRunningTime="2025-08-13 00:47:54.33201378 +0000 UTC m=+1.159021195" Aug 13 00:47:55.675285 sudo[1762]: pam_unix(sudo:session): session closed for user root Aug 13 00:47:55.676885 sshd[1761]: Connection closed by 10.0.0.1 port 51946 Aug 13 00:47:55.677562 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:55.681431 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:51946.service: Deactivated successfully. Aug 13 00:47:55.683579 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:47:55.683795 systemd[1]: session-7.scope: Consumed 5.252s CPU time, 259.5M memory peak. Aug 13 00:47:55.685532 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:47:55.686827 systemd-logind[1509]: Removed session 7. Aug 13 00:47:57.831909 kubelet[2721]: I0813 00:47:57.831854 2721 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:47:57.832533 kubelet[2721]: I0813 00:47:57.832420 2721 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:47:57.832567 containerd[1532]: time="2025-08-13T00:47:57.832238800Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:47:58.218907 kubelet[2721]: I0813 00:47:58.218799 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.218731144 podStartE2EDuration="5.218731144s" podCreationTimestamp="2025-08-13 00:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:54.342217611 +0000 UTC m=+1.169225026" watchObservedRunningTime="2025-08-13 00:47:58.218731144 +0000 UTC m=+5.045738559"
Aug 13 00:47:58.237354 systemd[1]: Created slice kubepods-besteffort-podb87c4dcd_0ba8_4ff9_b196_9f1e33a27dd5.slice - libcontainer container kubepods-besteffort-podb87c4dcd_0ba8_4ff9_b196_9f1e33a27dd5.slice.
Aug 13 00:47:58.255665 systemd[1]: Created slice kubepods-burstable-pod0022fdea_b6b3_4b2e_a435_150ec8018ca4.slice - libcontainer container kubepods-burstable-pod0022fdea_b6b3_4b2e_a435_150ec8018ca4.slice.
Aug 13 00:47:58.278011 kubelet[2721]: I0813 00:47:58.277929 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cilium-cgroup\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278011 kubelet[2721]: I0813 00:47:58.277995 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5-kube-proxy\") pod \"kube-proxy-v9frv\" (UID: \"b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5\") " pod="kube-system/kube-proxy-v9frv"
Aug 13 00:47:58.278228 kubelet[2721]: I0813 00:47:58.278039 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5-lib-modules\") pod \"kube-proxy-v9frv\" (UID: \"b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5\") " pod="kube-system/kube-proxy-v9frv"
Aug 13 00:47:58.278228 kubelet[2721]: I0813 00:47:58.278088 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-lib-modules\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278228 kubelet[2721]: I0813 00:47:58.278121 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-xtables-lock\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278228 kubelet[2721]: I0813 00:47:58.278156 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-etc-cni-netd\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278228 kubelet[2721]: I0813 00:47:58.278177 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-bpf-maps\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278228 kubelet[2721]: I0813 00:47:58.278199 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0022fdea-b6b3-4b2e-a435-150ec8018ca4-clustermesh-secrets\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278376 kubelet[2721]: I0813 00:47:58.278222 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-host-proc-sys-kernel\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278376 kubelet[2721]: I0813 00:47:58.278238 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5-xtables-lock\") pod \"kube-proxy-v9frv\" (UID: \"b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5\") " pod="kube-system/kube-proxy-v9frv"
Aug 13 00:47:58.278376 kubelet[2721]: I0813 00:47:58.278258 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cilium-run\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278376 kubelet[2721]: I0813 00:47:58.278279 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-host-proc-sys-net\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278376 kubelet[2721]: I0813 00:47:58.278315 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ss6t\" (UniqueName: \"kubernetes.io/projected/b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5-kube-api-access-4ss6t\") pod \"kube-proxy-v9frv\" (UID: \"b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5\") " pod="kube-system/kube-proxy-v9frv"
Aug 13 00:47:58.278515 kubelet[2721]: I0813 00:47:58.278350 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-hostproc\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278515 kubelet[2721]: I0813 00:47:58.278392 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0022fdea-b6b3-4b2e-a435-150ec8018ca4-hubble-tls\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278515 kubelet[2721]: I0813 00:47:58.278427 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl82s\" (UniqueName: \"kubernetes.io/projected/0022fdea-b6b3-4b2e-a435-150ec8018ca4-kube-api-access-dl82s\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278515 kubelet[2721]: I0813 00:47:58.278475 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cni-path\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.278515 kubelet[2721]: I0813 00:47:58.278497 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cilium-config-path\") pod \"cilium-d65w4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") " pod="kube-system/cilium-d65w4"
Aug 13 00:47:58.387502 kubelet[2721]: E0813 00:47:58.384708 2721 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Aug 13 00:47:58.387502 kubelet[2721]: E0813 00:47:58.384757 2721 projected.go:194] Error preparing data for projected volume kube-api-access-4ss6t for pod kube-system/kube-proxy-v9frv: configmap "kube-root-ca.crt" not found
Aug 13 00:47:58.387502 kubelet[2721]: E0813 00:47:58.384845 2721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5-kube-api-access-4ss6t podName:b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5 nodeName:}" failed. No retries permitted until 2025-08-13 00:47:58.884818798 +0000 UTC m=+5.711826213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4ss6t" (UniqueName: "kubernetes.io/projected/b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5-kube-api-access-4ss6t") pod "kube-proxy-v9frv" (UID: "b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5") : configmap "kube-root-ca.crt" not found
Aug 13 00:47:58.391563 kubelet[2721]: E0813 00:47:58.391480 2721 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Aug 13 00:47:58.391563 kubelet[2721]: E0813 00:47:58.391510 2721 projected.go:194] Error preparing data for projected volume kube-api-access-dl82s for pod kube-system/cilium-d65w4: configmap "kube-root-ca.crt" not found
Aug 13 00:47:58.394527 kubelet[2721]: E0813 00:47:58.394126 2721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0022fdea-b6b3-4b2e-a435-150ec8018ca4-kube-api-access-dl82s podName:0022fdea-b6b3-4b2e-a435-150ec8018ca4 nodeName:}" failed. No retries permitted until 2025-08-13 00:47:58.894096863 +0000 UTC m=+5.721104278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dl82s" (UniqueName: "kubernetes.io/projected/0022fdea-b6b3-4b2e-a435-150ec8018ca4-kube-api-access-dl82s") pod "cilium-d65w4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4") : configmap "kube-root-ca.crt" not found
Aug 13 00:47:58.879498 systemd[1]: Created slice kubepods-besteffort-pod26300763_a663_47e3_997d_be33d222eba4.slice - libcontainer container kubepods-besteffort-pod26300763_a663_47e3_997d_be33d222eba4.slice.
Aug 13 00:47:58.883478 kubelet[2721]: I0813 00:47:58.881747 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26300763-a663-47e3-997d-be33d222eba4-cilium-config-path\") pod \"cilium-operator-5d85765b45-h5j5s\" (UID: \"26300763-a663-47e3-997d-be33d222eba4\") " pod="kube-system/cilium-operator-5d85765b45-h5j5s"
Aug 13 00:47:58.884111 kubelet[2721]: I0813 00:47:58.884059 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s575h\" (UniqueName: \"kubernetes.io/projected/26300763-a663-47e3-997d-be33d222eba4-kube-api-access-s575h\") pod \"cilium-operator-5d85765b45-h5j5s\" (UID: \"26300763-a663-47e3-997d-be33d222eba4\") " pod="kube-system/cilium-operator-5d85765b45-h5j5s"
Aug 13 00:47:59.148864 containerd[1532]: time="2025-08-13T00:47:59.148611281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v9frv,Uid:b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5,Namespace:kube-system,Attempt:0,}"
Aug 13 00:47:59.162578 containerd[1532]: time="2025-08-13T00:47:59.162519120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d65w4,Uid:0022fdea-b6b3-4b2e-a435-150ec8018ca4,Namespace:kube-system,Attempt:0,}"
Aug 13 00:47:59.185291 containerd[1532]: time="2025-08-13T00:47:59.185247454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-h5j5s,Uid:26300763-a663-47e3-997d-be33d222eba4,Namespace:kube-system,Attempt:0,}"
Aug 13 00:47:59.431417 containerd[1532]: time="2025-08-13T00:47:59.431359772Z" level=info msg="connecting to shim e3054426761e7c46e6edd2bf803526078f69bb3b63d8e102115142fa5e096e3f" address="unix:///run/containerd/s/7ff6d624ea7d0052e6f29938e57982f06f3b9791b83662c63c9e87f4816138b1" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:47:59.442104 containerd[1532]: time="2025-08-13T00:47:59.442029803Z" level=info msg="connecting to shim 0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef" address="unix:///run/containerd/s/52ee9920ca510d8cf5c5914dd79decd770e32f563f671376454d5cc8b163bfaa" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:47:59.464817 containerd[1532]: time="2025-08-13T00:47:59.464746896Z" level=info msg="connecting to shim 7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae" address="unix:///run/containerd/s/327e5301c95f2502e12bde3f04505b246c1ba733f401de563ff34a9018c32b2d" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:47:59.483747 systemd[1]: Started cri-containerd-e3054426761e7c46e6edd2bf803526078f69bb3b63d8e102115142fa5e096e3f.scope - libcontainer container e3054426761e7c46e6edd2bf803526078f69bb3b63d8e102115142fa5e096e3f.
Aug 13 00:47:59.489207 systemd[1]: Started cri-containerd-0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef.scope - libcontainer container 0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef.
Aug 13 00:47:59.497223 systemd[1]: Started cri-containerd-7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae.scope - libcontainer container 7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae.
Aug 13 00:47:59.534941 containerd[1532]: time="2025-08-13T00:47:59.534855049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d65w4,Uid:0022fdea-b6b3-4b2e-a435-150ec8018ca4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\""
Aug 13 00:47:59.535961 containerd[1532]: time="2025-08-13T00:47:59.535926131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v9frv,Uid:b87c4dcd-0ba8-4ff9-b196-9f1e33a27dd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3054426761e7c46e6edd2bf803526078f69bb3b63d8e102115142fa5e096e3f\""
Aug 13 00:47:59.537948 containerd[1532]: time="2025-08-13T00:47:59.537903324Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 00:47:59.539470 containerd[1532]: time="2025-08-13T00:47:59.539417923Z" level=info msg="CreateContainer within sandbox \"e3054426761e7c46e6edd2bf803526078f69bb3b63d8e102115142fa5e096e3f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 00:47:59.557475 containerd[1532]: time="2025-08-13T00:47:59.557416262Z" level=info msg="Container fe3bc698e64f6f953d61d3b25893f185c16998823f45a7130f9428f55abb549b: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:47:59.569320 containerd[1532]: time="2025-08-13T00:47:59.569277637Z" level=info msg="CreateContainer within sandbox \"e3054426761e7c46e6edd2bf803526078f69bb3b63d8e102115142fa5e096e3f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe3bc698e64f6f953d61d3b25893f185c16998823f45a7130f9428f55abb549b\""
Aug 13 00:47:59.570883 containerd[1532]: time="2025-08-13T00:47:59.570856276Z" level=info msg="StartContainer for \"fe3bc698e64f6f953d61d3b25893f185c16998823f45a7130f9428f55abb549b\""
Aug 13 00:47:59.571366 containerd[1532]: time="2025-08-13T00:47:59.571338456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-h5j5s,Uid:26300763-a663-47e3-997d-be33d222eba4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae\""
Aug 13 00:47:59.572257 containerd[1532]: time="2025-08-13T00:47:59.572216607Z" level=info msg="connecting to shim fe3bc698e64f6f953d61d3b25893f185c16998823f45a7130f9428f55abb549b" address="unix:///run/containerd/s/7ff6d624ea7d0052e6f29938e57982f06f3b9791b83662c63c9e87f4816138b1" protocol=ttrpc version=3
Aug 13 00:47:59.597590 systemd[1]: Started cri-containerd-fe3bc698e64f6f953d61d3b25893f185c16998823f45a7130f9428f55abb549b.scope - libcontainer container fe3bc698e64f6f953d61d3b25893f185c16998823f45a7130f9428f55abb549b.
Aug 13 00:47:59.657683 containerd[1532]: time="2025-08-13T00:47:59.657644586Z" level=info msg="StartContainer for \"fe3bc698e64f6f953d61d3b25893f185c16998823f45a7130f9428f55abb549b\" returns successfully"
Aug 13 00:48:00.311300 kubelet[2721]: I0813 00:48:00.311217 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v9frv" podStartSLOduration=2.311193598 podStartE2EDuration="2.311193598s" podCreationTimestamp="2025-08-13 00:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:48:00.310972585 +0000 UTC m=+7.137980020" watchObservedRunningTime="2025-08-13 00:48:00.311193598 +0000 UTC m=+7.138201013"
Aug 13 00:48:06.482316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1834649108.mount: Deactivated successfully.
Aug 13 00:48:10.345297 containerd[1532]: time="2025-08-13T00:48:10.345215884Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:48:10.345984 containerd[1532]: time="2025-08-13T00:48:10.345947613Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Aug 13 00:48:10.347387 containerd[1532]: time="2025-08-13T00:48:10.347313690Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:48:10.349825 containerd[1532]: time="2025-08-13T00:48:10.349105152Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.811163558s"
Aug 13 00:48:10.349985 containerd[1532]: time="2025-08-13T00:48:10.349952027Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 00:48:10.356153 containerd[1532]: time="2025-08-13T00:48:10.356098882Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 00:48:10.364519 containerd[1532]: time="2025-08-13T00:48:10.364418923Z" level=info msg="CreateContainer within sandbox \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:48:10.374329 containerd[1532]: time="2025-08-13T00:48:10.374261272Z" level=info msg="Container 14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:48:10.379162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3730476563.mount: Deactivated successfully.
Aug 13 00:48:10.382424 containerd[1532]: time="2025-08-13T00:48:10.382350522Z" level=info msg="CreateContainer within sandbox \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\""
Aug 13 00:48:10.383259 containerd[1532]: time="2025-08-13T00:48:10.383025544Z" level=info msg="StartContainer for \"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\""
Aug 13 00:48:10.384139 containerd[1532]: time="2025-08-13T00:48:10.384105146Z" level=info msg="connecting to shim 14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0" address="unix:///run/containerd/s/52ee9920ca510d8cf5c5914dd79decd770e32f563f671376454d5cc8b163bfaa" protocol=ttrpc version=3
Aug 13 00:48:10.446610 systemd[1]: Started cri-containerd-14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0.scope - libcontainer container 14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0.
Aug 13 00:48:10.482093 containerd[1532]: time="2025-08-13T00:48:10.481980389Z" level=info msg="StartContainer for \"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\" returns successfully"
Aug 13 00:48:10.491840 systemd[1]: cri-containerd-14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0.scope: Deactivated successfully.
Aug 13 00:48:10.495730 containerd[1532]: time="2025-08-13T00:48:10.495677043Z" level=info msg="received exit event container_id:\"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\" id:\"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\" pid:3139 exited_at:{seconds:1755046090 nanos:495237500}"
Aug 13 00:48:10.495861 containerd[1532]: time="2025-08-13T00:48:10.495836290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\" id:\"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\" pid:3139 exited_at:{seconds:1755046090 nanos:495237500}"
Aug 13 00:48:10.516102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0-rootfs.mount: Deactivated successfully.
Aug 13 00:48:11.438193 containerd[1532]: time="2025-08-13T00:48:11.438132925Z" level=info msg="CreateContainer within sandbox \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:48:11.590148 containerd[1532]: time="2025-08-13T00:48:11.590095146Z" level=info msg="Container 6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:48:11.602766 containerd[1532]: time="2025-08-13T00:48:11.602711580Z" level=info msg="CreateContainer within sandbox \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\""
Aug 13 00:48:11.603257 containerd[1532]: time="2025-08-13T00:48:11.603231404Z" level=info msg="StartContainer for \"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\""
Aug 13 00:48:11.604229 containerd[1532]: time="2025-08-13T00:48:11.604204245Z" level=info msg="connecting to shim 6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e" address="unix:///run/containerd/s/52ee9920ca510d8cf5c5914dd79decd770e32f563f671376454d5cc8b163bfaa" protocol=ttrpc version=3
Aug 13 00:48:11.642662 systemd[1]: Started cri-containerd-6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e.scope - libcontainer container 6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e.
Aug 13 00:48:11.676358 containerd[1532]: time="2025-08-13T00:48:11.676291622Z" level=info msg="StartContainer for \"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\" returns successfully"
Aug 13 00:48:11.689900 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:48:11.690211 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:48:11.691185 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:48:11.693173 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:48:11.694682 containerd[1532]: time="2025-08-13T00:48:11.694637510Z" level=info msg="received exit event container_id:\"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\" id:\"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\" pid:3184 exited_at:{seconds:1755046091 nanos:693580151}"
Aug 13 00:48:11.694682 containerd[1532]: time="2025-08-13T00:48:11.694676092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\" id:\"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\" pid:3184 exited_at:{seconds:1755046091 nanos:693580151}"
Aug 13 00:48:11.695642 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:48:11.696426 systemd[1]: cri-containerd-6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e.scope: Deactivated successfully.
Aug 13 00:48:11.715632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e-rootfs.mount: Deactivated successfully.
Aug 13 00:48:11.728939 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:48:12.441482 containerd[1532]: time="2025-08-13T00:48:12.441007907Z" level=info msg="CreateContainer within sandbox \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:48:12.795493 containerd[1532]: time="2025-08-13T00:48:12.795350118Z" level=info msg="Container 655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:48:12.800128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2619856442.mount: Deactivated successfully.
Aug 13 00:48:12.819157 containerd[1532]: time="2025-08-13T00:48:12.819085609Z" level=info msg="CreateContainer within sandbox \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\""
Aug 13 00:48:12.819705 containerd[1532]: time="2025-08-13T00:48:12.819664573Z" level=info msg="StartContainer for \"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\""
Aug 13 00:48:12.821077 containerd[1532]: time="2025-08-13T00:48:12.821046500Z" level=info msg="connecting to shim 655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749" address="unix:///run/containerd/s/52ee9920ca510d8cf5c5914dd79decd770e32f563f671376454d5cc8b163bfaa" protocol=ttrpc version=3
Aug 13 00:48:12.847597 systemd[1]: Started cri-containerd-655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749.scope - libcontainer container 655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749.
Aug 13 00:48:12.894210 systemd[1]: cri-containerd-655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749.scope: Deactivated successfully.
Aug 13 00:48:12.895140 containerd[1532]: time="2025-08-13T00:48:12.895096176Z" level=info msg="StartContainer for \"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\" returns successfully"
Aug 13 00:48:12.896295 containerd[1532]: time="2025-08-13T00:48:12.896209339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\" id:\"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\" pid:3239 exited_at:{seconds:1755046092 nanos:895894620}"
Aug 13 00:48:12.896295 containerd[1532]: time="2025-08-13T00:48:12.896246859Z" level=info msg="received exit event container_id:\"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\" id:\"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\" pid:3239 exited_at:{seconds:1755046092 nanos:895894620}"
Aug 13 00:48:13.444762 containerd[1532]: time="2025-08-13T00:48:13.444713943Z" level=info msg="CreateContainer within sandbox \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:48:13.559630 containerd[1532]: time="2025-08-13T00:48:13.559553226Z" level=info msg="Container 855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:48:13.566414 containerd[1532]: time="2025-08-13T00:48:13.566372793Z" level=info msg="CreateContainer within sandbox \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\""
Aug 13 00:48:13.567269 containerd[1532]: time="2025-08-13T00:48:13.567213736Z" level=info msg="StartContainer for \"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\""
Aug 13 00:48:13.568126 containerd[1532]: time="2025-08-13T00:48:13.568099145Z" level=info msg="connecting to shim 855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322" address="unix:///run/containerd/s/52ee9920ca510d8cf5c5914dd79decd770e32f563f671376454d5cc8b163bfaa" protocol=ttrpc version=3
Aug 13 00:48:13.596614 systemd[1]: Started cri-containerd-855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322.scope - libcontainer container 855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322.
Aug 13 00:48:13.599372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749-rootfs.mount: Deactivated successfully.
Aug 13 00:48:13.636506 systemd[1]: cri-containerd-855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322.scope: Deactivated successfully.
Aug 13 00:48:13.637967 containerd[1532]: time="2025-08-13T00:48:13.637929174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\" id:\"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\" pid:3282 exited_at:{seconds:1755046093 nanos:637493198}"
Aug 13 00:48:13.641356 containerd[1532]: time="2025-08-13T00:48:13.641300676Z" level=info msg="received exit event container_id:\"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\" id:\"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\" pid:3282 exited_at:{seconds:1755046093 nanos:637493198}"
Aug 13 00:48:13.653771 containerd[1532]: time="2025-08-13T00:48:13.653730857Z" level=info msg="StartContainer for \"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\" returns successfully"
Aug 13 00:48:13.670170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322-rootfs.mount: Deactivated successfully.
Aug 13 00:48:13.856581 containerd[1532]: time="2025-08-13T00:48:13.856461563Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:48:13.857354 containerd[1532]: time="2025-08-13T00:48:13.857326192Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Aug 13 00:48:13.858469 containerd[1532]: time="2025-08-13T00:48:13.858399791Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:48:13.859584 containerd[1532]: time="2025-08-13T00:48:13.859551958Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.503404315s"
Aug 13 00:48:13.859584 containerd[1532]: time="2025-08-13T00:48:13.859587154Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 00:48:13.861391 containerd[1532]: time="2025-08-13T00:48:13.861364702Z" level=info msg="CreateContainer within sandbox \"7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 00:48:13.868790 containerd[1532]: time="2025-08-13T00:48:13.868752703Z" level=info msg="Container da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:48:13.875704 containerd[1532]: time="2025-08-13T00:48:13.875665925Z" level=info msg="CreateContainer within sandbox \"7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\""
Aug 13 00:48:13.876161 containerd[1532]: time="2025-08-13T00:48:13.876124093Z" level=info msg="StartContainer for \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\""
Aug 13 00:48:13.876934 containerd[1532]: time="2025-08-13T00:48:13.876902791Z" level=info msg="connecting to shim da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634" address="unix:///run/containerd/s/327e5301c95f2502e12bde3f04505b246c1ba733f401de563ff34a9018c32b2d" protocol=ttrpc version=3
Aug 13 00:48:13.898582 systemd[1]: Started cri-containerd-da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634.scope - libcontainer container da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634.
Aug 13 00:48:13.930123 containerd[1532]: time="2025-08-13T00:48:13.930075777Z" level=info msg="StartContainer for \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\" returns successfully"
Aug 13 00:48:14.468638 containerd[1532]: time="2025-08-13T00:48:14.468579753Z" level=info msg="CreateContainer within sandbox \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:48:14.481440 kubelet[2721]: I0813 00:48:14.481367 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-h5j5s" podStartSLOduration=2.193632576 podStartE2EDuration="16.48134269s" podCreationTimestamp="2025-08-13 00:47:58 +0000 UTC" firstStartedPulling="2025-08-13 00:47:59.57246476 +0000 UTC m=+6.399472165" lastFinishedPulling="2025-08-13 00:48:13.860174864 +0000 UTC m=+20.687182279" observedRunningTime="2025-08-13 00:48:14.470736272 +0000 UTC m=+21.297743687" watchObservedRunningTime="2025-08-13 00:48:14.48134269 +0000 UTC m=+21.308350105"
Aug 13 00:48:14.492732 containerd[1532]: time="2025-08-13T00:48:14.492654749Z" level=info msg="Container 8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:48:14.506553 containerd[1532]: time="2025-08-13T00:48:14.505725031Z" level=info msg="CreateContainer within sandbox \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\""
Aug 13 00:48:14.507489 containerd[1532]: time="2025-08-13T00:48:14.507462073Z" level=info msg="StartContainer for \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\""
Aug 13 00:48:14.509620 containerd[1532]: time="2025-08-13T00:48:14.509597641Z" level=info msg="connecting to shim 8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4" address="unix:///run/containerd/s/52ee9920ca510d8cf5c5914dd79decd770e32f563f671376454d5cc8b163bfaa" protocol=ttrpc version=3
Aug 13 00:48:14.540705 systemd[1]: Started cri-containerd-8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4.scope - libcontainer container 8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4.
Aug 13 00:48:14.591478 containerd[1532]: time="2025-08-13T00:48:14.589147903Z" level=info msg="StartContainer for \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" returns successfully"
Aug 13 00:48:14.748368 containerd[1532]: time="2025-08-13T00:48:14.748242996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" id:\"28cf37bfeee42917402eb126f8985d33e57a09e0eabdd11b6fbcdbeec3e3008b\" pid:3392 exited_at:{seconds:1755046094 nanos:747034814}"
Aug 13 00:48:14.785469 kubelet[2721]: I0813 00:48:14.785401 2721 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Aug 13 00:48:14.825659 systemd[1]: Created slice kubepods-burstable-pod58a46504_b0fd_4f1b_a808_93114d992c89.slice - libcontainer container kubepods-burstable-pod58a46504_b0fd_4f1b_a808_93114d992c89.slice.
Aug 13 00:48:14.833253 systemd[1]: Created slice kubepods-burstable-podf4a91365_2138_4d7a_9758_ead86885239d.slice - libcontainer container kubepods-burstable-podf4a91365_2138_4d7a_9758_ead86885239d.slice.
Aug 13 00:48:14.987319 kubelet[2721]: I0813 00:48:14.987147 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58a46504-b0fd-4f1b-a808-93114d992c89-config-volume\") pod \"coredns-7c65d6cfc9-dcbgs\" (UID: \"58a46504-b0fd-4f1b-a808-93114d992c89\") " pod="kube-system/coredns-7c65d6cfc9-dcbgs"
Aug 13 00:48:14.987319 kubelet[2721]: I0813 00:48:14.987212 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnft9\" (UniqueName: \"kubernetes.io/projected/58a46504-b0fd-4f1b-a808-93114d992c89-kube-api-access-hnft9\") pod \"coredns-7c65d6cfc9-dcbgs\" (UID: \"58a46504-b0fd-4f1b-a808-93114d992c89\") " pod="kube-system/coredns-7c65d6cfc9-dcbgs"
Aug 13 00:48:14.987319 kubelet[2721]: I0813 00:48:14.987240 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvh2n\" (UniqueName: \"kubernetes.io/projected/f4a91365-2138-4d7a-9758-ead86885239d-kube-api-access-zvh2n\") pod \"coredns-7c65d6cfc9-f5v6j\" (UID: \"f4a91365-2138-4d7a-9758-ead86885239d\") " pod="kube-system/coredns-7c65d6cfc9-f5v6j"
Aug 13 00:48:14.987319 kubelet[2721]: I0813 00:48:14.987258 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a91365-2138-4d7a-9758-ead86885239d-config-volume\") pod \"coredns-7c65d6cfc9-f5v6j\" (UID: \"f4a91365-2138-4d7a-9758-ead86885239d\") " pod="kube-system/coredns-7c65d6cfc9-f5v6j"
Aug 13 00:48:15.142686 containerd[1532]: time="2025-08-13T00:48:15.142558165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f5v6j,Uid:f4a91365-2138-4d7a-9758-ead86885239d,Namespace:kube-system,Attempt:0,}"
Aug 13 00:48:15.147070 containerd[1532]: time="2025-08-13T00:48:15.147021954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dcbgs,Uid:58a46504-b0fd-4f1b-a808-93114d992c89,Namespace:kube-system,Attempt:0,}"
Aug 13 00:48:15.489065 kubelet[2721]: I0813 00:48:15.488990 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d65w4" podStartSLOduration=6.670716795 podStartE2EDuration="17.488972224s" podCreationTimestamp="2025-08-13 00:47:58 +0000 UTC" firstStartedPulling="2025-08-13 00:47:59.537514196 +0000 UTC m=+6.364521601" lastFinishedPulling="2025-08-13 00:48:10.355769615 +0000 UTC m=+17.182777030" observedRunningTime="2025-08-13 00:48:15.48870817 +0000 UTC m=+22.315715585" watchObservedRunningTime="2025-08-13 00:48:15.488972224 +0000 UTC m=+22.315979639"
Aug 13 00:48:17.672428 systemd-networkd[1461]: cilium_host: Link UP
Aug 13 00:48:17.672665 systemd-networkd[1461]: cilium_net: Link UP
Aug 13 00:48:17.673438 systemd-networkd[1461]: cilium_net: Gained carrier
Aug 13 00:48:17.673682 systemd-networkd[1461]: cilium_host: Gained carrier
Aug 13 00:48:17.788631 systemd-networkd[1461]: cilium_vxlan: Link UP
Aug 13 00:48:17.788642 systemd-networkd[1461]: cilium_vxlan: Gained carrier
Aug 13 00:48:18.031477 kernel: NET: Registered PF_ALG protocol family
Aug 13 00:48:18.045621 systemd-networkd[1461]: cilium_host: Gained IPv6LL
Aug 13 00:48:18.157649 systemd-networkd[1461]: cilium_net: Gained IPv6LL
Aug 13 00:48:18.734025 systemd-networkd[1461]: lxc_health: Link UP
Aug 13 00:48:18.744131 systemd-networkd[1461]: lxc_health: Gained carrier
Aug 13 00:48:18.921495 kernel: eth0: renamed from tmped529
Aug 13 00:48:18.924090 systemd-networkd[1461]: lxc8525dccc6971: Link UP
Aug 13 00:48:18.925575 systemd-networkd[1461]: lxc8525dccc6971: Gained carrier
Aug 13 00:48:18.929425 systemd-networkd[1461]: lxc67fc84a5faac: Link UP
Aug 13 00:48:18.940492 kernel: eth0: renamed from tmp76549
Aug 13 00:48:18.944043 systemd-networkd[1461]: lxc67fc84a5faac: Gained carrier
Aug 13 00:48:19.560772 systemd-networkd[1461]: cilium_vxlan: Gained IPv6LL
Aug 13 00:48:20.005694 systemd-networkd[1461]: lxc67fc84a5faac: Gained IPv6LL
Aug 13 00:48:20.415744 systemd[1]: Started sshd@7-10.0.0.114:22-10.0.0.1:38154.service - OpenSSH per-connection server daemon (10.0.0.1:38154).
Aug 13 00:48:20.454019 systemd-networkd[1461]: lxc_health: Gained IPv6LL
Aug 13 00:48:20.488623 sshd[3859]: Accepted publickey for core from 10.0.0.1 port 38154 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:20.490639 sshd-session[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:20.496275 systemd-logind[1509]: New session 8 of user core.
Aug 13 00:48:20.507678 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 00:48:20.642942 sshd[3861]: Connection closed by 10.0.0.1 port 38154
Aug 13 00:48:20.643231 sshd-session[3859]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:20.646690 systemd[1]: sshd@7-10.0.0.114:22-10.0.0.1:38154.service: Deactivated successfully.
Aug 13 00:48:20.649011 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 00:48:20.650743 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit.
Aug 13 00:48:20.652117 systemd-logind[1509]: Removed session 8.
Aug 13 00:48:20.709693 systemd-networkd[1461]: lxc8525dccc6971: Gained IPv6LL
Aug 13 00:48:22.933329 containerd[1532]: time="2025-08-13T00:48:22.933261990Z" level=info msg="connecting to shim 765496ef0b2a133561bc818923ac6cc72a1fd10052398b8f41627a0009cc41f9" address="unix:///run/containerd/s/d594258e66aa4e436a8baaf1215439e87f8cd8655c82aaa34be12846bd0f0def" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:48:22.944274 containerd[1532]: time="2025-08-13T00:48:22.944216934Z" level=info msg="connecting to shim ed529f7214e774a4d70032a854cffc881d74ca0ecec1feb3fc297ed8ac2b1947" address="unix:///run/containerd/s/468c855542abcc2051a195e803fb90c94c8bcce469317e701ef01a7c8e10978e" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:48:22.963977 systemd[1]: Started cri-containerd-765496ef0b2a133561bc818923ac6cc72a1fd10052398b8f41627a0009cc41f9.scope - libcontainer container 765496ef0b2a133561bc818923ac6cc72a1fd10052398b8f41627a0009cc41f9.
Aug 13 00:48:22.967989 systemd[1]: Started cri-containerd-ed529f7214e774a4d70032a854cffc881d74ca0ecec1feb3fc297ed8ac2b1947.scope - libcontainer container ed529f7214e774a4d70032a854cffc881d74ca0ecec1feb3fc297ed8ac2b1947.
Aug 13 00:48:22.980513 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:48:22.983280 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:48:23.139109 containerd[1532]: time="2025-08-13T00:48:23.139061697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f5v6j,Uid:f4a91365-2138-4d7a-9758-ead86885239d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed529f7214e774a4d70032a854cffc881d74ca0ecec1feb3fc297ed8ac2b1947\""
Aug 13 00:48:23.141380 containerd[1532]: time="2025-08-13T00:48:23.141343813Z" level=info msg="CreateContainer within sandbox \"ed529f7214e774a4d70032a854cffc881d74ca0ecec1feb3fc297ed8ac2b1947\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:48:23.206708 containerd[1532]: time="2025-08-13T00:48:23.206569723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dcbgs,Uid:58a46504-b0fd-4f1b-a808-93114d992c89,Namespace:kube-system,Attempt:0,} returns sandbox id \"765496ef0b2a133561bc818923ac6cc72a1fd10052398b8f41627a0009cc41f9\""
Aug 13 00:48:23.209143 containerd[1532]: time="2025-08-13T00:48:23.209104141Z" level=info msg="CreateContainer within sandbox \"765496ef0b2a133561bc818923ac6cc72a1fd10052398b8f41627a0009cc41f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:48:23.946484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127049319.mount: Deactivated successfully.
Aug 13 00:48:24.053893 containerd[1532]: time="2025-08-13T00:48:24.053812918Z" level=info msg="Container 8e056ec97c8be8b635f298414ae0bde1e536e03265fd89543479c76ae0999b5c: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:48:24.056109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount868338597.mount: Deactivated successfully.
Aug 13 00:48:24.071092 containerd[1532]: time="2025-08-13T00:48:24.071004498Z" level=info msg="Container 49549e6bb8cd0e5e6a7cd4937038394a357556d15a495bd7fa40f5a15a5d5172: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:48:24.081188 containerd[1532]: time="2025-08-13T00:48:24.081138936Z" level=info msg="CreateContainer within sandbox \"ed529f7214e774a4d70032a854cffc881d74ca0ecec1feb3fc297ed8ac2b1947\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e056ec97c8be8b635f298414ae0bde1e536e03265fd89543479c76ae0999b5c\""
Aug 13 00:48:24.081879 containerd[1532]: time="2025-08-13T00:48:24.081850098Z" level=info msg="StartContainer for \"8e056ec97c8be8b635f298414ae0bde1e536e03265fd89543479c76ae0999b5c\""
Aug 13 00:48:24.083751 containerd[1532]: time="2025-08-13T00:48:24.083724710Z" level=info msg="connecting to shim 8e056ec97c8be8b635f298414ae0bde1e536e03265fd89543479c76ae0999b5c" address="unix:///run/containerd/s/468c855542abcc2051a195e803fb90c94c8bcce469317e701ef01a7c8e10978e" protocol=ttrpc version=3
Aug 13 00:48:24.087374 containerd[1532]: time="2025-08-13T00:48:24.087332329Z" level=info msg="CreateContainer within sandbox \"765496ef0b2a133561bc818923ac6cc72a1fd10052398b8f41627a0009cc41f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"49549e6bb8cd0e5e6a7cd4937038394a357556d15a495bd7fa40f5a15a5d5172\""
Aug 13 00:48:24.088802 containerd[1532]: time="2025-08-13T00:48:24.087802009Z" level=info msg="StartContainer for \"49549e6bb8cd0e5e6a7cd4937038394a357556d15a495bd7fa40f5a15a5d5172\""
Aug 13 00:48:24.088802 containerd[1532]: time="2025-08-13T00:48:24.088735618Z" level=info msg="connecting to shim 49549e6bb8cd0e5e6a7cd4937038394a357556d15a495bd7fa40f5a15a5d5172" address="unix:///run/containerd/s/d594258e66aa4e436a8baaf1215439e87f8cd8655c82aaa34be12846bd0f0def" protocol=ttrpc version=3
Aug 13 00:48:24.108598 systemd[1]: Started cri-containerd-8e056ec97c8be8b635f298414ae0bde1e536e03265fd89543479c76ae0999b5c.scope - libcontainer container 8e056ec97c8be8b635f298414ae0bde1e536e03265fd89543479c76ae0999b5c.
Aug 13 00:48:24.113213 systemd[1]: Started cri-containerd-49549e6bb8cd0e5e6a7cd4937038394a357556d15a495bd7fa40f5a15a5d5172.scope - libcontainer container 49549e6bb8cd0e5e6a7cd4937038394a357556d15a495bd7fa40f5a15a5d5172.
Aug 13 00:48:24.156681 containerd[1532]: time="2025-08-13T00:48:24.156636366Z" level=info msg="StartContainer for \"49549e6bb8cd0e5e6a7cd4937038394a357556d15a495bd7fa40f5a15a5d5172\" returns successfully"
Aug 13 00:48:24.157086 containerd[1532]: time="2025-08-13T00:48:24.157060831Z" level=info msg="StartContainer for \"8e056ec97c8be8b635f298414ae0bde1e536e03265fd89543479c76ae0999b5c\" returns successfully"
Aug 13 00:48:24.537984 kubelet[2721]: I0813 00:48:24.537885 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f5v6j" podStartSLOduration=26.537864158 podStartE2EDuration="26.537864158s" podCreationTimestamp="2025-08-13 00:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:48:24.536943393 +0000 UTC m=+31.363950828" watchObservedRunningTime="2025-08-13 00:48:24.537864158 +0000 UTC m=+31.364871573"
Aug 13 00:48:24.550478 kubelet[2721]: I0813 00:48:24.550371 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dcbgs" podStartSLOduration=26.550351434 podStartE2EDuration="26.550351434s" podCreationTimestamp="2025-08-13 00:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:48:24.549852348 +0000 UTC m=+31.376859763" watchObservedRunningTime="2025-08-13 00:48:24.550351434 +0000 UTC m=+31.377358839"
Aug 13 00:48:24.925038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004044506.mount: Deactivated successfully.
Aug 13 00:48:25.673227 systemd[1]: Started sshd@8-10.0.0.114:22-10.0.0.1:38168.service - OpenSSH per-connection server daemon (10.0.0.1:38168).
Aug 13 00:48:25.737322 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 38168 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:25.739019 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:25.744148 systemd-logind[1509]: New session 9 of user core.
Aug 13 00:48:25.749635 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 00:48:25.871747 sshd[4053]: Connection closed by 10.0.0.1 port 38168
Aug 13 00:48:25.872075 sshd-session[4051]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:25.875249 systemd[1]: sshd@8-10.0.0.114:22-10.0.0.1:38168.service: Deactivated successfully.
Aug 13 00:48:25.877323 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 00:48:25.878890 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit.
Aug 13 00:48:25.880601 systemd-logind[1509]: Removed session 9.
Aug 13 00:48:27.256059 kubelet[2721]: I0813 00:48:27.255679 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:48:30.897051 systemd[1]: Started sshd@9-10.0.0.114:22-10.0.0.1:53692.service - OpenSSH per-connection server daemon (10.0.0.1:53692).
Aug 13 00:48:31.066348 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 53692 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:31.068020 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:31.072718 systemd-logind[1509]: New session 10 of user core.
Aug 13 00:48:31.086673 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 00:48:31.296588 sshd[4072]: Connection closed by 10.0.0.1 port 53692
Aug 13 00:48:31.296943 sshd-session[4070]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:31.300921 systemd[1]: sshd@9-10.0.0.114:22-10.0.0.1:53692.service: Deactivated successfully.
Aug 13 00:48:31.302965 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:48:31.303917 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:48:31.305187 systemd-logind[1509]: Removed session 10.
Aug 13 00:48:36.312054 systemd[1]: Started sshd@10-10.0.0.114:22-10.0.0.1:53704.service - OpenSSH per-connection server daemon (10.0.0.1:53704).
Aug 13 00:48:36.356703 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 53704 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:36.358758 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:36.364343 systemd-logind[1509]: New session 11 of user core.
Aug 13 00:48:36.377711 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 00:48:36.499470 sshd[4091]: Connection closed by 10.0.0.1 port 53704
Aug 13 00:48:36.499918 sshd-session[4089]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:36.512824 systemd[1]: sshd@10-10.0.0.114:22-10.0.0.1:53704.service: Deactivated successfully.
Aug 13 00:48:36.515429 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:48:36.516488 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:48:36.520118 systemd[1]: Started sshd@11-10.0.0.114:22-10.0.0.1:53720.service - OpenSSH per-connection server daemon (10.0.0.1:53720).
Aug 13 00:48:36.521393 systemd-logind[1509]: Removed session 11.
Aug 13 00:48:36.589828 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 53720 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:36.591382 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:36.596727 systemd-logind[1509]: New session 12 of user core.
Aug 13 00:48:36.603923 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 00:48:36.850465 sshd[4107]: Connection closed by 10.0.0.1 port 53720
Aug 13 00:48:36.850876 sshd-session[4105]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:36.862063 systemd[1]: sshd@11-10.0.0.114:22-10.0.0.1:53720.service: Deactivated successfully.
Aug 13 00:48:36.865192 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:48:36.868824 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:48:36.874887 systemd[1]: Started sshd@12-10.0.0.114:22-10.0.0.1:53722.service - OpenSSH per-connection server daemon (10.0.0.1:53722).
Aug 13 00:48:36.877836 systemd-logind[1509]: Removed session 12.
Aug 13 00:48:36.924515 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 53722 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:36.926130 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:36.931617 systemd-logind[1509]: New session 13 of user core.
Aug 13 00:48:36.942626 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 00:48:37.080475 sshd[4121]: Connection closed by 10.0.0.1 port 53722
Aug 13 00:48:37.080859 sshd-session[4119]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:37.086073 systemd[1]: sshd@12-10.0.0.114:22-10.0.0.1:53722.service: Deactivated successfully.
Aug 13 00:48:37.088311 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:48:37.089262 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:48:37.091178 systemd-logind[1509]: Removed session 13.
Aug 13 00:48:42.097538 systemd[1]: Started sshd@13-10.0.0.114:22-10.0.0.1:47584.service - OpenSSH per-connection server daemon (10.0.0.1:47584).
Aug 13 00:48:42.147641 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 47584 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:42.149149 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:42.154204 systemd-logind[1509]: New session 14 of user core.
Aug 13 00:48:42.166700 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 00:48:42.285581 sshd[4136]: Connection closed by 10.0.0.1 port 47584
Aug 13 00:48:42.285936 sshd-session[4134]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:42.290661 systemd[1]: sshd@13-10.0.0.114:22-10.0.0.1:47584.service: Deactivated successfully.
Aug 13 00:48:42.293134 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:48:42.294164 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:48:42.295794 systemd-logind[1509]: Removed session 14.
Aug 13 00:48:47.302202 systemd[1]: Started sshd@14-10.0.0.114:22-10.0.0.1:47592.service - OpenSSH per-connection server daemon (10.0.0.1:47592).
Aug 13 00:48:47.353488 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 47592 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:47.354962 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:47.359334 systemd-logind[1509]: New session 15 of user core.
Aug 13 00:48:47.367566 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:48:47.496509 sshd[4151]: Connection closed by 10.0.0.1 port 47592
Aug 13 00:48:47.496802 sshd-session[4149]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:47.500880 systemd[1]: sshd@14-10.0.0.114:22-10.0.0.1:47592.service: Deactivated successfully.
Aug 13 00:48:47.502943 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:48:47.503851 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:48:47.505103 systemd-logind[1509]: Removed session 15.
Aug 13 00:48:52.516681 systemd[1]: Started sshd@15-10.0.0.114:22-10.0.0.1:44030.service - OpenSSH per-connection server daemon (10.0.0.1:44030).
Aug 13 00:48:52.575888 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 44030 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:52.577745 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:52.582877 systemd-logind[1509]: New session 16 of user core.
Aug 13 00:48:52.590691 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:48:52.716862 sshd[4166]: Connection closed by 10.0.0.1 port 44030
Aug 13 00:48:52.717255 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:52.728780 systemd[1]: sshd@15-10.0.0.114:22-10.0.0.1:44030.service: Deactivated successfully.
Aug 13 00:48:52.731073 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:48:52.731982 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:48:52.735242 systemd[1]: Started sshd@16-10.0.0.114:22-10.0.0.1:44040.service - OpenSSH per-connection server daemon (10.0.0.1:44040).
Aug 13 00:48:52.736214 systemd-logind[1509]: Removed session 16.
Aug 13 00:48:52.788558 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 44040 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:52.789995 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:52.794911 systemd-logind[1509]: New session 17 of user core.
Aug 13 00:48:52.804617 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:48:53.429836 sshd[4181]: Connection closed by 10.0.0.1 port 44040
Aug 13 00:48:53.430204 sshd-session[4179]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:53.444345 systemd[1]: sshd@16-10.0.0.114:22-10.0.0.1:44040.service: Deactivated successfully.
Aug 13 00:48:53.447041 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:48:53.448069 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:48:53.453245 systemd[1]: Started sshd@17-10.0.0.114:22-10.0.0.1:44054.service - OpenSSH per-connection server daemon (10.0.0.1:44054).
Aug 13 00:48:53.454472 systemd-logind[1509]: Removed session 17.
Aug 13 00:48:53.509407 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 44054 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:53.512271 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:53.520838 systemd-logind[1509]: New session 18 of user core.
Aug 13 00:48:53.541953 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:48:55.307979 sshd[4198]: Connection closed by 10.0.0.1 port 44054
Aug 13 00:48:55.308687 sshd-session[4196]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:55.322972 systemd[1]: sshd@17-10.0.0.114:22-10.0.0.1:44054.service: Deactivated successfully.
Aug 13 00:48:55.326414 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:48:55.329165 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:48:55.334589 systemd[1]: Started sshd@18-10.0.0.114:22-10.0.0.1:44068.service - OpenSSH per-connection server daemon (10.0.0.1:44068).
Aug 13 00:48:55.335941 systemd-logind[1509]: Removed session 18.
Aug 13 00:48:55.398235 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 44068 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:55.400225 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:55.405921 systemd-logind[1509]: New session 19 of user core.
Aug 13 00:48:55.422770 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:48:55.640559 sshd[4221]: Connection closed by 10.0.0.1 port 44068
Aug 13 00:48:55.642820 sshd-session[4219]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:55.653094 systemd[1]: sshd@18-10.0.0.114:22-10.0.0.1:44068.service: Deactivated successfully.
Aug 13 00:48:55.655967 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:48:55.657369 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:48:55.661243 systemd[1]: Started sshd@19-10.0.0.114:22-10.0.0.1:44072.service - OpenSSH per-connection server daemon (10.0.0.1:44072).
Aug 13 00:48:55.661943 systemd-logind[1509]: Removed session 19.
Aug 13 00:48:55.723247 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 44072 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:48:55.725534 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:55.731361 systemd-logind[1509]: New session 20 of user core.
Aug 13 00:48:55.745637 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:48:55.856623 sshd[4235]: Connection closed by 10.0.0.1 port 44072
Aug 13 00:48:55.856967 sshd-session[4233]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:55.861101 systemd[1]: sshd@19-10.0.0.114:22-10.0.0.1:44072.service: Deactivated successfully.
Aug 13 00:48:55.863092 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:48:55.864020 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:48:55.865387 systemd-logind[1509]: Removed session 20.
Aug 13 00:49:00.869476 systemd[1]: Started sshd@20-10.0.0.114:22-10.0.0.1:50292.service - OpenSSH per-connection server daemon (10.0.0.1:50292).
Aug 13 00:49:00.909563 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 50292 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:00.911075 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:00.915703 systemd-logind[1509]: New session 21 of user core.
Aug 13 00:49:00.927684 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:49:01.040889 sshd[4253]: Connection closed by 10.0.0.1 port 50292
Aug 13 00:49:01.041258 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:01.047188 systemd[1]: sshd@20-10.0.0.114:22-10.0.0.1:50292.service: Deactivated successfully.
Aug 13 00:49:01.049892 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:49:01.050906 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:49:01.052894 systemd-logind[1509]: Removed session 21.
Aug 13 00:49:06.054317 systemd[1]: Started sshd@21-10.0.0.114:22-10.0.0.1:50304.service - OpenSSH per-connection server daemon (10.0.0.1:50304).
Aug 13 00:49:06.115032 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 50304 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:49:06.116643 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:06.121631 systemd-logind[1509]: New session 22 of user core. Aug 13 00:49:06.132626 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:49:06.243617 sshd[4271]: Connection closed by 10.0.0.1 port 50304 Aug 13 00:49:06.243957 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:06.248598 systemd[1]: sshd@21-10.0.0.114:22-10.0.0.1:50304.service: Deactivated successfully. Aug 13 00:49:06.250767 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:49:06.251569 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:49:06.253098 systemd-logind[1509]: Removed session 22. Aug 13 00:49:11.262590 systemd[1]: Started sshd@22-10.0.0.114:22-10.0.0.1:58500.service - OpenSSH per-connection server daemon (10.0.0.1:58500). Aug 13 00:49:11.303927 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 58500 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:49:11.314497 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:11.319432 systemd-logind[1509]: New session 23 of user core. Aug 13 00:49:11.333609 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 00:49:11.551591 sshd[4287]: Connection closed by 10.0.0.1 port 58500 Aug 13 00:49:11.551918 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:11.556729 systemd[1]: sshd@22-10.0.0.114:22-10.0.0.1:58500.service: Deactivated successfully. Aug 13 00:49:11.559125 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:49:11.559951 systemd-logind[1509]: Session 23 logged out. Waiting for processes to exit. 
Aug 13 00:49:11.561663 systemd-logind[1509]: Removed session 23.
Aug 13 00:49:16.569317 systemd[1]: Started sshd@23-10.0.0.114:22-10.0.0.1:58512.service - OpenSSH per-connection server daemon (10.0.0.1:58512).
Aug 13 00:49:16.624462 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 58512 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:16.626162 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:16.631195 systemd-logind[1509]: New session 24 of user core.
Aug 13 00:49:16.646615 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:49:16.758507 sshd[4303]: Connection closed by 10.0.0.1 port 58512
Aug 13 00:49:16.759018 sshd-session[4301]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:16.769145 systemd[1]: sshd@23-10.0.0.114:22-10.0.0.1:58512.service: Deactivated successfully.
Aug 13 00:49:16.771571 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:49:16.772510 systemd-logind[1509]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:49:16.776024 systemd[1]: Started sshd@24-10.0.0.114:22-10.0.0.1:58518.service - OpenSSH per-connection server daemon (10.0.0.1:58518).
Aug 13 00:49:16.776973 systemd-logind[1509]: Removed session 24.
Aug 13 00:49:16.828946 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 58518 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:16.830586 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:16.835467 systemd-logind[1509]: New session 25 of user core.
Aug 13 00:49:16.849619 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 00:49:18.214555 containerd[1532]: time="2025-08-13T00:49:18.214493737Z" level=info msg="StopContainer for \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\" with timeout 30 (s)"
Aug 13 00:49:18.222814 containerd[1532]: time="2025-08-13T00:49:18.222770183Z" level=info msg="Stop container \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\" with signal terminated"
Aug 13 00:49:18.238278 systemd[1]: cri-containerd-da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634.scope: Deactivated successfully.
Aug 13 00:49:18.240765 containerd[1532]: time="2025-08-13T00:49:18.240714004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\" id:\"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\" pid:3323 exited_at:{seconds:1755046158 nanos:239462539}"
Aug 13 00:49:18.240843 containerd[1532]: time="2025-08-13T00:49:18.240813683Z" level=info msg="received exit event container_id:\"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\" id:\"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\" pid:3323 exited_at:{seconds:1755046158 nanos:239462539}"
Aug 13 00:49:18.255585 containerd[1532]: time="2025-08-13T00:49:18.255236041Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:49:18.255585 containerd[1532]: time="2025-08-13T00:49:18.255321874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" id:\"920d2fc37ff8f720e00df7283fe2712f430a840cbc69e7d6ca421cab490f7c44\" pid:4348 exited_at:{seconds:1755046158 nanos:254957733}"
Aug 13 00:49:18.257630 containerd[1532]: time="2025-08-13T00:49:18.257592163Z" level=info msg="StopContainer for \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" with timeout 2 (s)"
Aug 13 00:49:18.257891 containerd[1532]: time="2025-08-13T00:49:18.257869008Z" level=info msg="Stop container \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" with signal terminated"
Aug 13 00:49:18.267107 systemd-networkd[1461]: lxc_health: Link DOWN
Aug 13 00:49:18.267118 systemd-networkd[1461]: lxc_health: Lost carrier
Aug 13 00:49:18.270185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634-rootfs.mount: Deactivated successfully.
Aug 13 00:49:18.288923 systemd[1]: cri-containerd-8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4.scope: Deactivated successfully.
Aug 13 00:49:18.289356 systemd[1]: cri-containerd-8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4.scope: Consumed 7.022s CPU time, 124.3M memory peak, 328K read from disk, 13.3M written to disk.
Aug 13 00:49:18.290975 containerd[1532]: time="2025-08-13T00:49:18.290757238Z" level=info msg="received exit event container_id:\"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" id:\"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" pid:3359 exited_at:{seconds:1755046158 nanos:290561346}"
Aug 13 00:49:18.290975 containerd[1532]: time="2025-08-13T00:49:18.290940035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" id:\"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" pid:3359 exited_at:{seconds:1755046158 nanos:290561346}"
Aug 13 00:49:18.291776 containerd[1532]: time="2025-08-13T00:49:18.291744050Z" level=info msg="StopContainer for \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\" returns successfully"
Aug 13 00:49:18.292563 containerd[1532]: time="2025-08-13T00:49:18.292527307Z" level=info msg="StopPodSandbox for \"7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae\""
Aug 13 00:49:18.292695 containerd[1532]: time="2025-08-13T00:49:18.292609663Z" level=info msg="Container to stop \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:49:18.300767 systemd[1]: cri-containerd-7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae.scope: Deactivated successfully.
Aug 13 00:49:18.302655 containerd[1532]: time="2025-08-13T00:49:18.302601924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae\" id:\"7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae\" pid:2911 exit_status:137 exited_at:{seconds:1755046158 nanos:301845218}"
Aug 13 00:49:18.318188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4-rootfs.mount: Deactivated successfully.
Aug 13 00:49:18.329090 containerd[1532]: time="2025-08-13T00:49:18.329029505Z" level=info msg="StopContainer for \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" returns successfully"
Aug 13 00:49:18.329935 containerd[1532]: time="2025-08-13T00:49:18.329914915Z" level=info msg="StopPodSandbox for \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\""
Aug 13 00:49:18.330003 containerd[1532]: time="2025-08-13T00:49:18.329988273Z" level=info msg="Container to stop \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:49:18.330032 containerd[1532]: time="2025-08-13T00:49:18.330006028Z" level=info msg="Container to stop \"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:49:18.330032 containerd[1532]: time="2025-08-13T00:49:18.330020194Z" level=info msg="Container to stop \"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:49:18.330032 containerd[1532]: time="2025-08-13T00:49:18.330028660Z" level=info msg="Container to stop \"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:49:18.330120 containerd[1532]: time="2025-08-13T00:49:18.330037277Z" level=info msg="Container to stop \"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:49:18.332632 kubelet[2721]: E0813 00:49:18.332566 2721 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:49:18.339690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae-rootfs.mount: Deactivated successfully.
Aug 13 00:49:18.340960 systemd[1]: cri-containerd-0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef.scope: Deactivated successfully.
Aug 13 00:49:18.347560 containerd[1532]: time="2025-08-13T00:49:18.347510906Z" level=info msg="shim disconnected" id=7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae namespace=k8s.io
Aug 13 00:49:18.348068 containerd[1532]: time="2025-08-13T00:49:18.347859167Z" level=warning msg="cleaning up after shim disconnected" id=7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae namespace=k8s.io
Aug 13 00:49:18.361373 containerd[1532]: time="2025-08-13T00:49:18.347912377Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:49:18.370662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef-rootfs.mount: Deactivated successfully.
Aug 13 00:49:18.375879 containerd[1532]: time="2025-08-13T00:49:18.375811390Z" level=info msg="shim disconnected" id=0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef namespace=k8s.io
Aug 13 00:49:18.375879 containerd[1532]: time="2025-08-13T00:49:18.375870542Z" level=warning msg="cleaning up after shim disconnected" id=0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef namespace=k8s.io
Aug 13 00:49:18.376101 containerd[1532]: time="2025-08-13T00:49:18.375879919Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:49:18.393691 containerd[1532]: time="2025-08-13T00:49:18.393565189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" id:\"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" pid:2902 exit_status:137 exited_at:{seconds:1755046158 nanos:340773851}"
Aug 13 00:49:18.396795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae-shm.mount: Deactivated successfully.
Aug 13 00:49:18.396910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef-shm.mount: Deactivated successfully.
Aug 13 00:49:18.409881 containerd[1532]: time="2025-08-13T00:49:18.409834703Z" level=info msg="TearDown network for sandbox \"7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae\" successfully"
Aug 13 00:49:18.409881 containerd[1532]: time="2025-08-13T00:49:18.409876262Z" level=info msg="StopPodSandbox for \"7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae\" returns successfully"
Aug 13 00:49:18.410603 containerd[1532]: time="2025-08-13T00:49:18.410568316Z" level=info msg="TearDown network for sandbox \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" successfully"
Aug 13 00:49:18.410603 containerd[1532]: time="2025-08-13T00:49:18.410598653Z" level=info msg="StopPodSandbox for \"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" returns successfully"
Aug 13 00:49:18.412490 containerd[1532]: time="2025-08-13T00:49:18.412327514Z" level=info msg="received exit event sandbox_id:\"0ab615e47fb452b9955191ea8d94e974963d63f1fc1effa5624c48e30a6730ef\" exit_status:137 exited_at:{seconds:1755046158 nanos:340773851}"
Aug 13 00:49:18.412490 containerd[1532]: time="2025-08-13T00:49:18.412435869Z" level=info msg="received exit event sandbox_id:\"7c47606176286e1b7dcfe48f339d5e84e360d9f21eb8d1367c6324263d9c69ae\" exit_status:137 exited_at:{seconds:1755046158 nanos:301845218}"
Aug 13 00:49:18.557103 kubelet[2721]: I0813 00:49:18.556905 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-host-proc-sys-kernel\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557103 kubelet[2721]: I0813 00:49:18.556964 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-bpf-maps\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557103 kubelet[2721]: I0813 00:49:18.556994 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0022fdea-b6b3-4b2e-a435-150ec8018ca4-hubble-tls\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557103 kubelet[2721]: I0813 00:49:18.557009 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-lib-modules\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557103 kubelet[2721]: I0813 00:49:18.557025 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-etc-cni-netd\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557103 kubelet[2721]: I0813 00:49:18.557046 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-host-proc-sys-net\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557522 kubelet[2721]: I0813 00:49:18.557062 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cilium-config-path\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557522 kubelet[2721]: I0813 00:49:18.557075 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cilium-cgroup\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557522 kubelet[2721]: I0813 00:49:18.557073 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:49:18.557522 kubelet[2721]: I0813 00:49:18.557089 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-hostproc\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557522 kubelet[2721]: I0813 00:49:18.557076 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:49:18.557668 kubelet[2721]: I0813 00:49:18.557122 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cni-path" (OuterVolumeSpecName: "cni-path") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:49:18.557668 kubelet[2721]: I0813 00:49:18.557103 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cni-path\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557668 kubelet[2721]: I0813 00:49:18.557184 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl82s\" (UniqueName: \"kubernetes.io/projected/0022fdea-b6b3-4b2e-a435-150ec8018ca4-kube-api-access-dl82s\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557668 kubelet[2721]: I0813 00:49:18.557206 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26300763-a663-47e3-997d-be33d222eba4-cilium-config-path\") pod \"26300763-a663-47e3-997d-be33d222eba4\" (UID: \"26300763-a663-47e3-997d-be33d222eba4\") "
Aug 13 00:49:18.557668 kubelet[2721]: I0813 00:49:18.557227 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0022fdea-b6b3-4b2e-a435-150ec8018ca4-clustermesh-secrets\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557668 kubelet[2721]: I0813 00:49:18.557244 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s575h\" (UniqueName: \"kubernetes.io/projected/26300763-a663-47e3-997d-be33d222eba4-kube-api-access-s575h\") pod \"26300763-a663-47e3-997d-be33d222eba4\" (UID: \"26300763-a663-47e3-997d-be33d222eba4\") "
Aug 13 00:49:18.557828 kubelet[2721]: I0813 00:49:18.557262 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-xtables-lock\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557828 kubelet[2721]: I0813 00:49:18.557284 2721 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cilium-run\") pod \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\" (UID: \"0022fdea-b6b3-4b2e-a435-150ec8018ca4\") "
Aug 13 00:49:18.557828 kubelet[2721]: I0813 00:49:18.557312 2721 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cni-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.557828 kubelet[2721]: I0813 00:49:18.557326 2721 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.557828 kubelet[2721]: I0813 00:49:18.557337 2721 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-bpf-maps\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.560571 kubelet[2721]: I0813 00:49:18.557126 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:49:18.560680 kubelet[2721]: I0813 00:49:18.557215 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:49:18.560837 kubelet[2721]: I0813 00:49:18.557226 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-hostproc" (OuterVolumeSpecName: "hostproc") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:49:18.560837 kubelet[2721]: I0813 00:49:18.557241 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:49:18.560837 kubelet[2721]: I0813 00:49:18.557354 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:49:18.560837 kubelet[2721]: I0813 00:49:18.558879 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:49:18.561052 kubelet[2721]: I0813 00:49:18.560795 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:49:18.562734 kubelet[2721]: I0813 00:49:18.562652 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0022fdea-b6b3-4b2e-a435-150ec8018ca4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:49:18.562854 kubelet[2721]: I0813 00:49:18.562788 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0022fdea-b6b3-4b2e-a435-150ec8018ca4-kube-api-access-dl82s" (OuterVolumeSpecName: "kube-api-access-dl82s") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "kube-api-access-dl82s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:49:18.563571 kubelet[2721]: I0813 00:49:18.563542 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0022fdea-b6b3-4b2e-a435-150ec8018ca4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 00:49:18.563919 kubelet[2721]: I0813 00:49:18.563882 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26300763-a663-47e3-997d-be33d222eba4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "26300763-a663-47e3-997d-be33d222eba4" (UID: "26300763-a663-47e3-997d-be33d222eba4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:49:18.564048 kubelet[2721]: I0813 00:49:18.564014 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26300763-a663-47e3-997d-be33d222eba4-kube-api-access-s575h" (OuterVolumeSpecName: "kube-api-access-s575h") pod "26300763-a663-47e3-997d-be33d222eba4" (UID: "26300763-a663-47e3-997d-be33d222eba4"). InnerVolumeSpecName "kube-api-access-s575h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:49:18.565954 kubelet[2721]: I0813 00:49:18.565916 2721 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0022fdea-b6b3-4b2e-a435-150ec8018ca4" (UID: "0022fdea-b6b3-4b2e-a435-150ec8018ca4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:49:18.623484 kubelet[2721]: I0813 00:49:18.623437 2721 scope.go:117] "RemoveContainer" containerID="da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634"
Aug 13 00:49:18.626050 containerd[1532]: time="2025-08-13T00:49:18.626003223Z" level=info msg="RemoveContainer for \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\""
Aug 13 00:49:18.630193 systemd[1]: Removed slice kubepods-besteffort-pod26300763_a663_47e3_997d_be33d222eba4.slice - libcontainer container kubepods-besteffort-pod26300763_a663_47e3_997d_be33d222eba4.slice.
Aug 13 00:49:18.633235 containerd[1532]: time="2025-08-13T00:49:18.633201412Z" level=info msg="RemoveContainer for \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\" returns successfully"
Aug 13 00:49:18.633643 kubelet[2721]: I0813 00:49:18.633572 2721 scope.go:117] "RemoveContainer" containerID="da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634"
Aug 13 00:49:18.633836 containerd[1532]: time="2025-08-13T00:49:18.633784228Z" level=error msg="ContainerStatus for \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\": not found"
Aug 13 00:49:18.636624 systemd[1]: Removed slice kubepods-burstable-pod0022fdea_b6b3_4b2e_a435_150ec8018ca4.slice - libcontainer container kubepods-burstable-pod0022fdea_b6b3_4b2e_a435_150ec8018ca4.slice.
Aug 13 00:49:18.636728 systemd[1]: kubepods-burstable-pod0022fdea_b6b3_4b2e_a435_150ec8018ca4.slice: Consumed 7.137s CPU time, 124.6M memory peak, 340K read from disk, 13.3M written to disk.
Aug 13 00:49:18.638508 kubelet[2721]: E0813 00:49:18.638470 2721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\": not found" containerID="da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634"
Aug 13 00:49:18.638683 kubelet[2721]: I0813 00:49:18.638523 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634"} err="failed to get container status \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\": rpc error: code = NotFound desc = an error occurred when try to find container \"da0108a9384e97c0d95655919079954f3644fed0e2949d80e909b9cd5d6c1634\": not found"
Aug 13 00:49:18.638720 kubelet[2721]: I0813 00:49:18.638686 2721 scope.go:117] "RemoveContainer" containerID="8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4"
Aug 13 00:49:18.640595 containerd[1532]: time="2025-08-13T00:49:18.640561779Z" level=info msg="RemoveContainer for \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\""
Aug 13 00:49:18.651242 containerd[1532]: time="2025-08-13T00:49:18.651185920Z" level=info msg="RemoveContainer for \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" returns successfully"
Aug 13 00:49:18.651476 kubelet[2721]: I0813 00:49:18.651433 2721 scope.go:117] "RemoveContainer" containerID="855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322"
Aug 13 00:49:18.654668 containerd[1532]: time="2025-08-13T00:49:18.654111592Z" level=info msg="RemoveContainer for \"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\""
Aug 13 00:49:18.658154 kubelet[2721]: I0813 00:49:18.658111 2721 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.658154 kubelet[2721]: I0813 00:49:18.658141 2721 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-hostproc\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.658154 kubelet[2721]: I0813 00:49:18.658151 2721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl82s\" (UniqueName: \"kubernetes.io/projected/0022fdea-b6b3-4b2e-a435-150ec8018ca4-kube-api-access-dl82s\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.658154 kubelet[2721]: I0813 00:49:18.658163 2721 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26300763-a663-47e3-997d-be33d222eba4-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.658154 kubelet[2721]: I0813 00:49:18.658171 2721 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s575h\" (UniqueName: \"kubernetes.io/projected/26300763-a663-47e3-997d-be33d222eba4-kube-api-access-s575h\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.658154 kubelet[2721]: I0813 00:49:18.658179 2721 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0022fdea-b6b3-4b2e-a435-150ec8018ca4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.658468 kubelet[2721]: I0813 00:49:18.658189 2721 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-xtables-lock\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.658468 kubelet[2721]: I0813 00:49:18.658196 2721 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cilium-run\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.658468 kubelet[2721]: I0813 00:49:18.658204 2721 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0022fdea-b6b3-4b2e-a435-150ec8018ca4-hubble-tls\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.658468 kubelet[2721]: I0813 00:49:18.658212 2721 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-lib-modules\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.658468 kubelet[2721]: I0813 00:49:18.658220 2721 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.658468 kubelet[2721]: I0813 00:49:18.658228 2721 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0022fdea-b6b3-4b2e-a435-150ec8018ca4-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.658468 kubelet[2721]: I0813 00:49:18.658236 2721 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0022fdea-b6b3-4b2e-a435-150ec8018ca4-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Aug 13 00:49:18.659911 containerd[1532]: time="2025-08-13T00:49:18.659872745Z" level=info msg="RemoveContainer for \"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\" returns successfully"
Aug 13 00:49:18.660115 kubelet[2721]: I0813 00:49:18.660088 2721 scope.go:117] "RemoveContainer" containerID="655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749"
Aug 13 00:49:18.662400 containerd[1532]: time="2025-08-13T00:49:18.662372188Z" level=info msg="RemoveContainer for \"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\""
Aug 13 00:49:18.666600 containerd[1532]: time="2025-08-13T00:49:18.666562119Z" level=info msg="RemoveContainer for \"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\" returns successfully"
Aug 13 00:49:18.666784 kubelet[2721]: I0813 00:49:18.666750 2721 scope.go:117] "RemoveContainer" containerID="6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e"
Aug 13 00:49:18.668082 containerd[1532]: time="2025-08-13T00:49:18.668048058Z" level=info msg="RemoveContainer for \"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\""
Aug 13 00:49:18.671715 containerd[1532]: time="2025-08-13T00:49:18.671671725Z" level=info msg="RemoveContainer for \"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\" returns successfully"
Aug 13 00:49:18.671832 kubelet[2721]: I0813 00:49:18.671806 2721 scope.go:117] "RemoveContainer" containerID="14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0"
Aug 13 00:49:18.673059 containerd[1532]: time="2025-08-13T00:49:18.673015044Z" level=info msg="RemoveContainer for \"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\""
Aug 13 00:49:18.676275 containerd[1532]: time="2025-08-13T00:49:18.676234553Z" level=info msg="RemoveContainer for \"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\" returns successfully"
Aug 13 00:49:18.676425 kubelet[2721]: I0813 00:49:18.676395 2721 scope.go:117] "RemoveContainer" containerID="8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4"
Aug 13 00:49:18.676661 containerd[1532]: time="2025-08-13T00:49:18.676623582Z" level=error msg="ContainerStatus for \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\": not found"
Aug 13 00:49:18.676777 kubelet[2721]: E0813 00:49:18.676752 2721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an
error occurred when try to find container \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\": not found" containerID="8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4" Aug 13 00:49:18.676812 kubelet[2721]: I0813 00:49:18.676781 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4"} err="failed to get container status \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d2c9524fe6b1534660add42877728acbfa68ff216dcb959c4778ecd0db908b4\": not found" Aug 13 00:49:18.676812 kubelet[2721]: I0813 00:49:18.676807 2721 scope.go:117] "RemoveContainer" containerID="855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322" Aug 13 00:49:18.677021 containerd[1532]: time="2025-08-13T00:49:18.676954319Z" level=error msg="ContainerStatus for \"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\": not found" Aug 13 00:49:18.677169 kubelet[2721]: E0813 00:49:18.677138 2721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\": not found" containerID="855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322" Aug 13 00:49:18.677205 kubelet[2721]: I0813 00:49:18.677179 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322"} err="failed to get container status \"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"855fa4cfc41630f9c9500c8038a7378804d5af65eb03b289192e301499998322\": not found" Aug 13 00:49:18.677233 kubelet[2721]: I0813 00:49:18.677208 2721 scope.go:117] "RemoveContainer" containerID="655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749" Aug 13 00:49:18.677430 containerd[1532]: time="2025-08-13T00:49:18.677386379Z" level=error msg="ContainerStatus for \"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\": not found" Aug 13 00:49:18.677575 kubelet[2721]: E0813 00:49:18.677548 2721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\": not found" containerID="655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749" Aug 13 00:49:18.677619 kubelet[2721]: I0813 00:49:18.677574 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749"} err="failed to get container status \"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\": rpc error: code = NotFound desc = an error occurred when try to find container \"655a344e4c30a54463b9b3f1c5d823c7542be41312d674c95a2313915e1dd749\": not found" Aug 13 00:49:18.677619 kubelet[2721]: I0813 00:49:18.677591 2721 scope.go:117] "RemoveContainer" containerID="6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e" Aug 13 00:49:18.677763 containerd[1532]: time="2025-08-13T00:49:18.677731484Z" level=error msg="ContainerStatus for \"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\": not found" Aug 13 00:49:18.677874 kubelet[2721]: E0813 00:49:18.677852 2721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\": not found" containerID="6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e" Aug 13 00:49:18.677915 kubelet[2721]: I0813 00:49:18.677873 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e"} err="failed to get container status \"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\": rpc error: code = NotFound desc = an error occurred when try to find container \"6603a9fad480e5942017d7f3539fe30d9ad20b0c408aefa06e1289f6ee84223e\": not found" Aug 13 00:49:18.677915 kubelet[2721]: I0813 00:49:18.677888 2721 scope.go:117] "RemoveContainer" containerID="14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0" Aug 13 00:49:18.678083 containerd[1532]: time="2025-08-13T00:49:18.678052633Z" level=error msg="ContainerStatus for \"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\": not found" Aug 13 00:49:18.678258 kubelet[2721]: E0813 00:49:18.678208 2721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\": not found" containerID="14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0" Aug 13 00:49:18.678258 kubelet[2721]: I0813 00:49:18.678247 2721 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0"} err="failed to get container status \"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"14d54a5c8957992d3b69e94a8e3aa2fa482690bcc41d9a5b073b03927eec23e0\": not found" Aug 13 00:49:19.268576 systemd[1]: var-lib-kubelet-pods-26300763\x2da663\x2d47e3\x2d997d\x2dbe33d222eba4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds575h.mount: Deactivated successfully. Aug 13 00:49:19.268705 systemd[1]: var-lib-kubelet-pods-0022fdea\x2db6b3\x2d4b2e\x2da435\x2d150ec8018ca4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddl82s.mount: Deactivated successfully. Aug 13 00:49:19.268780 systemd[1]: var-lib-kubelet-pods-0022fdea\x2db6b3\x2d4b2e\x2da435\x2d150ec8018ca4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:49:19.268870 systemd[1]: var-lib-kubelet-pods-0022fdea\x2db6b3\x2d4b2e\x2da435\x2d150ec8018ca4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:49:19.275223 kubelet[2721]: I0813 00:49:19.275180 2721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0022fdea-b6b3-4b2e-a435-150ec8018ca4" path="/var/lib/kubelet/pods/0022fdea-b6b3-4b2e-a435-150ec8018ca4/volumes" Aug 13 00:49:19.276110 kubelet[2721]: I0813 00:49:19.276080 2721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26300763-a663-47e3-997d-be33d222eba4" path="/var/lib/kubelet/pods/26300763-a663-47e3-997d-be33d222eba4/volumes" Aug 13 00:49:20.189303 sshd[4319]: Connection closed by 10.0.0.1 port 58518 Aug 13 00:49:20.189722 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:20.200512 systemd[1]: sshd@24-10.0.0.114:22-10.0.0.1:58518.service: Deactivated successfully. 
Aug 13 00:49:20.202854 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:49:20.203809 systemd-logind[1509]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:49:20.208127 systemd[1]: Started sshd@25-10.0.0.114:22-10.0.0.1:33868.service - OpenSSH per-connection server daemon (10.0.0.1:33868). Aug 13 00:49:20.209310 systemd-logind[1509]: Removed session 25. Aug 13 00:49:20.257277 sshd[4472]: Accepted publickey for core from 10.0.0.1 port 33868 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:49:20.258758 sshd-session[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:20.264420 systemd-logind[1509]: New session 26 of user core. Aug 13 00:49:20.271619 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 00:49:20.914298 sshd[4474]: Connection closed by 10.0.0.1 port 33868 Aug 13 00:49:20.916672 sshd-session[4472]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:20.931717 systemd[1]: sshd@25-10.0.0.114:22-10.0.0.1:33868.service: Deactivated successfully. Aug 13 00:49:20.934553 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:49:20.936963 systemd-logind[1509]: Session 26 logged out. Waiting for processes to exit. 
Aug 13 00:49:20.939373 kubelet[2721]: E0813 00:49:20.939339 2721 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0022fdea-b6b3-4b2e-a435-150ec8018ca4" containerName="mount-cgroup" Aug 13 00:49:20.939373 kubelet[2721]: E0813 00:49:20.939368 2721 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0022fdea-b6b3-4b2e-a435-150ec8018ca4" containerName="apply-sysctl-overwrites" Aug 13 00:49:20.939373 kubelet[2721]: E0813 00:49:20.939375 2721 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0022fdea-b6b3-4b2e-a435-150ec8018ca4" containerName="mount-bpf-fs" Aug 13 00:49:20.940502 kubelet[2721]: E0813 00:49:20.939381 2721 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0022fdea-b6b3-4b2e-a435-150ec8018ca4" containerName="clean-cilium-state" Aug 13 00:49:20.940502 kubelet[2721]: E0813 00:49:20.939387 2721 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26300763-a663-47e3-997d-be33d222eba4" containerName="cilium-operator" Aug 13 00:49:20.940502 kubelet[2721]: E0813 00:49:20.939393 2721 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0022fdea-b6b3-4b2e-a435-150ec8018ca4" containerName="cilium-agent" Aug 13 00:49:20.940502 kubelet[2721]: I0813 00:49:20.939422 2721 memory_manager.go:354] "RemoveStaleState removing state" podUID="26300763-a663-47e3-997d-be33d222eba4" containerName="cilium-operator" Aug 13 00:49:20.940502 kubelet[2721]: I0813 00:49:20.939428 2721 memory_manager.go:354] "RemoveStaleState removing state" podUID="0022fdea-b6b3-4b2e-a435-150ec8018ca4" containerName="cilium-agent" Aug 13 00:49:20.940083 systemd[1]: Started sshd@26-10.0.0.114:22-10.0.0.1:33880.service - OpenSSH per-connection server daemon (10.0.0.1:33880). Aug 13 00:49:20.947530 systemd-logind[1509]: Removed session 26. 
Aug 13 00:49:20.964123 systemd[1]: Created slice kubepods-burstable-pod0176f6f9_912e_4393_baa8_01b5cdb8f90f.slice - libcontainer container kubepods-burstable-pod0176f6f9_912e_4393_baa8_01b5cdb8f90f.slice. Aug 13 00:49:20.969750 kubelet[2721]: I0813 00:49:20.969696 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0176f6f9-912e-4393-baa8-01b5cdb8f90f-hostproc\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.970063 kubelet[2721]: I0813 00:49:20.969924 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0176f6f9-912e-4393-baa8-01b5cdb8f90f-host-proc-sys-kernel\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.970163 kubelet[2721]: I0813 00:49:20.970149 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0176f6f9-912e-4393-baa8-01b5cdb8f90f-cilium-run\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.970969 kubelet[2721]: I0813 00:49:20.970696 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0176f6f9-912e-4393-baa8-01b5cdb8f90f-cilium-ipsec-secrets\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.970969 kubelet[2721]: I0813 00:49:20.970718 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0176f6f9-912e-4393-baa8-01b5cdb8f90f-cni-path\") pod \"cilium-bm97j\" (UID: 
\"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.970969 kubelet[2721]: I0813 00:49:20.970732 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0176f6f9-912e-4393-baa8-01b5cdb8f90f-etc-cni-netd\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.970969 kubelet[2721]: I0813 00:49:20.970755 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh9dm\" (UniqueName: \"kubernetes.io/projected/0176f6f9-912e-4393-baa8-01b5cdb8f90f-kube-api-access-qh9dm\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.970969 kubelet[2721]: I0813 00:49:20.970770 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0176f6f9-912e-4393-baa8-01b5cdb8f90f-clustermesh-secrets\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.970969 kubelet[2721]: I0813 00:49:20.970792 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0176f6f9-912e-4393-baa8-01b5cdb8f90f-hubble-tls\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.971146 kubelet[2721]: I0813 00:49:20.970808 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0176f6f9-912e-4393-baa8-01b5cdb8f90f-lib-modules\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.971146 kubelet[2721]: I0813 
00:49:20.970822 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0176f6f9-912e-4393-baa8-01b5cdb8f90f-host-proc-sys-net\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.971146 kubelet[2721]: I0813 00:49:20.970837 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0176f6f9-912e-4393-baa8-01b5cdb8f90f-cilium-cgroup\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.971146 kubelet[2721]: I0813 00:49:20.970853 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0176f6f9-912e-4393-baa8-01b5cdb8f90f-cilium-config-path\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.971146 kubelet[2721]: I0813 00:49:20.970867 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0176f6f9-912e-4393-baa8-01b5cdb8f90f-bpf-maps\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:20.971146 kubelet[2721]: I0813 00:49:20.970883 2721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0176f6f9-912e-4393-baa8-01b5cdb8f90f-xtables-lock\") pod \"cilium-bm97j\" (UID: \"0176f6f9-912e-4393-baa8-01b5cdb8f90f\") " pod="kube-system/cilium-bm97j" Aug 13 00:49:21.028380 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 33880 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:49:21.030053 
sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:21.035468 systemd-logind[1509]: New session 27 of user core. Aug 13 00:49:21.047601 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 00:49:21.100158 sshd[4488]: Connection closed by 10.0.0.1 port 33880 Aug 13 00:49:21.100538 sshd-session[4486]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:21.111683 systemd[1]: sshd@26-10.0.0.114:22-10.0.0.1:33880.service: Deactivated successfully. Aug 13 00:49:21.113792 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:49:21.114663 systemd-logind[1509]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:49:21.117965 systemd[1]: Started sshd@27-10.0.0.114:22-10.0.0.1:33896.service - OpenSSH per-connection server daemon (10.0.0.1:33896). Aug 13 00:49:21.118750 systemd-logind[1509]: Removed session 27. Aug 13 00:49:21.180992 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 33896 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:49:21.182801 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:21.188347 systemd-logind[1509]: New session 28 of user core. Aug 13 00:49:21.196753 systemd[1]: Started session-28.scope - Session 28 of User core. 
Aug 13 00:49:21.272878 containerd[1532]: time="2025-08-13T00:49:21.272834293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bm97j,Uid:0176f6f9-912e-4393-baa8-01b5cdb8f90f,Namespace:kube-system,Attempt:0,}" Aug 13 00:49:21.291675 containerd[1532]: time="2025-08-13T00:49:21.291612571Z" level=info msg="connecting to shim 8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e" address="unix:///run/containerd/s/275dd60c026f715d5a529bcfcfb36618641fc354cc3f3a4d7bed0e4f8753af25" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:49:21.324656 systemd[1]: Started cri-containerd-8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e.scope - libcontainer container 8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e. Aug 13 00:49:21.350989 containerd[1532]: time="2025-08-13T00:49:21.350947644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bm97j,Uid:0176f6f9-912e-4393-baa8-01b5cdb8f90f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e\"" Aug 13 00:49:21.353711 containerd[1532]: time="2025-08-13T00:49:21.353680235Z" level=info msg="CreateContainer within sandbox \"8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:49:21.372405 containerd[1532]: time="2025-08-13T00:49:21.372354076Z" level=info msg="Container f1f35d0f3ee3f6de3b2f3eeb4f228f6cfd58743a39db76a8aa56bcfec1897d68: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:21.379968 containerd[1532]: time="2025-08-13T00:49:21.379918068Z" level=info msg="CreateContainer within sandbox \"8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1f35d0f3ee3f6de3b2f3eeb4f228f6cfd58743a39db76a8aa56bcfec1897d68\"" Aug 13 00:49:21.380539 containerd[1532]: time="2025-08-13T00:49:21.380504090Z" level=info 
msg="StartContainer for \"f1f35d0f3ee3f6de3b2f3eeb4f228f6cfd58743a39db76a8aa56bcfec1897d68\"" Aug 13 00:49:21.381402 containerd[1532]: time="2025-08-13T00:49:21.381377054Z" level=info msg="connecting to shim f1f35d0f3ee3f6de3b2f3eeb4f228f6cfd58743a39db76a8aa56bcfec1897d68" address="unix:///run/containerd/s/275dd60c026f715d5a529bcfcfb36618641fc354cc3f3a4d7bed0e4f8753af25" protocol=ttrpc version=3 Aug 13 00:49:21.405596 systemd[1]: Started cri-containerd-f1f35d0f3ee3f6de3b2f3eeb4f228f6cfd58743a39db76a8aa56bcfec1897d68.scope - libcontainer container f1f35d0f3ee3f6de3b2f3eeb4f228f6cfd58743a39db76a8aa56bcfec1897d68. Aug 13 00:49:21.436372 containerd[1532]: time="2025-08-13T00:49:21.436246830Z" level=info msg="StartContainer for \"f1f35d0f3ee3f6de3b2f3eeb4f228f6cfd58743a39db76a8aa56bcfec1897d68\" returns successfully" Aug 13 00:49:21.451853 systemd[1]: cri-containerd-f1f35d0f3ee3f6de3b2f3eeb4f228f6cfd58743a39db76a8aa56bcfec1897d68.scope: Deactivated successfully. Aug 13 00:49:21.453497 containerd[1532]: time="2025-08-13T00:49:21.453410106Z" level=info msg="received exit event container_id:\"f1f35d0f3ee3f6de3b2f3eeb4f228f6cfd58743a39db76a8aa56bcfec1897d68\" id:\"f1f35d0f3ee3f6de3b2f3eeb4f228f6cfd58743a39db76a8aa56bcfec1897d68\" pid:4566 exited_at:{seconds:1755046161 nanos:453085270}" Aug 13 00:49:21.453836 containerd[1532]: time="2025-08-13T00:49:21.453703823Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1f35d0f3ee3f6de3b2f3eeb4f228f6cfd58743a39db76a8aa56bcfec1897d68\" id:\"f1f35d0f3ee3f6de3b2f3eeb4f228f6cfd58743a39db76a8aa56bcfec1897d68\" pid:4566 exited_at:{seconds:1755046161 nanos:453085270}" Aug 13 00:49:21.641411 containerd[1532]: time="2025-08-13T00:49:21.641354432Z" level=info msg="CreateContainer within sandbox \"8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:49:21.649243 containerd[1532]: time="2025-08-13T00:49:21.649175963Z" 
level=info msg="Container 530068bfb768f504e363eead87cdf7a40bc4fc394a184b7f65cbbf1510f0ce4b: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:21.657472 containerd[1532]: time="2025-08-13T00:49:21.656898707Z" level=info msg="CreateContainer within sandbox \"8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"530068bfb768f504e363eead87cdf7a40bc4fc394a184b7f65cbbf1510f0ce4b\"" Aug 13 00:49:21.660111 containerd[1532]: time="2025-08-13T00:49:21.659857467Z" level=info msg="StartContainer for \"530068bfb768f504e363eead87cdf7a40bc4fc394a184b7f65cbbf1510f0ce4b\"" Aug 13 00:49:21.662096 containerd[1532]: time="2025-08-13T00:49:21.662060425Z" level=info msg="connecting to shim 530068bfb768f504e363eead87cdf7a40bc4fc394a184b7f65cbbf1510f0ce4b" address="unix:///run/containerd/s/275dd60c026f715d5a529bcfcfb36618641fc354cc3f3a4d7bed0e4f8753af25" protocol=ttrpc version=3 Aug 13 00:49:21.685717 systemd[1]: Started cri-containerd-530068bfb768f504e363eead87cdf7a40bc4fc394a184b7f65cbbf1510f0ce4b.scope - libcontainer container 530068bfb768f504e363eead87cdf7a40bc4fc394a184b7f65cbbf1510f0ce4b. Aug 13 00:49:21.718889 containerd[1532]: time="2025-08-13T00:49:21.718848989Z" level=info msg="StartContainer for \"530068bfb768f504e363eead87cdf7a40bc4fc394a184b7f65cbbf1510f0ce4b\" returns successfully" Aug 13 00:49:21.726801 systemd[1]: cri-containerd-530068bfb768f504e363eead87cdf7a40bc4fc394a184b7f65cbbf1510f0ce4b.scope: Deactivated successfully. 
Aug 13 00:49:21.728213 containerd[1532]: time="2025-08-13T00:49:21.728151387Z" level=info msg="received exit event container_id:\"530068bfb768f504e363eead87cdf7a40bc4fc394a184b7f65cbbf1510f0ce4b\" id:\"530068bfb768f504e363eead87cdf7a40bc4fc394a184b7f65cbbf1510f0ce4b\" pid:4610 exited_at:{seconds:1755046161 nanos:727937502}" Aug 13 00:49:21.728213 containerd[1532]: time="2025-08-13T00:49:21.728209699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"530068bfb768f504e363eead87cdf7a40bc4fc394a184b7f65cbbf1510f0ce4b\" id:\"530068bfb768f504e363eead87cdf7a40bc4fc394a184b7f65cbbf1510f0ce4b\" pid:4610 exited_at:{seconds:1755046161 nanos:727937502}" Aug 13 00:49:22.078275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800143656.mount: Deactivated successfully. Aug 13 00:49:22.645793 containerd[1532]: time="2025-08-13T00:49:22.645713640Z" level=info msg="CreateContainer within sandbox \"8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:49:22.659303 containerd[1532]: time="2025-08-13T00:49:22.659250463Z" level=info msg="Container 2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:22.669678 containerd[1532]: time="2025-08-13T00:49:22.669602760Z" level=info msg="CreateContainer within sandbox \"8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1\"" Aug 13 00:49:22.670346 containerd[1532]: time="2025-08-13T00:49:22.670303517Z" level=info msg="StartContainer for \"2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1\"" Aug 13 00:49:22.672196 containerd[1532]: time="2025-08-13T00:49:22.672160929Z" level=info msg="connecting to shim 2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1" 
address="unix:///run/containerd/s/275dd60c026f715d5a529bcfcfb36618641fc354cc3f3a4d7bed0e4f8753af25" protocol=ttrpc version=3 Aug 13 00:49:22.697579 systemd[1]: Started cri-containerd-2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1.scope - libcontainer container 2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1. Aug 13 00:49:22.743713 systemd[1]: cri-containerd-2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1.scope: Deactivated successfully. Aug 13 00:49:22.744182 containerd[1532]: time="2025-08-13T00:49:22.743969612Z" level=info msg="StartContainer for \"2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1\" returns successfully" Aug 13 00:49:22.745556 containerd[1532]: time="2025-08-13T00:49:22.745516295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1\" id:\"2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1\" pid:4654 exited_at:{seconds:1755046162 nanos:744780420}" Aug 13 00:49:22.746304 containerd[1532]: time="2025-08-13T00:49:22.745651471Z" level=info msg="received exit event container_id:\"2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1\" id:\"2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1\" pid:4654 exited_at:{seconds:1755046162 nanos:744780420}" Aug 13 00:49:22.771151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f9c96323b58c19da457711485d34fc5e0a50609b407acbdcddf1ad02a6e5cc1-rootfs.mount: Deactivated successfully. 
Aug 13 00:49:23.333320 kubelet[2721]: E0813 00:49:23.333262 2721 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:49:23.648479 containerd[1532]: time="2025-08-13T00:49:23.648319782Z" level=info msg="CreateContainer within sandbox \"8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:49:23.665948 containerd[1532]: time="2025-08-13T00:49:23.665893270Z" level=info msg="Container 0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:23.673167 containerd[1532]: time="2025-08-13T00:49:23.673127842Z" level=info msg="CreateContainer within sandbox \"8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7\"" Aug 13 00:49:23.673674 containerd[1532]: time="2025-08-13T00:49:23.673647177Z" level=info msg="StartContainer for \"0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7\"" Aug 13 00:49:23.674457 containerd[1532]: time="2025-08-13T00:49:23.674400695Z" level=info msg="connecting to shim 0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7" address="unix:///run/containerd/s/275dd60c026f715d5a529bcfcfb36618641fc354cc3f3a4d7bed0e4f8753af25" protocol=ttrpc version=3 Aug 13 00:49:23.693669 systemd[1]: Started cri-containerd-0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7.scope - libcontainer container 0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7. Aug 13 00:49:23.720750 systemd[1]: cri-containerd-0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7.scope: Deactivated successfully. 
Aug 13 00:49:23.722145 containerd[1532]: time="2025-08-13T00:49:23.722113447Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7\" id:\"0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7\" pid:4695 exited_at:{seconds:1755046163 nanos:721610494}" Aug 13 00:49:23.722823 containerd[1532]: time="2025-08-13T00:49:23.722793707Z" level=info msg="received exit event container_id:\"0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7\" id:\"0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7\" pid:4695 exited_at:{seconds:1755046163 nanos:721610494}" Aug 13 00:49:23.730327 containerd[1532]: time="2025-08-13T00:49:23.730274005Z" level=info msg="StartContainer for \"0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7\" returns successfully" Aug 13 00:49:23.744470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0605143b083438f069e79c9f59af60065381150dde77d078a6862fecbd7722f7-rootfs.mount: Deactivated successfully. 
Aug 13 00:49:24.653631 containerd[1532]: time="2025-08-13T00:49:24.653575001Z" level=info msg="CreateContainer within sandbox \"8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:49:24.665719 containerd[1532]: time="2025-08-13T00:49:24.665674229Z" level=info msg="Container 0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:49:24.674553 containerd[1532]: time="2025-08-13T00:49:24.674517607Z" level=info msg="CreateContainer within sandbox \"8d220524c7f1772e6c61a65541ae7d070372715b7fb5bc0b0e6c3fef3b85c49e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613\""
Aug 13 00:49:24.675070 containerd[1532]: time="2025-08-13T00:49:24.674989271Z" level=info msg="StartContainer for \"0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613\""
Aug 13 00:49:24.675894 containerd[1532]: time="2025-08-13T00:49:24.675866703Z" level=info msg="connecting to shim 0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613" address="unix:///run/containerd/s/275dd60c026f715d5a529bcfcfb36618641fc354cc3f3a4d7bed0e4f8753af25" protocol=ttrpc version=3
Aug 13 00:49:24.695595 systemd[1]: Started cri-containerd-0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613.scope - libcontainer container 0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613.
Aug 13 00:49:24.735918 containerd[1532]: time="2025-08-13T00:49:24.735866129Z" level=info msg="StartContainer for \"0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613\" returns successfully"
Aug 13 00:49:24.810266 containerd[1532]: time="2025-08-13T00:49:24.810224013Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613\" id:\"ee10ab8c027e76665de84276edf492209d5460ae2ed4c005016d549b0e9a82d1\" pid:4762 exited_at:{seconds:1755046164 nanos:809950205}"
Aug 13 00:49:25.183501 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Aug 13 00:49:25.437981 kubelet[2721]: I0813 00:49:25.437557 2721 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:49:25Z","lastTransitionTime":"2025-08-13T00:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 00:49:25.944470 kubelet[2721]: I0813 00:49:25.944309 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bm97j" podStartSLOduration=5.944292464 podStartE2EDuration="5.944292464s" podCreationTimestamp="2025-08-13 00:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:49:25.943972728 +0000 UTC m=+92.770980153" watchObservedRunningTime="2025-08-13 00:49:25.944292464 +0000 UTC m=+92.771299879"
Aug 13 00:49:28.141836 containerd[1532]: time="2025-08-13T00:49:28.141742491Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613\" id:\"0eae668d5210c25647c7b4d28ff566ee976f9bbd322eca57ea100c50cb37073c\" pid:5041 exit_status:1 exited_at:{seconds:1755046168 nanos:140682304}"
Aug 13 00:49:29.093668 systemd-networkd[1461]: lxc_health: Link UP
Aug 13 00:49:29.105812 systemd-networkd[1461]: lxc_health: Gained carrier
Aug 13 00:49:30.635058 containerd[1532]: time="2025-08-13T00:49:30.634966670Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613\" id:\"5e104bcdbf2577d09f3416f133bb45378517bf49ad68cacf69f2b5afa67c6f22\" pid:5303 exited_at:{seconds:1755046170 nanos:634178749}"
Aug 13 00:49:30.917698 systemd-networkd[1461]: lxc_health: Gained IPv6LL
Aug 13 00:49:32.813106 containerd[1532]: time="2025-08-13T00:49:32.813045889Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613\" id:\"69808a87e09eb3de12519148dd56eafeeb3b021774fed4902eac56593f205b3f\" pid:5338 exited_at:{seconds:1755046172 nanos:812770277}"
Aug 13 00:49:34.943372 containerd[1532]: time="2025-08-13T00:49:34.943304460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613\" id:\"0d3df98ddb873c3e150ce1c0b5cee208e76be66837d921452c3fee2a21a62a5a\" pid:5363 exited_at:{seconds:1755046174 nanos:942768757}"
Aug 13 00:49:37.101539 containerd[1532]: time="2025-08-13T00:49:37.101479509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a769c86c2ed550f528c2fb9f372d4c9e809a1ed77188f179c0c3c4f55fa5613\" id:\"a9ea859f86208ba3e972661f4430536f2109dc799cfbaa840686c434a2b7c561\" pid:5387 exited_at:{seconds:1755046177 nanos:101123946}"
Aug 13 00:49:37.107688 sshd[4501]: Connection closed by 10.0.0.1 port 33896
Aug 13 00:49:37.108155 sshd-session[4499]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:37.112244 systemd[1]: sshd@27-10.0.0.114:22-10.0.0.1:33896.service: Deactivated successfully.
Aug 13 00:49:37.114108 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:49:37.115074 systemd-logind[1509]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:49:37.116492 systemd-logind[1509]: Removed session 28.