Sep 5 06:04:17.911057 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 5 04:19:33 -00 2025
Sep 5 06:04:17.911102 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4b2174e9c368fa97600991ce20efc370fbbf3ddfce3ea407f50212a1021bd496
Sep 5 06:04:17.911114 kernel: BIOS-provided physical RAM map:
Sep 5 06:04:17.911123 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 5 06:04:17.911132 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 5 06:04:17.911141 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 5 06:04:17.911152 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 5 06:04:17.911162 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 5 06:04:17.911179 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 5 06:04:17.911188 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 5 06:04:17.911197 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 06:04:17.911206 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 5 06:04:17.911215 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 06:04:17.911224 kernel: NX (Execute Disable) protection: active
Sep 5 06:04:17.911239 kernel: APIC: Static calls initialized
Sep 5 06:04:17.911249 kernel: SMBIOS 2.8 present.
Sep 5 06:04:17.911264 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 5 06:04:17.911274 kernel: DMI: Memory slots populated: 1/1
Sep 5 06:04:17.911284 kernel: Hypervisor detected: KVM
Sep 5 06:04:17.911294 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 06:04:17.911304 kernel: kvm-clock: using sched offset of 4418764689 cycles
Sep 5 06:04:17.911315 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 06:04:17.911325 kernel: tsc: Detected 2794.748 MHz processor
Sep 5 06:04:17.911339 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 06:04:17.911350 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 06:04:17.911360 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 5 06:04:17.911371 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 5 06:04:17.911382 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 06:04:17.911392 kernel: Using GB pages for direct mapping
Sep 5 06:04:17.911402 kernel: ACPI: Early table checksum verification disabled
Sep 5 06:04:17.911412 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 5 06:04:17.911423 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:04:17.911437 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:04:17.911447 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:04:17.911457 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 5 06:04:17.911468 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:04:17.911478 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:04:17.911488 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:04:17.911498 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:04:17.911509 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 5 06:04:17.911528 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 5 06:04:17.911539 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 5 06:04:17.911549 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 5 06:04:17.911560 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 5 06:04:17.911570 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 5 06:04:17.911580 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 5 06:04:17.911594 kernel: No NUMA configuration found
Sep 5 06:04:17.911605 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 5 06:04:17.911616 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 5 06:04:17.911627 kernel: Zone ranges:
Sep 5 06:04:17.911637 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 06:04:17.911648 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 5 06:04:17.911658 kernel: Normal empty
Sep 5 06:04:17.911668 kernel: Device empty
Sep 5 06:04:17.911679 kernel: Movable zone start for each node
Sep 5 06:04:17.911689 kernel: Early memory node ranges
Sep 5 06:04:17.911704 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 5 06:04:17.911714 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 5 06:04:17.911725 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 5 06:04:17.911735 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 06:04:17.911746 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 5 06:04:17.911757 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 5 06:04:17.911767 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 06:04:17.911782 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 06:04:17.911793 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 06:04:17.911808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 06:04:17.911818 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 06:04:17.911832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 06:04:17.911843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 06:04:17.911853 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 06:04:17.911864 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 06:04:17.911885 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 06:04:17.911895 kernel: TSC deadline timer available
Sep 5 06:04:17.911906 kernel: CPU topo: Max. logical packages: 1
Sep 5 06:04:17.911921 kernel: CPU topo: Max. logical dies: 1
Sep 5 06:04:17.911931 kernel: CPU topo: Max. dies per package: 1
Sep 5 06:04:17.911941 kernel: CPU topo: Max. threads per core: 1
Sep 5 06:04:17.911952 kernel: CPU topo: Num. cores per package: 4
Sep 5 06:04:17.911963 kernel: CPU topo: Num. threads per package: 4
Sep 5 06:04:17.911973 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 5 06:04:17.911983 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 06:04:17.911994 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 5 06:04:17.912021 kernel: kvm-guest: setup PV sched yield
Sep 5 06:04:17.912038 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 5 06:04:17.912048 kernel: Booting paravirtualized kernel on KVM
Sep 5 06:04:17.912059 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 06:04:17.912070 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 5 06:04:17.912081 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 5 06:04:17.912092 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 5 06:04:17.912102 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 5 06:04:17.912112 kernel: kvm-guest: PV spinlocks enabled
Sep 5 06:04:17.912123 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 06:04:17.912139 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4b2174e9c368fa97600991ce20efc370fbbf3ddfce3ea407f50212a1021bd496
Sep 5 06:04:17.912151 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 06:04:17.912161 kernel: random: crng init done
Sep 5 06:04:17.912172 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 06:04:17.912183 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 06:04:17.912193 kernel: Fallback order for Node 0: 0
Sep 5 06:04:17.912204 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 5 06:04:17.912214 kernel: Policy zone: DMA32
Sep 5 06:04:17.912225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 06:04:17.912239 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 06:04:17.912250 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 5 06:04:17.912260 kernel: ftrace: allocated 157 pages with 5 groups
Sep 5 06:04:17.912271 kernel: Dynamic Preempt: voluntary
Sep 5 06:04:17.912281 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 06:04:17.912293 kernel: rcu: RCU event tracing is enabled.
Sep 5 06:04:17.912304 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 06:04:17.912315 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 06:04:17.912330 kernel: Rude variant of Tasks RCU enabled.
Sep 5 06:04:17.912345 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 06:04:17.912355 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 06:04:17.912366 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 06:04:17.912377 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 06:04:17.912387 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 06:04:17.912398 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 06:04:17.912409 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 5 06:04:17.912419 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 06:04:17.912445 kernel: Console: colour VGA+ 80x25
Sep 5 06:04:17.912456 kernel: printk: legacy console [ttyS0] enabled
Sep 5 06:04:17.912467 kernel: ACPI: Core revision 20240827
Sep 5 06:04:17.912478 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 5 06:04:17.912492 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 06:04:17.912504 kernel: x2apic enabled
Sep 5 06:04:17.912514 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 06:04:17.912529 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 5 06:04:17.912540 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 5 06:04:17.912556 kernel: kvm-guest: setup PV IPIs
Sep 5 06:04:17.912567 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 5 06:04:17.912578 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 06:04:17.912589 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 5 06:04:17.912601 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 06:04:17.912612 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 5 06:04:17.912623 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 5 06:04:17.912634 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 06:04:17.912649 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 06:04:17.912660 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 06:04:17.912671 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 5 06:04:17.912681 kernel: active return thunk: retbleed_return_thunk
Sep 5 06:04:17.912691 kernel: RETBleed: Mitigation: untrained return thunk
Sep 5 06:04:17.912702 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 06:04:17.912713 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 06:04:17.912724 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 5 06:04:17.912736 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 5 06:04:17.912752 kernel: active return thunk: srso_return_thunk
Sep 5 06:04:17.912763 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 5 06:04:17.912775 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 06:04:17.912786 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 06:04:17.912797 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 06:04:17.912808 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 06:04:17.912819 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 5 06:04:17.912830 kernel: Freeing SMP alternatives memory: 32K
Sep 5 06:04:17.912844 kernel: pid_max: default: 32768 minimum: 301
Sep 5 06:04:17.912855 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 5 06:04:17.912866 kernel: landlock: Up and running.
Sep 5 06:04:17.912887 kernel: SELinux: Initializing.
Sep 5 06:04:17.912902 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 06:04:17.912914 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 06:04:17.912925 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 5 06:04:17.912936 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 5 06:04:17.912947 kernel: ... version: 0
Sep 5 06:04:17.912962 kernel: ... bit width: 48
Sep 5 06:04:17.912973 kernel: ... generic registers: 6
Sep 5 06:04:17.912984 kernel: ... value mask: 0000ffffffffffff
Sep 5 06:04:17.912995 kernel: ... max period: 00007fffffffffff
Sep 5 06:04:17.913022 kernel: ... fixed-purpose events: 0
Sep 5 06:04:17.913034 kernel: ... event mask: 000000000000003f
Sep 5 06:04:17.913045 kernel: signal: max sigframe size: 1776
Sep 5 06:04:17.913056 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 06:04:17.913067 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 06:04:17.913079 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 5 06:04:17.913094 kernel: smp: Bringing up secondary CPUs ...
Sep 5 06:04:17.913105 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 06:04:17.913116 kernel: .... node #0, CPUs: #1 #2 #3
Sep 5 06:04:17.913127 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 06:04:17.913138 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 5 06:04:17.913150 kernel: Memory: 2428916K/2571752K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54068K init, 2900K bss, 136904K reserved, 0K cma-reserved)
Sep 5 06:04:17.913161 kernel: devtmpfs: initialized
Sep 5 06:04:17.913172 kernel: x86/mm: Memory block size: 128MB
Sep 5 06:04:17.913183 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 06:04:17.913198 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 06:04:17.913209 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 06:04:17.913220 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 06:04:17.913231 kernel: audit: initializing netlink subsys (disabled)
Sep 5 06:04:17.913242 kernel: audit: type=2000 audit(1757052255.341:1): state=initialized audit_enabled=0 res=1
Sep 5 06:04:17.913253 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 06:04:17.913264 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 06:04:17.913275 kernel: cpuidle: using governor menu
Sep 5 06:04:17.913286 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 06:04:17.913301 kernel: dca service started, version 1.12.1
Sep 5 06:04:17.913312 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 5 06:04:17.913324 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 5 06:04:17.913335 kernel: PCI: Using configuration type 1 for base access
Sep 5 06:04:17.913346 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 5 06:04:17.913357 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 06:04:17.913368 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 06:04:17.913379 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 06:04:17.913393 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 06:04:17.913404 kernel: ACPI: Added _OSI(Module Device)
Sep 5 06:04:17.913415 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 06:04:17.913426 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 06:04:17.913437 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 06:04:17.913448 kernel: ACPI: Interpreter enabled
Sep 5 06:04:17.913459 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 5 06:04:17.913470 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 06:04:17.913481 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 06:04:17.913496 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 06:04:17.913507 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 06:04:17.913518 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 06:04:17.913794 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 06:04:17.913979 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 5 06:04:17.914166 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 5 06:04:17.914183 kernel: PCI host bridge to bus 0000:00
Sep 5 06:04:17.914371 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 06:04:17.914533 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 06:04:17.914686 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 06:04:17.914840 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 5 06:04:17.915003 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 5 06:04:17.915178 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 5 06:04:17.915330 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 06:04:17.915549 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 5 06:04:17.915770 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 5 06:04:17.915954 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 5 06:04:17.916184 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 5 06:04:17.916355 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 5 06:04:17.916519 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 06:04:17.916706 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 5 06:04:17.916895 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 5 06:04:17.917093 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 5 06:04:17.917273 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 5 06:04:17.917459 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 5 06:04:17.917626 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 5 06:04:17.917796 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 5 06:04:17.917977 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 5 06:04:17.918196 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 5 06:04:17.918380 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 5 06:04:17.918552 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 5 06:04:17.918812 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 5 06:04:17.919006 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 5 06:04:17.919244 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 5 06:04:17.919419 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 06:04:17.919610 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 5 06:04:17.919780 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 5 06:04:17.919958 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 5 06:04:17.920177 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 5 06:04:17.920346 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 5 06:04:17.920364 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 06:04:17.920382 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 06:04:17.920392 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 06:04:17.920404 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 06:04:17.920414 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 06:04:17.920425 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 06:04:17.920435 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 06:04:17.920445 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 06:04:17.920455 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 06:04:17.920470 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 06:04:17.920485 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 06:04:17.920496 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 06:04:17.920508 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 06:04:17.920519 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 06:04:17.920530 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 06:04:17.920540 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 06:04:17.920551 kernel: iommu: Default domain type: Translated
Sep 5 06:04:17.920562 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 06:04:17.920573 kernel: PCI: Using ACPI for IRQ routing
Sep 5 06:04:17.920588 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 06:04:17.920599 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 5 06:04:17.920611 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 5 06:04:17.920796 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 06:04:17.920978 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 06:04:17.921166 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 06:04:17.921184 kernel: vgaarb: loaded
Sep 5 06:04:17.921195 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 5 06:04:17.921213 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 5 06:04:17.921224 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 06:04:17.921235 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 06:04:17.921247 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 06:04:17.921257 kernel: pnp: PnP ACPI init
Sep 5 06:04:17.921478 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 5 06:04:17.921501 kernel: pnp: PnP ACPI: found 6 devices
Sep 5 06:04:17.921514 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 06:04:17.921531 kernel: NET: Registered PF_INET protocol family
Sep 5 06:04:17.921542 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 06:04:17.921554 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 06:04:17.921565 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 06:04:17.921576 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 06:04:17.921587 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 06:04:17.921599 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 06:04:17.921611 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 06:04:17.921737 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 06:04:17.921754 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 06:04:17.921765 kernel: NET: Registered PF_XDP protocol family
Sep 5 06:04:17.921933 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 06:04:17.922117 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 06:04:17.922271 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 06:04:17.922435 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 5 06:04:17.922588 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 5 06:04:17.922743 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 5 06:04:17.922766 kernel: PCI: CLS 0 bytes, default 64
Sep 5 06:04:17.922778 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 06:04:17.922789 kernel: Initialise system trusted keyrings
Sep 5 06:04:17.922801 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 06:04:17.922812 kernel: Key type asymmetric registered
Sep 5 06:04:17.922823 kernel: Asymmetric key parser 'x509' registered
Sep 5 06:04:17.922834 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 06:04:17.922846 kernel: io scheduler mq-deadline registered
Sep 5 06:04:17.922857 kernel: io scheduler kyber registered
Sep 5 06:04:17.922868 kernel: io scheduler bfq registered
Sep 5 06:04:17.922894 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 06:04:17.922906 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 5 06:04:17.922918 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 5 06:04:17.922929 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 5 06:04:17.922941 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 06:04:17.922952 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 06:04:17.922964 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 5 06:04:17.922975 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 5 06:04:17.922986 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 5 06:04:17.923196 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 5 06:04:17.923356 kernel: rtc_cmos 00:04: registered as rtc0
Sep 5 06:04:17.923515 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T06:04:17 UTC (1757052257)
Sep 5 06:04:17.923672 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 5 06:04:17.923690 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 5 06:04:17.923701 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 5 06:04:17.923713 kernel: NET: Registered PF_INET6 protocol family
Sep 5 06:04:17.923730 kernel: Segment Routing with IPv6
Sep 5 06:04:17.923741 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 06:04:17.923753 kernel: NET: Registered PF_PACKET protocol family
Sep 5 06:04:17.923764 kernel: Key type dns_resolver registered
Sep 5 06:04:17.923775 kernel: IPI shorthand broadcast: enabled
Sep 5 06:04:17.923787 kernel: sched_clock: Marking stable (3151002965, 115094885)->(3299920097, -33822247)
Sep 5 06:04:17.923798 kernel: registered taskstats version 1
Sep 5 06:04:17.923809 kernel: Loading compiled-in X.509 certificates
Sep 5 06:04:17.923820 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 0a288d3740f799f7923bd7314e999f997bd1026c'
Sep 5 06:04:17.923836 kernel: Demotion targets for Node 0: null
Sep 5 06:04:17.923847 kernel: Key type .fscrypt registered
Sep 5 06:04:17.923858 kernel: Key type fscrypt-provisioning registered
Sep 5 06:04:17.923869 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 06:04:17.923889 kernel: ima: Allocated hash algorithm: sha1
Sep 5 06:04:17.923901 kernel: ima: No architecture policies found
Sep 5 06:04:17.923912 kernel: clk: Disabling unused clocks
Sep 5 06:04:17.923924 kernel: Warning: unable to open an initial console.
Sep 5 06:04:17.923935 kernel: Freeing unused kernel image (initmem) memory: 54068K
Sep 5 06:04:17.923951 kernel: Write protecting the kernel read-only data: 24576k
Sep 5 06:04:17.923962 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 5 06:04:17.923974 kernel: Run /init as init process
Sep 5 06:04:17.923985 kernel: with arguments:
Sep 5 06:04:17.923996 kernel: /init
Sep 5 06:04:17.924007 kernel: with environment:
Sep 5 06:04:17.924043 kernel: HOME=/
Sep 5 06:04:17.924054 kernel: TERM=linux
Sep 5 06:04:17.924065 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 06:04:17.924082 systemd[1]: Successfully made /usr/ read-only.
Sep 5 06:04:17.924115 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 06:04:17.924132 systemd[1]: Detected virtualization kvm.
Sep 5 06:04:17.924144 systemd[1]: Detected architecture x86-64.
Sep 5 06:04:17.924156 systemd[1]: Running in initrd.
Sep 5 06:04:17.924172 systemd[1]: No hostname configured, using default hostname.
Sep 5 06:04:17.924185 systemd[1]: Hostname set to .
Sep 5 06:04:17.924197 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 06:04:17.924209 systemd[1]: Queued start job for default target initrd.target.
Sep 5 06:04:17.924222 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 06:04:17.924234 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 06:04:17.924248 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 06:04:17.924260 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 06:04:17.924277 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 06:04:17.924291 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 06:04:17.924306 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 06:04:17.924318 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 06:04:17.924330 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 06:04:17.924343 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 06:04:17.924356 systemd[1]: Reached target paths.target - Path Units. Sep 5 06:04:17.924372 systemd[1]: Reached target slices.target - Slice Units. Sep 5 06:04:17.924385 systemd[1]: Reached target swap.target - Swaps. Sep 5 06:04:17.924397 systemd[1]: Reached target timers.target - Timer Units. Sep 5 06:04:17.924410 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 06:04:17.924426 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 06:04:17.924438 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 06:04:17.924451 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 5 06:04:17.924463 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 06:04:17.924476 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 5 06:04:17.924493 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 06:04:17.924505 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 06:04:17.924518 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 06:04:17.924531 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 06:04:17.924547 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 06:04:17.924564 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 5 06:04:17.924576 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 06:04:17.924588 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 06:04:17.924601 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 06:04:17.924613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 06:04:17.924626 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 06:04:17.924644 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 06:04:17.924656 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 06:04:17.924670 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 06:04:17.924727 systemd-journald[220]: Collecting audit messages is disabled.
Sep 5 06:04:17.924763 systemd-journald[220]: Journal started
Sep 5 06:04:17.924790 systemd-journald[220]: Runtime Journal (/run/log/journal/9efe6684a08b462a83ebc83b636aaac9) is 6M, max 48.6M, 42.5M free.
Sep 5 06:04:17.907435 systemd-modules-load[221]: Inserted module 'overlay'
Sep 5 06:04:17.927411 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 06:04:17.928255 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 06:04:17.932344 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 06:04:17.935511 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 06:04:17.970118 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 06:04:17.970161 kernel: Bridge firewalling registered
Sep 5 06:04:17.938778 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 5 06:04:17.953189 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 5 06:04:17.974419 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 06:04:17.981251 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:04:17.981665 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 06:04:17.986483 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 06:04:17.989632 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 06:04:17.995266 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 06:04:18.006613 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 06:04:18.009542 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 06:04:18.013502 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 06:04:18.017949 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 06:04:18.047000 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4b2174e9c368fa97600991ce20efc370fbbf3ddfce3ea407f50212a1021bd496
Sep 5 06:04:18.062446 systemd-resolved[258]: Positive Trust Anchors:
Sep 5 06:04:18.062462 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 06:04:18.062491 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 06:04:18.065109 systemd-resolved[258]: Defaulting to hostname 'linux'.
Sep 5 06:04:18.066734 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 06:04:18.073270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 06:04:18.183078 kernel: SCSI subsystem initialized
Sep 5 06:04:18.193044 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 06:04:18.222057 kernel: iscsi: registered transport (tcp)
Sep 5 06:04:18.253246 kernel: iscsi: registered transport (qla4xxx)
Sep 5 06:04:18.253322 kernel: QLogic iSCSI HBA Driver
Sep 5 06:04:18.276144 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 06:04:18.313882 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 06:04:18.331262 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 06:04:18.407728 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 06:04:18.410933 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 06:04:18.483104 kernel: raid6: avx2x4 gen() 28089 MB/s
Sep 5 06:04:18.528069 kernel: raid6: avx2x2 gen() 27826 MB/s
Sep 5 06:04:18.545142 kernel: raid6: avx2x1 gen() 24361 MB/s
Sep 5 06:04:18.545244 kernel: raid6: using algorithm avx2x4 gen() 28089 MB/s
Sep 5 06:04:18.571172 kernel: raid6: .... xor() 7048 MB/s, rmw enabled
Sep 5 06:04:18.571318 kernel: raid6: using avx2x2 recovery algorithm
Sep 5 06:04:18.595085 kernel: xor: automatically using best checksumming function avx
Sep 5 06:04:18.860054 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 06:04:18.868995 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 06:04:18.871581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 06:04:18.912093 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 5 06:04:18.917819 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 06:04:18.921286 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 06:04:19.001240 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Sep 5 06:04:19.041023 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 06:04:19.073775 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 06:04:19.162779 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 06:04:19.168228 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 06:04:19.211211 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 5 06:04:19.213404 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 06:04:19.220377 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 06:04:19.220408 kernel: GPT:9289727 != 19775487
Sep 5 06:04:19.220423 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 06:04:19.220437 kernel: GPT:9289727 != 19775487
Sep 5 06:04:19.220451 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 06:04:19.220465 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:04:19.240233 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 5 06:04:19.243622 kernel: cryptd: max_cpu_qlen set to 1000
Sep 5 06:04:19.252036 kernel: AES CTR mode by8 optimization enabled
Sep 5 06:04:19.254211 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 06:04:19.254447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:04:19.258604 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 06:04:19.264681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 06:04:19.270874 kernel: libata version 3.00 loaded.
Sep 5 06:04:19.271561 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 5 06:04:19.302115 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 06:04:19.335942 kernel: ahci 0000:00:1f.2: version 3.0
Sep 5 06:04:19.336228 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 5 06:04:19.336241 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 5 06:04:19.336385 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 5 06:04:19.336524 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 5 06:04:19.329787 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 06:04:19.389570 kernel: scsi host0: ahci
Sep 5 06:04:19.389884 kernel: scsi host1: ahci
Sep 5 06:04:19.390122 kernel: scsi host2: ahci
Sep 5 06:04:19.390346 kernel: scsi host3: ahci
Sep 5 06:04:19.390500 kernel: scsi host4: ahci
Sep 5 06:04:19.390646 kernel: scsi host5: ahci
Sep 5 06:04:19.390792 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 5 06:04:19.390804 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 5 06:04:19.390815 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 5 06:04:19.390825 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 5 06:04:19.390846 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 5 06:04:19.390857 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 5 06:04:19.389897 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:04:19.403355 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 06:04:19.414568 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 06:04:19.436456 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 06:04:19.440679 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 06:04:19.652235 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 5 06:04:19.652328 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 5 06:04:19.652344 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 5 06:04:19.654058 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 5 06:04:19.654156 kernel: ata3.00: LPM support broken, forcing max_power
Sep 5 06:04:19.666200 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 5 06:04:19.666284 kernel: ata3.00: applying bridge limits
Sep 5 06:04:19.667070 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 5 06:04:19.668053 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 5 06:04:19.669069 kernel: ata3.00: LPM support broken, forcing max_power
Sep 5 06:04:19.670052 kernel: ata3.00: configured for UDMA/100
Sep 5 06:04:19.672029 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 5 06:04:19.685451 disk-uuid[631]: Primary Header is updated.
Sep 5 06:04:19.685451 disk-uuid[631]: Secondary Entries is updated.
Sep 5 06:04:19.685451 disk-uuid[631]: Secondary Header is updated.
Sep 5 06:04:19.690060 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:04:19.696040 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:04:19.725114 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 5 06:04:19.725445 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 5 06:04:19.740076 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 5 06:04:20.177436 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 06:04:20.219371 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 06:04:20.221791 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 06:04:20.224401 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 06:04:20.227466 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 06:04:20.253583 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 06:04:20.703087 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:04:20.703820 disk-uuid[632]: The operation has completed successfully.
Sep 5 06:04:20.734995 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 06:04:20.735152 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 06:04:20.769613 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 06:04:20.806592 sh[661]: Success
Sep 5 06:04:20.827633 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 06:04:20.827734 kernel: device-mapper: uevent: version 1.0.3
Sep 5 06:04:20.829045 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 5 06:04:20.841050 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 5 06:04:20.879470 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 06:04:20.883846 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 06:04:20.903992 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 06:04:20.911040 kernel: BTRFS: device fsid 98069635-e988-4e04-b156-f40a4a69cf42 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (673)
Sep 5 06:04:20.911077 kernel: BTRFS info (device dm-0): first mount of filesystem 98069635-e988-4e04-b156-f40a4a69cf42
Sep 5 06:04:20.930269 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 5 06:04:20.943163 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 06:04:20.943224 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 5 06:04:20.944852 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 06:04:20.946823 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 06:04:20.948674 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 06:04:20.949823 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 06:04:20.951818 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 06:04:20.984504 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (704)
Sep 5 06:04:20.984596 kernel: BTRFS info (device vda6): first mount of filesystem b74bbc0c-6da1-4206-9f48-c70f629ccdff
Sep 5 06:04:20.984610 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 06:04:20.989049 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 06:04:20.989120 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 06:04:20.996058 kernel: BTRFS info (device vda6): last unmount of filesystem b74bbc0c-6da1-4206-9f48-c70f629ccdff
Sep 5 06:04:20.997633 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 06:04:21.000676 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 06:04:21.132820 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 06:04:21.138323 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 06:04:21.149185 ignition[750]: Ignition 2.22.0
Sep 5 06:04:21.150038 ignition[750]: Stage: fetch-offline
Sep 5 06:04:21.150083 ignition[750]: no configs at "/usr/lib/ignition/base.d"
Sep 5 06:04:21.150093 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:04:21.150174 ignition[750]: parsed url from cmdline: ""
Sep 5 06:04:21.150178 ignition[750]: no config URL provided
Sep 5 06:04:21.150183 ignition[750]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 06:04:21.150192 ignition[750]: no config at "/usr/lib/ignition/user.ign"
Sep 5 06:04:21.150215 ignition[750]: op(1): [started] loading QEMU firmware config module
Sep 5 06:04:21.150220 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 06:04:21.168235 ignition[750]: op(1): [finished] loading QEMU firmware config module
Sep 5 06:04:21.309968 systemd-networkd[847]: lo: Link UP
Sep 5 06:04:21.309980 systemd-networkd[847]: lo: Gained carrier
Sep 5 06:04:21.311633 systemd-networkd[847]: Enumeration completed
Sep 5 06:04:21.311774 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 06:04:21.312065 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 06:04:21.312069 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 06:04:21.312491 systemd-networkd[847]: eth0: Link UP
Sep 5 06:04:21.315135 systemd-networkd[847]: eth0: Gained carrier
Sep 5 06:04:21.315147 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 06:04:21.315587 systemd[1]: Reached target network.target - Network.
Sep 5 06:04:21.329333 ignition[750]: parsing config with SHA512: eb5d6f43cbba56bf397cbf0635538a1e26d5a367267b38f74092e3e4749f7efbdee4568ff69597d3246aaea4fa7104eebb0339a68a62cd42b1a6337aec1a08c5
Sep 5 06:04:21.331127 systemd-networkd[847]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 06:04:21.335353 unknown[750]: fetched base config from "system"
Sep 5 06:04:21.335367 unknown[750]: fetched user config from "qemu"
Sep 5 06:04:21.335756 ignition[750]: fetch-offline: fetch-offline passed
Sep 5 06:04:21.335824 ignition[750]: Ignition finished successfully
Sep 5 06:04:21.339301 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 06:04:21.340678 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 06:04:21.341720 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 06:04:21.391048 ignition[857]: Ignition 2.22.0
Sep 5 06:04:21.391062 ignition[857]: Stage: kargs
Sep 5 06:04:21.391220 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Sep 5 06:04:21.391231 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:04:21.392201 ignition[857]: kargs: kargs passed
Sep 5 06:04:21.392256 ignition[857]: Ignition finished successfully
Sep 5 06:04:21.399496 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 06:04:21.402367 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 06:04:21.467238 ignition[865]: Ignition 2.22.0
Sep 5 06:04:21.467251 ignition[865]: Stage: disks
Sep 5 06:04:21.467399 ignition[865]: no configs at "/usr/lib/ignition/base.d"
Sep 5 06:04:21.467409 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:04:21.468555 ignition[865]: disks: disks passed
Sep 5 06:04:21.468607 ignition[865]: Ignition finished successfully
Sep 5 06:04:21.474931 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 06:04:21.477045 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 06:04:21.477138 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 06:04:21.477465 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 06:04:21.477778 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 06:04:21.478267 systemd[1]: Reached target basic.target - Basic System.
Sep 5 06:04:21.486264 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 06:04:21.523275 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 5 06:04:21.531221 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 06:04:21.535560 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 06:04:21.664026 kernel: EXT4-fs (vda9): mounted filesystem 5e58259f-916a-43e8-ae75-b44bea97e14e r/w with ordered data mode. Quota mode: none.
Sep 5 06:04:21.664990 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 06:04:21.667546 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 06:04:21.671345 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 06:04:21.673808 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 06:04:21.675666 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 06:04:21.675714 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 06:04:21.675736 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 06:04:21.686654 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 06:04:21.688566 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 06:04:21.692171 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883)
Sep 5 06:04:21.694202 kernel: BTRFS info (device vda6): first mount of filesystem b74bbc0c-6da1-4206-9f48-c70f629ccdff
Sep 5 06:04:21.694230 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 06:04:21.697590 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 06:04:21.697629 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 06:04:21.700043 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 06:04:21.728396 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 06:04:21.734548 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory
Sep 5 06:04:21.740502 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 06:04:21.746326 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 06:04:21.873815 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 06:04:21.877230 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 06:04:21.880049 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 06:04:21.915637 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 06:04:21.917119 kernel: BTRFS info (device vda6): last unmount of filesystem b74bbc0c-6da1-4206-9f48-c70f629ccdff
Sep 5 06:04:21.936164 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 06:04:21.977055 ignition[997]: INFO : Ignition 2.22.0
Sep 5 06:04:21.977055 ignition[997]: INFO : Stage: mount
Sep 5 06:04:21.978963 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 06:04:21.978963 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:04:21.981095 ignition[997]: INFO : mount: mount passed
Sep 5 06:04:21.981952 ignition[997]: INFO : Ignition finished successfully
Sep 5 06:04:21.985537 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 06:04:21.987785 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 06:04:22.022171 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 06:04:22.046037 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009)
Sep 5 06:04:22.046090 kernel: BTRFS info (device vda6): first mount of filesystem b74bbc0c-6da1-4206-9f48-c70f629ccdff
Sep 5 06:04:22.047851 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 06:04:22.051183 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 06:04:22.051226 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 06:04:22.053225 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 06:04:22.104392 ignition[1026]: INFO : Ignition 2.22.0
Sep 5 06:04:22.104392 ignition[1026]: INFO : Stage: files
Sep 5 06:04:22.106435 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 06:04:22.106435 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:04:22.106435 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 06:04:22.110650 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 06:04:22.110650 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 06:04:22.115792 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 06:04:22.117367 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 06:04:22.119059 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 06:04:22.118276 unknown[1026]: wrote ssh authorized keys file for user: core
Sep 5 06:04:22.121992 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 5 06:04:22.121992 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 5 06:04:22.197659 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 06:04:22.585193 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 5 06:04:22.585193 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 06:04:22.589507 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 06:04:22.589507 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 06:04:22.589507 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 06:04:22.589507 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 06:04:22.589507 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 06:04:22.589507 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 06:04:22.589507 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 06:04:22.602435 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 06:04:22.602435 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 06:04:22.602435 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 5 06:04:22.602435 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 5 06:04:22.602435 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 5 06:04:22.602435 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 5 06:04:23.160267 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 06:04:23.260246 systemd-networkd[847]: eth0: Gained IPv6LL
Sep 5 06:04:24.528257 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 5 06:04:24.528257 ignition[1026]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 06:04:24.531983 ignition[1026]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 06:04:24.536035 ignition[1026]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 06:04:24.536035 ignition[1026]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 06:04:24.536035 ignition[1026]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 5 06:04:24.536035 ignition[1026]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 06:04:24.542378 ignition[1026]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 06:04:24.542378 ignition[1026]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 5 06:04:24.542378 ignition[1026]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 06:04:24.584574 ignition[1026]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 06:04:24.589241 ignition[1026]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 06:04:24.590922 ignition[1026]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 06:04:24.590922 ignition[1026]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 06:04:24.590922 ignition[1026]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 06:04:24.590922 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 06:04:24.590922 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 06:04:24.590922 ignition[1026]: INFO : files: files passed
Sep 5 06:04:24.590922 ignition[1026]: INFO : Ignition finished successfully
Sep 5 06:04:24.600837 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 06:04:24.604628 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 06:04:24.607226 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 06:04:24.736156 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 06:04:24.736302 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 06:04:24.740230 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 06:04:24.741920 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 06:04:24.741920 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 06:04:24.747041 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 06:04:24.745293 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 06:04:24.747591 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 06:04:24.751413 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 06:04:24.816837 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 06:04:24.817980 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 06:04:24.821190 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 06:04:24.823282 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 06:04:24.823423 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 06:04:24.824454 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 06:04:24.858209 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 06:04:24.862256 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 06:04:24.895443 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 06:04:24.895658 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 06:04:24.899682 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 06:04:24.903036 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 06:04:24.903189 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 06:04:24.906994 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 06:04:24.907158 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 06:04:24.910344 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 06:04:24.911353 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 06:04:24.913700 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 06:04:24.918336 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 06:04:24.919520 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 06:04:24.920599 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 06:04:24.923562 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 06:04:24.924784 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 06:04:24.925094 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 06:04:24.925530 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 06:04:24.925646 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 06:04:24.931517 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 06:04:24.932577 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 06:04:24.932859 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 06:04:24.936486 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 06:04:24.939817 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 06:04:24.939937 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 06:04:24.942891 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 06:04:24.943040 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 06:04:24.944144 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 06:04:24.944510 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 06:04:24.951101 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 06:04:24.953752 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 06:04:24.953941 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 06:04:24.955566 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 06:04:24.955701 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 06:04:24.957248 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 06:04:24.957355 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 06:04:24.958905 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 06:04:24.959077 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 06:04:24.960617 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 06:04:24.960757 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 06:04:24.965705 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 06:04:24.966721 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 06:04:24.966857 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 06:04:24.969816 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 06:04:24.982575 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 06:04:24.982755 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 06:04:24.987457 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 06:04:24.987609 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 06:04:24.995182 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 06:04:24.996390 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 06:04:25.009526 ignition[1082]: INFO : Ignition 2.22.0
Sep 5 06:04:25.009526 ignition[1082]: INFO : Stage: umount
Sep 5 06:04:25.011241 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 06:04:25.011241 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:04:25.011241 ignition[1082]: INFO : umount: umount passed
Sep 5 06:04:25.011241 ignition[1082]: INFO : Ignition finished successfully
Sep 5 06:04:25.011679 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 06:04:25.018083 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 06:04:25.018213 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 06:04:25.020308 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 06:04:25.020419 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 06:04:25.022451 systemd[1]: Stopped target network.target - Network.
Sep 5 06:04:25.022927 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 06:04:25.022989 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 06:04:25.025702 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 06:04:25.025759 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 06:04:25.027724 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 06:04:25.027782 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 06:04:25.030835 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 06:04:25.030932 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 06:04:25.031682 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 06:04:25.031761 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 06:04:25.032161 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 06:04:25.032514 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 06:04:25.040977 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 06:04:25.041284 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 06:04:25.045750 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 5 06:04:25.046062 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 06:04:25.046120 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 06:04:25.049854 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 5 06:04:25.054157 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 06:04:25.054342 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 06:04:25.058637 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 5 06:04:25.058941 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 5 06:04:25.062209 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 06:04:25.062269 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 06:04:25.065242 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 06:04:25.065323 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 06:04:25.065397 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 06:04:25.065715 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 06:04:25.065785 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 06:04:25.071248 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 06:04:25.071320 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 06:04:25.074507 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 06:04:25.076610 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 5 06:04:25.107953 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 06:04:25.108163 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 06:04:25.109491 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 06:04:25.109541 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 06:04:25.112412 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 06:04:25.112450 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 06:04:25.113375 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 06:04:25.113424 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 06:04:25.118801 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 06:04:25.118854 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 06:04:25.122782 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 06:04:25.122836 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 06:04:25.127640 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 06:04:25.128853 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 5 06:04:25.128935 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 06:04:25.132673 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 06:04:25.132764 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 06:04:25.136383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 06:04:25.136443 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:04:25.140191 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 06:04:25.144179 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 06:04:25.155814 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 06:04:25.155949 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 06:04:25.158276 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 06:04:25.161327 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 06:04:25.183412 systemd[1]: Switching root.
Sep 5 06:04:25.225084 systemd-journald[220]: Journal stopped
Sep 5 06:04:26.435774 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 5 06:04:26.435851 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 06:04:26.435878 kernel: SELinux: policy capability open_perms=1
Sep 5 06:04:26.435895 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 06:04:26.435908 kernel: SELinux: policy capability always_check_network=0
Sep 5 06:04:26.435919 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 06:04:26.435934 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 06:04:26.435965 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 06:04:26.435977 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 06:04:26.435988 kernel: SELinux: policy capability userspace_initial_context=0
Sep 5 06:04:26.436006 kernel: audit: type=1403 audit(1757052265.614:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 06:04:26.436042 systemd[1]: Successfully loaded SELinux policy in 67.013ms.
Sep 5 06:04:26.436070 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.713ms.
Sep 5 06:04:26.436085 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 06:04:26.436098 systemd[1]: Detected virtualization kvm.
Sep 5 06:04:26.436113 systemd[1]: Detected architecture x86-64.
Sep 5 06:04:26.436126 systemd[1]: Detected first boot.
Sep 5 06:04:26.436138 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 06:04:26.436151 zram_generator::config[1127]: No configuration found.
Sep 5 06:04:26.436170 kernel: Guest personality initialized and is inactive
Sep 5 06:04:26.436187 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 5 06:04:26.436202 kernel: Initialized host personality
Sep 5 06:04:26.436217 kernel: NET: Registered PF_VSOCK protocol family
Sep 5 06:04:26.436237 systemd[1]: Populated /etc with preset unit settings.
Sep 5 06:04:26.436257 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 5 06:04:26.436273 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 06:04:26.436289 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 06:04:26.436305 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 06:04:26.436332 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 06:04:26.436349 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 06:04:26.436366 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 06:04:26.436383 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 06:04:26.436438 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 06:04:26.436459 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 06:04:26.436476 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 06:04:26.436494 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 06:04:26.436510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 06:04:26.436526 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 06:04:26.436542 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 06:04:26.436557 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 06:04:26.436574 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 06:04:26.436596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 06:04:26.436614 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 5 06:04:26.436632 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 06:04:26.436661 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 06:04:26.436679 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 06:04:26.436695 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 06:04:26.436711 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 06:04:26.436728 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 06:04:26.436765 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 06:04:26.436791 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 06:04:26.436808 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 06:04:26.436825 systemd[1]: Reached target swap.target - Swaps.
Sep 5 06:04:26.436843 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 06:04:26.436860 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 06:04:26.436876 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 5 06:04:26.436894 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 06:04:26.436911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 06:04:26.436932 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 06:04:26.436949 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 06:04:26.436965 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 06:04:26.436982 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 06:04:26.436998 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 06:04:26.437032 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 06:04:26.437050 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 06:04:26.437066 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 06:04:26.437082 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 06:04:26.437104 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 06:04:26.437120 systemd[1]: Reached target machines.target - Containers.
Sep 5 06:04:26.437137 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 06:04:26.437166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 06:04:26.437182 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 06:04:26.437199 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 06:04:26.437216 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 06:04:26.437233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 06:04:26.437253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 06:04:26.437271 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 06:04:26.437287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 06:04:26.437304 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 06:04:26.437320 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 06:04:26.437337 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 06:04:26.437353 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 06:04:26.437397 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 06:04:26.437423 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 06:04:26.437441 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 06:04:26.437458 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 06:04:26.437474 kernel: fuse: init (API version 7.41)
Sep 5 06:04:26.437491 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 06:04:26.437506 kernel: loop: module loaded
Sep 5 06:04:26.437522 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 06:04:26.437538 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 5 06:04:26.437568 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 06:04:26.437590 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 06:04:26.437606 systemd[1]: Stopped verity-setup.service.
Sep 5 06:04:26.437623 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 06:04:26.437639 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 06:04:26.437664 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 06:04:26.437686 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 06:04:26.437703 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 06:04:26.437718 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 06:04:26.437763 systemd-journald[1198]: Collecting audit messages is disabled.
Sep 5 06:04:26.437796 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 06:04:26.437808 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 06:04:26.437821 systemd-journald[1198]: Journal started
Sep 5 06:04:26.437851 systemd-journald[1198]: Runtime Journal (/run/log/journal/9efe6684a08b462a83ebc83b636aaac9) is 6M, max 48.6M, 42.5M free.
Sep 5 06:04:26.183565 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 06:04:26.206858 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 5 06:04:26.207412 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 06:04:26.440156 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 06:04:26.442208 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 06:04:26.442453 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 06:04:26.446078 kernel: ACPI: bus type drm_connector registered
Sep 5 06:04:26.444391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 06:04:26.444642 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 06:04:26.446899 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 06:04:26.448777 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 06:04:26.449097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 06:04:26.450530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 06:04:26.450827 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 06:04:26.452512 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 06:04:26.452853 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 06:04:26.454382 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 06:04:26.454641 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 06:04:26.456194 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 06:04:26.457775 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 06:04:26.459472 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 06:04:26.461040 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 5 06:04:26.475863 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 06:04:26.478513 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 06:04:26.480786 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 06:04:26.482087 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 06:04:26.482126 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 06:04:26.484133 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 5 06:04:26.496911 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 06:04:26.498561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 06:04:26.500472 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 06:04:26.503851 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 06:04:26.505557 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 06:04:26.509167 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 06:04:26.510657 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 06:04:26.512802 systemd-journald[1198]: Time spent on flushing to /var/log/journal/9efe6684a08b462a83ebc83b636aaac9 is 21.784ms for 977 entries.
Sep 5 06:04:26.512802 systemd-journald[1198]: System Journal (/var/log/journal/9efe6684a08b462a83ebc83b636aaac9) is 8M, max 195.6M, 187.6M free.
Sep 5 06:04:26.555244 systemd-journald[1198]: Received client request to flush runtime journal.
Sep 5 06:04:26.513270 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 06:04:26.519130 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 06:04:26.523925 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 06:04:26.527002 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 06:04:26.528688 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 06:04:26.529945 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 06:04:26.554950 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 06:04:26.557664 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 06:04:26.559039 kernel: loop0: detected capacity change from 0 to 111000
Sep 5 06:04:26.561876 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 06:04:26.565219 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 5 06:04:26.578889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 06:04:26.588390 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 06:04:26.590804 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 06:04:26.595253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 06:04:26.609660 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 5 06:04:26.625038 kernel: loop1: detected capacity change from 0 to 229808
Sep 5 06:04:26.631193 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Sep 5 06:04:26.631214 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Sep 5 06:04:26.644485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 06:04:26.768055 kernel: loop2: detected capacity change from 0 to 128016
Sep 5 06:04:26.805069 kernel: loop3: detected capacity change from 0 to 111000
Sep 5 06:04:26.818568 kernel: loop4: detected capacity change from 0 to 229808
Sep 5 06:04:26.830029 kernel: loop5: detected capacity change from 0 to 128016
Sep 5 06:04:26.840213 (sd-merge)[1269]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 5 06:04:26.840829 (sd-merge)[1269]: Merged extensions into '/usr'.
Sep 5 06:04:26.846615 systemd[1]: Reload requested from client PID 1246 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 06:04:26.846637 systemd[1]: Reloading...
Sep 5 06:04:26.973037 zram_generator::config[1299]: No configuration found.
Sep 5 06:04:27.056220 ldconfig[1241]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 06:04:27.186533 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 06:04:27.186651 systemd[1]: Reloading finished in 339 ms.
Sep 5 06:04:27.221829 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 06:04:27.223312 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 06:04:27.224773 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 06:04:27.245591 systemd[1]: Starting ensure-sysext.service...
Sep 5 06:04:27.247823 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 06:04:27.250313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 06:04:27.259796 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)...
Sep 5 06:04:27.259813 systemd[1]: Reloading...
Sep 5 06:04:27.343088 zram_generator::config[1371]: No configuration found.
Sep 5 06:04:27.384641 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 5 06:04:27.384697 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 5 06:04:27.385143 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 06:04:27.385502 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 06:04:27.386559 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 06:04:27.386927 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Sep 5 06:04:27.387020 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Sep 5 06:04:27.391545 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 06:04:27.391561 systemd-tmpfiles[1335]: Skipping /boot
Sep 5 06:04:27.394525 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Sep 5 06:04:27.402477 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 06:04:27.402495 systemd-tmpfiles[1335]: Skipping /boot
Sep 5 06:04:27.538320 systemd[1]: Reloading finished in 278 ms.
Sep 5 06:04:27.550186 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 06:04:27.559040 kernel: mousedev: PS/2 mouse device common for all mice
Sep 5 06:04:27.572577 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 06:04:27.590092 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 5 06:04:27.599031 kernel: ACPI: button: Power Button [PWRF] Sep 5 06:04:27.604065 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 5 06:04:27.604427 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 5 06:04:27.617150 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 5 06:04:27.626778 systemd[1]: Finished ensure-sysext.service. Sep 5 06:04:27.648997 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 06:04:27.655812 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 06:04:27.660159 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 06:04:27.667265 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 06:04:27.668979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 06:04:27.673190 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 06:04:27.678326 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 06:04:27.680702 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 06:04:27.684272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 06:04:27.685760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 06:04:27.688244 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Sep 5 06:04:27.689667 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 06:04:27.692314 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 06:04:27.703663 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 06:04:27.708881 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 06:04:27.716295 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 06:04:27.718285 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 06:04:27.718387 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 06:04:27.719775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 06:04:27.721126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 06:04:27.721613 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 06:04:27.722987 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 06:04:27.723544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 06:04:27.723831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 06:04:27.724820 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 06:04:27.725139 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 06:04:27.729666 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 5 06:04:27.745376 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 06:04:27.746104 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 06:04:27.749685 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 06:04:27.760657 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 06:04:27.773291 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 06:04:27.777694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 06:04:27.793572 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 06:04:27.797520 augenrules[1494]: No rules Sep 5 06:04:27.799798 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 06:04:27.800414 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 06:04:27.802474 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 06:04:27.833704 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 06:04:27.835382 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 06:04:27.865404 kernel: kvm_amd: TSC scaling supported Sep 5 06:04:27.865518 kernel: kvm_amd: Nested Virtualization enabled Sep 5 06:04:27.865533 kernel: kvm_amd: Nested Paging enabled Sep 5 06:04:27.865576 kernel: kvm_amd: LBR virtualization supported Sep 5 06:04:27.867294 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 5 06:04:27.867370 kernel: kvm_amd: Virtual GIF supported Sep 5 06:04:27.892387 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Sep 5 06:04:27.902911 kernel: EDAC MC: Ver: 3.0.0 Sep 5 06:04:27.946590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 06:04:27.975523 systemd-networkd[1467]: lo: Link UP Sep 5 06:04:27.975535 systemd-networkd[1467]: lo: Gained carrier Sep 5 06:04:27.977370 systemd-networkd[1467]: Enumeration completed Sep 5 06:04:27.977521 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 06:04:27.978081 systemd-networkd[1467]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 06:04:27.978093 systemd-networkd[1467]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 06:04:27.978897 systemd-networkd[1467]: eth0: Link UP Sep 5 06:04:27.979207 systemd-networkd[1467]: eth0: Gained carrier Sep 5 06:04:27.979221 systemd-networkd[1467]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 06:04:27.981740 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 5 06:04:27.984915 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 06:04:27.985607 systemd-resolved[1470]: Positive Trust Anchors:
Sep 5 06:04:27.985625 systemd-resolved[1470]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 06:04:27.985654 systemd-resolved[1470]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 06:04:27.992133 systemd-networkd[1467]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 06:04:27.992511 systemd-resolved[1470]: Defaulting to hostname 'linux'. Sep 5 06:04:27.992898 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 06:04:27.994262 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 06:04:27.995368 systemd[1]: Reached target network.target - Network. Sep 5 06:04:27.996231 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 06:04:27.997346 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 06:04:27.998433 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 06:04:27.999406 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. Sep 5 06:04:27.999641 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 06:04:28.001306 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 5 06:04:28.002634 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 5 06:04:28.003887 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 06:04:29.016644 systemd[1]: Reached target paths.target - Path Units. Sep 5 06:04:29.016666 systemd-resolved[1470]: Clock change detected. Flushing caches. Sep 5 06:04:29.017519 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 06:04:29.018467 systemd-timesyncd[1471]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 06:04:29.018526 systemd-timesyncd[1471]: Initial clock synchronization to Fri 2025-09-05 06:04:29.016612 UTC. Sep 5 06:04:29.018622 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 06:04:29.019826 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 06:04:29.021246 systemd[1]: Reached target timers.target - Timer Units. Sep 5 06:04:29.023310 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 06:04:29.026166 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 06:04:29.029404 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 5 06:04:29.030909 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 5 06:04:29.032186 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 5 06:04:29.039146 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 06:04:29.040519 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 5 06:04:29.042797 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 5 06:04:29.044227 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 06:04:29.047282 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 5 06:04:29.048224 systemd[1]: Reached target basic.target - Basic System. Sep 5 06:04:29.049205 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 06:04:29.049235 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 06:04:29.050360 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 06:04:29.052377 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 06:04:29.054275 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 06:04:29.058895 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 06:04:29.062593 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 06:04:29.063650 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 06:04:29.065073 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 5 06:04:29.068980 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 06:04:29.071117 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 06:04:29.071612 jq[1528]: false Sep 5 06:04:29.076889 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 06:04:29.081067 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 5 06:04:29.084769 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing passwd entry cache Sep 5 06:04:29.082427 oslogin_cache_refresh[1530]: Refreshing passwd entry cache Sep 5 06:04:29.085217 extend-filesystems[1529]: Found /dev/vda6 Sep 5 06:04:29.092945 extend-filesystems[1529]: Found /dev/vda9 Sep 5 06:04:29.092945 extend-filesystems[1529]: Checking size of /dev/vda9 Sep 5 06:04:29.091640 oslogin_cache_refresh[1530]: Failure getting users, quitting Sep 5 06:04:29.098622 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting users, quitting Sep 5 06:04:29.098622 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 5 06:04:29.098622 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing group entry cache Sep 5 06:04:29.086393 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 06:04:29.091663 oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 5 06:04:29.092728 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 06:04:29.091753 oslogin_cache_refresh[1530]: Refreshing group entry cache Sep 5 06:04:29.094069 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 06:04:29.095999 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 06:04:29.103114 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 5 06:04:29.103819 oslogin_cache_refresh[1530]: Failure getting groups, quitting Sep 5 06:04:29.106489 extend-filesystems[1529]: Resized partition /dev/vda9 Sep 5 06:04:29.108868 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting groups, quitting Sep 5 06:04:29.108868 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 06:04:29.103835 oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 06:04:29.107344 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 06:04:29.110031 extend-filesystems[1552]: resize2fs 1.47.2 (1-Jan-2025) Sep 5 06:04:29.111069 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 06:04:29.111633 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 06:04:29.112165 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 5 06:04:29.112671 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 5 06:04:29.115065 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 06:04:29.115845 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 06:04:29.116649 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 06:04:29.125884 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 06:04:29.126246 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 5 06:04:29.143790 jq[1551]: true Sep 5 06:04:29.146163 (ntainerd)[1558]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 06:04:29.158882 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 06:04:29.159772 update_engine[1547]: I20250905 06:04:29.159530 1547 main.cc:92] Flatcar Update Engine starting Sep 5 06:04:29.212041 jq[1566]: true Sep 5 06:04:29.234576 extend-filesystems[1552]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 06:04:29.234576 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 06:04:29.234576 extend-filesystems[1552]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 06:04:29.253992 extend-filesystems[1529]: Resized filesystem in /dev/vda9 Sep 5 06:04:29.235474 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 06:04:29.235804 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 06:04:29.239194 systemd-logind[1540]: Watching system buttons on /dev/input/event2 (Power Button) Sep 5 06:04:29.239218 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 5 06:04:29.239575 systemd-logind[1540]: New seat seat0. Sep 5 06:04:29.255213 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 06:04:29.257906 tar[1556]: linux-amd64/LICENSE Sep 5 06:04:29.257906 tar[1556]: linux-amd64/helm Sep 5 06:04:29.264492 dbus-daemon[1526]: [system] SELinux support is enabled Sep 5 06:04:29.264679 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 06:04:29.268859 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 5 06:04:29.268891 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 06:04:29.270147 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 06:04:29.270169 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 06:04:29.271479 systemd[1]: Started update-engine.service - Update Engine. Sep 5 06:04:29.272864 dbus-daemon[1526]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 06:04:29.273607 update_engine[1547]: I20250905 06:04:29.273494 1547 update_check_scheduler.cc:74] Next update check in 8m41s Sep 5 06:04:29.274664 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 06:04:29.335998 bash[1590]: Updated "/home/core/.ssh/authorized_keys" Sep 5 06:04:29.338623 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 06:04:29.341157 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 5 06:04:29.345874 locksmithd[1583]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 06:04:29.451714 containerd[1558]: time="2025-09-05T06:04:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 5 06:04:29.452395 containerd[1558]: time="2025-09-05T06:04:29.452315862Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 5 06:04:29.464611 containerd[1558]: time="2025-09-05T06:04:29.463009270Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.32µs" Sep 5 06:04:29.464611 containerd[1558]: time="2025-09-05T06:04:29.463066407Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 5 06:04:29.464611 containerd[1558]: time="2025-09-05T06:04:29.463086124Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 5 06:04:29.464611 containerd[1558]: time="2025-09-05T06:04:29.463304614Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 5 06:04:29.464611 containerd[1558]: time="2025-09-05T06:04:29.463319562Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 5 06:04:29.464611 containerd[1558]: time="2025-09-05T06:04:29.463354978Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 06:04:29.464611 containerd[1558]: time="2025-09-05T06:04:29.463437833Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 06:04:29.464611 containerd[1558]: time="2025-09-05T06:04:29.463450888Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 06:04:29.464611 containerd[1558]: time="2025-09-05T06:04:29.463815191Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 06:04:29.464611 containerd[1558]: time="2025-09-05T06:04:29.463832574Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 06:04:29.464611 containerd[1558]: time="2025-09-05T06:04:29.463846250Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 06:04:29.464611 containerd[1558]: time="2025-09-05T06:04:29.463858342Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 5 06:04:29.464978 containerd[1558]: time="2025-09-05T06:04:29.463961115Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 5 06:04:29.464978 containerd[1558]: time="2025-09-05T06:04:29.464201746Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 06:04:29.464978 containerd[1558]: time="2025-09-05T06:04:29.464231462Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 06:04:29.464978 containerd[1558]: time="2025-09-05T06:04:29.464241280Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 5 06:04:29.464978 containerd[1558]: time="2025-09-05T06:04:29.464283580Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 5 06:04:29.464978 containerd[1558]: time="2025-09-05T06:04:29.464530673Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 5 06:04:29.464978 containerd[1558]: time="2025-09-05T06:04:29.464617065Z" level=info msg="metadata content store policy set" policy=shared Sep 5 06:04:29.474205 containerd[1558]: time="2025-09-05T06:04:29.474100674Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 5 06:04:29.474205 containerd[1558]: time="2025-09-05T06:04:29.474177448Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 5 06:04:29.474205 containerd[1558]: time="2025-09-05T06:04:29.474193197Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 5 06:04:29.474205 containerd[1558]: time="2025-09-05T06:04:29.474206853Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 5 06:04:29.474205 containerd[1558]: time="2025-09-05T06:04:29.474220098Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 5 06:04:29.474205 containerd[1558]: time="2025-09-05T06:04:29.474233323Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 5 06:04:29.474480 containerd[1558]: time="2025-09-05T06:04:29.474246838Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 5 06:04:29.474480 containerd[1558]: time="2025-09-05T06:04:29.474260764Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 5 06:04:29.474480 containerd[1558]: time="2025-09-05T06:04:29.474274139Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Sep 5 06:04:29.474480 containerd[1558]: time="2025-09-05T06:04:29.474285581Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 5 06:04:29.474480 containerd[1558]: time="2025-09-05T06:04:29.474295329Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 5 06:04:29.474480 containerd[1558]: time="2025-09-05T06:04:29.474308023Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477121360Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477167927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477187354Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477200739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477212771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477225585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477238189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477250642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 5 06:04:29.477578 containerd[1558]: 
time="2025-09-05T06:04:29.477262735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477276000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477290237Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477395965Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477414720Z" level=info msg="Start snapshots syncer" Sep 5 06:04:29.477578 containerd[1558]: time="2025-09-05T06:04:29.477455827Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 5 06:04:29.478012 containerd[1558]: time="2025-09-05T06:04:29.477782099Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 5 06:04:29.478012 containerd[1558]: time="2025-09-05T06:04:29.477844336Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 5 06:04:29.478138 containerd[1558]: time="2025-09-05T06:04:29.477938753Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 5 06:04:29.478138 containerd[1558]: time="2025-09-05T06:04:29.478093183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 5 06:04:29.478138 containerd[1558]: time="2025-09-05T06:04:29.478117488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 5 06:04:29.478138 containerd[1558]: time="2025-09-05T06:04:29.478131144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 5 06:04:29.478214 containerd[1558]: time="2025-09-05T06:04:29.478142956Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 5 06:04:29.478214 containerd[1558]: time="2025-09-05T06:04:29.478156572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 5 06:04:29.478214 containerd[1558]: time="2025-09-05T06:04:29.478168664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 5 06:04:29.478214 containerd[1558]: time="2025-09-05T06:04:29.478181258Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 5 06:04:29.478294 containerd[1558]: time="2025-09-05T06:04:29.478218658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 5 06:04:29.478294 containerd[1558]: time="2025-09-05T06:04:29.478231642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 5 06:04:29.478294 containerd[1558]: time="2025-09-05T06:04:29.478242332Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 5 06:04:29.478294 containerd[1558]: time="2025-09-05T06:04:29.478277398Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 5 06:04:29.478381 containerd[1558]: time="2025-09-05T06:04:29.478296113Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 5 06:04:29.478381 containerd[1558]: time="2025-09-05T06:04:29.478308817Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 5 06:04:29.478381 containerd[1558]: time="2025-09-05T06:04:29.478320990Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 5 06:04:29.478381 containerd[1558]: time="2025-09-05T06:04:29.478340977Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 5 06:04:29.478381 containerd[1558]: time="2025-09-05T06:04:29.478352299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 5 06:04:29.478381 containerd[1558]: time="2025-09-05T06:04:29.478366345Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 5 06:04:29.478497 containerd[1558]: time="2025-09-05T06:04:29.478388897Z" level=info msg="runtime interface created" Sep 5 06:04:29.478497 containerd[1558]: time="2025-09-05T06:04:29.478396892Z" level=info msg="created NRI interface" Sep 5 06:04:29.478497 containerd[1558]: time="2025-09-05T06:04:29.478407362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 5 06:04:29.478497 containerd[1558]: time="2025-09-05T06:04:29.478420106Z" level=info msg="Connect containerd service" Sep 5 06:04:29.478497 containerd[1558]: time="2025-09-05T06:04:29.478446325Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 06:04:29.479500 containerd[1558]: 
time="2025-09-05T06:04:29.479369126Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 06:04:29.635188 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 06:04:29.823307 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 06:04:29.941245 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 06:04:30.015379 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 06:04:30.015794 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 06:04:30.019008 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 06:04:30.033661 containerd[1558]: time="2025-09-05T06:04:30.033586930Z" level=info msg="Start subscribing containerd event" Sep 5 06:04:30.033888 containerd[1558]: time="2025-09-05T06:04:30.033764884Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 06:04:30.033888 containerd[1558]: time="2025-09-05T06:04:30.033774923Z" level=info msg="Start recovering state" Sep 5 06:04:30.033888 containerd[1558]: time="2025-09-05T06:04:30.033814026Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 06:04:30.034129 containerd[1558]: time="2025-09-05T06:04:30.034097869Z" level=info msg="Start event monitor" Sep 5 06:04:30.034268 containerd[1558]: time="2025-09-05T06:04:30.034197926Z" level=info msg="Start cni network conf syncer for default" Sep 5 06:04:30.034344 containerd[1558]: time="2025-09-05T06:04:30.034329994Z" level=info msg="Start streaming server" Sep 5 06:04:30.034474 containerd[1558]: time="2025-09-05T06:04:30.034419251Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 5 06:04:30.034474 containerd[1558]: time="2025-09-05T06:04:30.034428539Z" level=info msg="runtime interface starting up..." 
Sep 5 06:04:30.034474 containerd[1558]: time="2025-09-05T06:04:30.034434670Z" level=info msg="starting plugins..." Sep 5 06:04:30.034753 containerd[1558]: time="2025-09-05T06:04:30.034452083Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 5 06:04:30.035042 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 06:04:30.036833 containerd[1558]: time="2025-09-05T06:04:30.034926182Z" level=info msg="containerd successfully booted in 0.586113s" Sep 5 06:04:30.039786 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 06:04:30.048543 tar[1556]: linux-amd64/README.md Sep 5 06:04:30.051795 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 06:04:30.053910 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 06:04:30.055161 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 06:04:30.071175 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 06:04:30.993017 systemd-networkd[1467]: eth0: Gained IPv6LL Sep 5 06:04:30.997268 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 06:04:30.999281 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 06:04:31.002202 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 06:04:31.005083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:04:31.007400 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 06:04:31.041111 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 06:04:31.065968 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 06:04:31.066412 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 06:04:31.068581 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 5 06:04:33.136154 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 06:04:33.139051 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:36630.service - OpenSSH per-connection server daemon (10.0.0.1:36630). Sep 5 06:04:33.268604 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 36630 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g Sep 5 06:04:33.271523 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:04:33.280637 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 06:04:33.283747 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 06:04:33.293440 systemd-logind[1540]: New session 1 of user core. Sep 5 06:04:33.312657 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 06:04:33.317686 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 06:04:33.344960 (systemd)[1661]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 06:04:33.348348 systemd-logind[1540]: New session c1 of user core. Sep 5 06:04:33.413216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:04:33.415371 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 06:04:33.432169 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 06:04:33.527950 systemd[1661]: Queued start job for default target default.target. Sep 5 06:04:33.547487 systemd[1661]: Created slice app.slice - User Application Slice. Sep 5 06:04:33.547520 systemd[1661]: Reached target paths.target - Paths. Sep 5 06:04:33.547574 systemd[1661]: Reached target timers.target - Timers. Sep 5 06:04:33.549792 systemd[1661]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Sep 5 06:04:33.563986 systemd[1661]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 06:04:33.564140 systemd[1661]: Reached target sockets.target - Sockets. Sep 5 06:04:33.564228 systemd[1661]: Reached target basic.target - Basic System. Sep 5 06:04:33.564311 systemd[1661]: Reached target default.target - Main User Target. Sep 5 06:04:33.564373 systemd[1661]: Startup finished in 208ms. Sep 5 06:04:33.564833 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 06:04:33.575913 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 06:04:33.577569 systemd[1]: Startup finished in 3.218s (kernel) + 7.968s (initrd) + 7.015s (userspace) = 18.202s. Sep 5 06:04:33.666909 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:36646.service - OpenSSH per-connection server daemon (10.0.0.1:36646). Sep 5 06:04:33.914349 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 36646 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g Sep 5 06:04:33.916421 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:04:33.921422 systemd-logind[1540]: New session 2 of user core. Sep 5 06:04:33.942971 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 06:04:34.005604 sshd[1690]: Connection closed by 10.0.0.1 port 36646 Sep 5 06:04:34.007121 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Sep 5 06:04:34.015446 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:36646.service: Deactivated successfully. Sep 5 06:04:34.017617 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 06:04:34.018394 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit. Sep 5 06:04:34.021337 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:36668.service - OpenSSH per-connection server daemon (10.0.0.1:36668). Sep 5 06:04:34.021883 systemd-logind[1540]: Removed session 2. 
Sep 5 06:04:34.088389 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 36668 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g Sep 5 06:04:34.090334 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:04:34.095837 systemd-logind[1540]: New session 3 of user core. Sep 5 06:04:34.156020 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 06:04:34.266898 sshd[1700]: Connection closed by 10.0.0.1 port 36668 Sep 5 06:04:34.267343 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Sep 5 06:04:34.280595 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:36668.service: Deactivated successfully. Sep 5 06:04:34.282958 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 06:04:34.283846 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit. Sep 5 06:04:34.287991 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:36682.service - OpenSSH per-connection server daemon (10.0.0.1:36682). Sep 5 06:04:34.288782 systemd-logind[1540]: Removed session 3. Sep 5 06:04:34.299509 kubelet[1672]: E0905 06:04:34.299439 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 06:04:34.303965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 06:04:34.304237 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 06:04:34.305111 systemd[1]: kubelet.service: Consumed 3.050s CPU time, 265.8M memory peak. 
Sep 5 06:04:34.341329 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 36682 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g Sep 5 06:04:34.342639 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:04:34.347533 systemd-logind[1540]: New session 4 of user core. Sep 5 06:04:34.357957 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 06:04:34.412967 sshd[1710]: Connection closed by 10.0.0.1 port 36682 Sep 5 06:04:34.413351 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Sep 5 06:04:34.422173 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:36682.service: Deactivated successfully. Sep 5 06:04:34.424037 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 06:04:34.424853 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit. Sep 5 06:04:34.427854 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:36692.service - OpenSSH per-connection server daemon (10.0.0.1:36692). Sep 5 06:04:34.428492 systemd-logind[1540]: Removed session 4. Sep 5 06:04:34.491724 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 36692 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g Sep 5 06:04:34.493400 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:04:34.498251 systemd-logind[1540]: New session 5 of user core. Sep 5 06:04:34.511917 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 5 06:04:34.650210 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 06:04:34.650558 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 06:04:34.665773 sudo[1720]: pam_unix(sudo:session): session closed for user root Sep 5 06:04:34.667654 sshd[1719]: Connection closed by 10.0.0.1 port 36692 Sep 5 06:04:34.668422 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Sep 5 06:04:34.688811 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:36692.service: Deactivated successfully. Sep 5 06:04:34.690907 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 06:04:34.691823 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit. Sep 5 06:04:34.694933 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:36704.service - OpenSSH per-connection server daemon (10.0.0.1:36704). Sep 5 06:04:34.695611 systemd-logind[1540]: Removed session 5. Sep 5 06:04:34.782880 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 36704 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g Sep 5 06:04:34.785309 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:04:34.789789 systemd-logind[1540]: New session 6 of user core. Sep 5 06:04:34.805874 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 5 06:04:34.860592 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 06:04:34.861014 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 06:04:34.868127 sudo[1731]: pam_unix(sudo:session): session closed for user root Sep 5 06:04:34.874752 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 5 06:04:34.875059 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 06:04:34.885431 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 06:04:34.939082 augenrules[1753]: No rules Sep 5 06:04:34.940929 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 06:04:34.941233 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 06:04:34.942537 sudo[1730]: pam_unix(sudo:session): session closed for user root Sep 5 06:04:34.945082 sshd[1729]: Connection closed by 10.0.0.1 port 36704 Sep 5 06:04:34.945522 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Sep 5 06:04:34.955276 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:36704.service: Deactivated successfully. Sep 5 06:04:34.957607 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 06:04:34.958406 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit. Sep 5 06:04:34.961595 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:36714.service - OpenSSH per-connection server daemon (10.0.0.1:36714). Sep 5 06:04:34.962522 systemd-logind[1540]: Removed session 6. Sep 5 06:04:35.021507 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 36714 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g Sep 5 06:04:35.023301 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:04:35.027917 systemd-logind[1540]: New session 7 of user core. 
Sep 5 06:04:35.036890 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 06:04:35.091479 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 06:04:35.091817 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 06:04:36.005941 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 06:04:36.025765 (dockerd)[1786]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 06:04:36.765942 dockerd[1786]: time="2025-09-05T06:04:36.765862435Z" level=info msg="Starting up" Sep 5 06:04:36.766881 dockerd[1786]: time="2025-09-05T06:04:36.766818939Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 5 06:04:36.812550 dockerd[1786]: time="2025-09-05T06:04:36.812484965Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 5 06:04:37.110389 dockerd[1786]: time="2025-09-05T06:04:37.110049632Z" level=info msg="Loading containers: start." Sep 5 06:04:37.123392 kernel: Initializing XFRM netlink socket Sep 5 06:04:37.456583 systemd-networkd[1467]: docker0: Link UP Sep 5 06:04:37.462458 dockerd[1786]: time="2025-09-05T06:04:37.462410129Z" level=info msg="Loading containers: done." Sep 5 06:04:37.476999 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4012833554-merged.mount: Deactivated successfully. 
Sep 5 06:04:37.478947 dockerd[1786]: time="2025-09-05T06:04:37.478904758Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 06:04:37.479012 dockerd[1786]: time="2025-09-05T06:04:37.478986522Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 5 06:04:37.479087 dockerd[1786]: time="2025-09-05T06:04:37.479067133Z" level=info msg="Initializing buildkit" Sep 5 06:04:37.510159 dockerd[1786]: time="2025-09-05T06:04:37.510088370Z" level=info msg="Completed buildkit initialization" Sep 5 06:04:37.516573 dockerd[1786]: time="2025-09-05T06:04:37.516526686Z" level=info msg="Daemon has completed initialization" Sep 5 06:04:37.516677 dockerd[1786]: time="2025-09-05T06:04:37.516607177Z" level=info msg="API listen on /run/docker.sock" Sep 5 06:04:37.516885 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 06:04:38.718539 containerd[1558]: time="2025-09-05T06:04:38.718471010Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 5 06:04:39.444766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount798844655.mount: Deactivated successfully. 
Sep 5 06:04:41.185770 containerd[1558]: time="2025-09-05T06:04:41.184285884Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 5 06:04:41.227708 containerd[1558]: time="2025-09-05T06:04:41.227538554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:41.230488 containerd[1558]: time="2025-09-05T06:04:41.230396545Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:41.231608 containerd[1558]: time="2025-09-05T06:04:41.231554627Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 2.513032641s" Sep 5 06:04:41.231677 containerd[1558]: time="2025-09-05T06:04:41.231609319Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 5 06:04:41.232458 containerd[1558]: time="2025-09-05T06:04:41.232421383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:41.232517 containerd[1558]: time="2025-09-05T06:04:41.232473871Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 5 06:04:43.325123 containerd[1558]: time="2025-09-05T06:04:43.324994321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:43.325993 containerd[1558]: time="2025-09-05T06:04:43.325912003Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 5 06:04:43.327524 containerd[1558]: time="2025-09-05T06:04:43.327443485Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:43.330794 containerd[1558]: time="2025-09-05T06:04:43.330720292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:43.332247 containerd[1558]: time="2025-09-05T06:04:43.332188365Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 2.099671874s" Sep 5 06:04:43.332247 containerd[1558]: time="2025-09-05T06:04:43.332232488Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 5 06:04:43.332925 containerd[1558]: time="2025-09-05T06:04:43.332872238Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 5 06:04:44.554855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 06:04:44.557009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:04:45.199493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 06:04:45.218361 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 06:04:45.294589 kubelet[2071]: E0905 06:04:45.294489 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 06:04:45.303372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 06:04:45.303659 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 06:04:45.304436 systemd[1]: kubelet.service: Consumed 315ms CPU time, 111M memory peak. Sep 5 06:04:45.918385 containerd[1558]: time="2025-09-05T06:04:45.918287288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:45.919491 containerd[1558]: time="2025-09-05T06:04:45.919430362Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 5 06:04:45.921018 containerd[1558]: time="2025-09-05T06:04:45.920955242Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:45.925273 containerd[1558]: time="2025-09-05T06:04:45.925158536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:45.926540 containerd[1558]: time="2025-09-05T06:04:45.926467321Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id 
\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 2.593558915s" Sep 5 06:04:45.926540 containerd[1558]: time="2025-09-05T06:04:45.926535639Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 5 06:04:45.927696 containerd[1558]: time="2025-09-05T06:04:45.927656041Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 5 06:04:47.599676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318459216.mount: Deactivated successfully. Sep 5 06:04:48.472525 containerd[1558]: time="2025-09-05T06:04:48.472418368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:48.474459 containerd[1558]: time="2025-09-05T06:04:48.474405886Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 5 06:04:48.476031 containerd[1558]: time="2025-09-05T06:04:48.475973105Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:48.478568 containerd[1558]: time="2025-09-05T06:04:48.478529610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:48.479327 containerd[1558]: time="2025-09-05T06:04:48.479253278Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.551560749s" Sep 5 06:04:48.479327 containerd[1558]: time="2025-09-05T06:04:48.479311437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 5 06:04:48.480307 containerd[1558]: time="2025-09-05T06:04:48.480019355Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 5 06:04:49.356236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765562928.mount: Deactivated successfully. Sep 5 06:04:50.676287 containerd[1558]: time="2025-09-05T06:04:50.676193029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:50.677066 containerd[1558]: time="2025-09-05T06:04:50.677009470Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 5 06:04:50.678599 containerd[1558]: time="2025-09-05T06:04:50.678542015Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:50.682234 containerd[1558]: time="2025-09-05T06:04:50.682178847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:50.683605 containerd[1558]: time="2025-09-05T06:04:50.683555479Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.203495517s" Sep 5 06:04:50.683605 containerd[1558]: time="2025-09-05T06:04:50.683600443Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 5 06:04:50.684319 containerd[1558]: time="2025-09-05T06:04:50.684259239Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 06:04:51.261556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227116933.mount: Deactivated successfully. Sep 5 06:04:51.267355 containerd[1558]: time="2025-09-05T06:04:51.267286461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 06:04:51.268047 containerd[1558]: time="2025-09-05T06:04:51.268010880Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 06:04:51.269284 containerd[1558]: time="2025-09-05T06:04:51.269218555Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 06:04:51.271426 containerd[1558]: time="2025-09-05T06:04:51.271394988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 06:04:51.272229 containerd[1558]: time="2025-09-05T06:04:51.272180972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 587.868033ms" Sep 5 06:04:51.272229 containerd[1558]: time="2025-09-05T06:04:51.272216909Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 06:04:51.272823 containerd[1558]: time="2025-09-05T06:04:51.272730272Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 5 06:04:51.876472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187121433.mount: Deactivated successfully. Sep 5 06:04:54.242105 containerd[1558]: time="2025-09-05T06:04:54.242015202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:54.242928 containerd[1558]: time="2025-09-05T06:04:54.242888941Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 5 06:04:54.245756 containerd[1558]: time="2025-09-05T06:04:54.245658666Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:54.249614 containerd[1558]: time="2025-09-05T06:04:54.249540778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:54.250579 containerd[1558]: time="2025-09-05T06:04:54.250542968Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"58938593\" in 2.977713329s" Sep 5 06:04:54.250628 containerd[1558]: time="2025-09-05T06:04:54.250583283Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 5 06:04:55.493759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 06:04:55.495670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:04:55.706497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:04:55.727075 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 06:04:55.812113 kubelet[2235]: E0905 06:04:55.811946 2235 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 06:04:55.816368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 06:04:55.816587 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 06:04:55.817020 systemd[1]: kubelet.service: Consumed 264ms CPU time, 110.9M memory peak. Sep 5 06:04:57.268120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:04:57.268278 systemd[1]: kubelet.service: Consumed 264ms CPU time, 110.9M memory peak. Sep 5 06:04:57.270442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:04:57.302175 systemd[1]: Reload requested from client PID 2250 ('systemctl') (unit session-7.scope)... Sep 5 06:04:57.302191 systemd[1]: Reloading... Sep 5 06:04:57.384765 zram_generator::config[2293]: No configuration found. Sep 5 06:04:57.882279 systemd[1]: Reloading finished in 579 ms. 
Sep 5 06:04:57.945476 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 5 06:04:57.945601 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 5 06:04:57.945945 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:04:57.945993 systemd[1]: kubelet.service: Consumed 171ms CPU time, 98.2M memory peak.
Sep 5 06:04:57.947798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:04:58.142895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:04:58.156292 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 06:04:58.219812 kubelet[2340]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 06:04:58.219812 kubelet[2340]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 5 06:04:58.219812 kubelet[2340]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 06:04:58.220231 kubelet[2340]: I0905 06:04:58.219861 2340 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 06:04:58.767254 kubelet[2340]: I0905 06:04:58.767205 2340 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 5 06:04:58.767254 kubelet[2340]: I0905 06:04:58.767236 2340 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 06:04:58.767483 kubelet[2340]: I0905 06:04:58.767471 2340 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 5 06:04:58.798766 kubelet[2340]: E0905 06:04:58.798655 2340 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 5 06:04:58.800135 kubelet[2340]: I0905 06:04:58.800105 2340 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 06:04:58.811114 kubelet[2340]: I0905 06:04:58.811049 2340 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 5 06:04:58.819258 kubelet[2340]: I0905 06:04:58.819225 2340 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 06:04:58.819622 kubelet[2340]: I0905 06:04:58.819546 2340 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 06:04:58.819859 kubelet[2340]: I0905 06:04:58.819610 2340 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 06:04:58.819982 kubelet[2340]: I0905 06:04:58.819875 2340 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 06:04:58.819982 kubelet[2340]: I0905 06:04:58.819889 2340 container_manager_linux.go:303] "Creating device plugin manager"
Sep 5 06:04:58.820469 kubelet[2340]: I0905 06:04:58.820433 2340 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 06:04:58.827669 kubelet[2340]: I0905 06:04:58.827622 2340 kubelet.go:480] "Attempting to sync node with API server"
Sep 5 06:04:58.827726 kubelet[2340]: I0905 06:04:58.827681 2340 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 06:04:58.827786 kubelet[2340]: I0905 06:04:58.827754 2340 kubelet.go:386] "Adding apiserver pod source"
Sep 5 06:04:58.830206 kubelet[2340]: E0905 06:04:58.830150 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 5 06:04:58.831451 kubelet[2340]: I0905 06:04:58.831422 2340 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 06:04:58.833377 kubelet[2340]: E0905 06:04:58.833298 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 5 06:04:58.843192 kubelet[2340]: I0905 06:04:58.843154 2340 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 5 06:04:58.843818 kubelet[2340]: I0905 06:04:58.843790 2340 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 5 06:04:58.844480 kubelet[2340]: W0905 06:04:58.844451 2340 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 5 06:04:58.850323 kubelet[2340]: I0905 06:04:58.850292 2340 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 5 06:04:58.850382 kubelet[2340]: I0905 06:04:58.850361 2340 server.go:1289] "Started kubelet"
Sep 5 06:04:58.852986 kubelet[2340]: I0905 06:04:58.851400 2340 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 06:04:58.852986 kubelet[2340]: I0905 06:04:58.851657 2340 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 06:04:58.852986 kubelet[2340]: I0905 06:04:58.852334 2340 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 06:04:58.852986 kubelet[2340]: I0905 06:04:58.852369 2340 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 06:04:58.854530 kubelet[2340]: I0905 06:04:58.854498 2340 server.go:317] "Adding debug handlers to kubelet server"
Sep 5 06:04:58.855836 kubelet[2340]: E0905 06:04:58.855776 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:04:58.857035 kubelet[2340]: I0905 06:04:58.855856 2340 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 5 06:04:58.857035 kubelet[2340]: I0905 06:04:58.856060 2340 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 5 06:04:58.857035 kubelet[2340]: I0905 06:04:58.856117 2340 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 06:04:58.857035 kubelet[2340]: E0905 06:04:58.856831 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 5 06:04:58.857035 kubelet[2340]: E0905 06:04:58.855663 2340 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18624dc1acc1c445 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 06:04:58.850321477 +0000 UTC m=+0.688173287,LastTimestamp:2025-09-05 06:04:58.850321477 +0000 UTC m=+0.688173287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 5 06:04:58.857035 kubelet[2340]: I0905 06:04:58.857006 2340 factory.go:223] Registration of the systemd container factory successfully
Sep 5 06:04:58.857035 kubelet[2340]: E0905 06:04:58.857021 2340 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 06:04:58.857274 kubelet[2340]: I0905 06:04:58.857070 2340 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 06:04:58.857836 kubelet[2340]: I0905 06:04:58.857801 2340 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 06:04:58.858309 kubelet[2340]: E0905 06:04:58.858266 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms"
Sep 5 06:04:58.858884 kubelet[2340]: I0905 06:04:58.858831 2340 factory.go:223] Registration of the containerd container factory successfully
Sep 5 06:04:58.878043 kubelet[2340]: I0905 06:04:58.877999 2340 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 5 06:04:58.878043 kubelet[2340]: I0905 06:04:58.878018 2340 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 5 06:04:58.878043 kubelet[2340]: I0905 06:04:58.878034 2340 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 06:04:58.880349 kubelet[2340]: I0905 06:04:58.880309 2340 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 5 06:04:58.882186 kubelet[2340]: I0905 06:04:58.882160 2340 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 5 06:04:58.882186 kubelet[2340]: I0905 06:04:58.882187 2340 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 5 06:04:58.882259 kubelet[2340]: I0905 06:04:58.882210 2340 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 5 06:04:58.882259 kubelet[2340]: I0905 06:04:58.882219 2340 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 5 06:04:58.882302 kubelet[2340]: E0905 06:04:58.882293 2340 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 06:04:58.956793 kubelet[2340]: E0905 06:04:58.956725 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:04:58.983130 kubelet[2340]: E0905 06:04:58.983086 2340 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 5 06:04:59.057643 kubelet[2340]: E0905 06:04:59.057476 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:04:59.059212 kubelet[2340]: E0905 06:04:59.059175 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms"
Sep 5 06:04:59.158700 kubelet[2340]: E0905 06:04:59.158620 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:04:59.183897 kubelet[2340]: E0905 06:04:59.183821 2340 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 5 06:04:59.259155 kubelet[2340]: E0905 06:04:59.259062 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:04:59.303256 kubelet[2340]: E0905 06:04:59.303197 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 5 06:04:59.359418 kubelet[2340]: E0905 06:04:59.359276 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:04:59.460048 kubelet[2340]: E0905 06:04:59.459976 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:04:59.460521 kubelet[2340]: E0905 06:04:59.460474 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms"
Sep 5 06:04:59.501397 kubelet[2340]: I0905 06:04:59.501285 2340 policy_none.go:49] "None policy: Start"
Sep 5 06:04:59.501397 kubelet[2340]: I0905 06:04:59.501350 2340 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 5 06:04:59.501397 kubelet[2340]: I0905 06:04:59.501373 2340 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 06:04:59.561057 kubelet[2340]: E0905 06:04:59.560973 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:04:59.584318 kubelet[2340]: E0905 06:04:59.584231 2340 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 5 06:04:59.662174 kubelet[2340]: E0905 06:04:59.661974 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:04:59.762460 kubelet[2340]: E0905 06:04:59.762395 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:04:59.863295 kubelet[2340]: E0905 06:04:59.863220 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:04:59.954600 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 5 06:04:59.963679 kubelet[2340]: E0905 06:04:59.963627 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 06:04:59.969861 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 5 06:04:59.992002 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 5 06:04:59.993454 kubelet[2340]: E0905 06:04:59.993427 2340 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 5 06:04:59.993666 kubelet[2340]: I0905 06:04:59.993651 2340 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 06:04:59.993763 kubelet[2340]: I0905 06:04:59.993669 2340 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 06:04:59.994003 kubelet[2340]: I0905 06:04:59.993982 2340 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 06:04:59.994866 kubelet[2340]: E0905 06:04:59.994817 2340 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 5 06:04:59.994944 kubelet[2340]: E0905 06:04:59.994870 2340 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 5 06:05:00.011248 kubelet[2340]: E0905 06:05:00.011216 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 5 06:05:00.096092 kubelet[2340]: I0905 06:05:00.095978 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 06:05:00.096629 kubelet[2340]: E0905 06:05:00.096574 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
Sep 5 06:05:00.228330 kubelet[2340]: E0905 06:05:00.228163 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 5 06:05:00.261646 kubelet[2340]: E0905 06:05:00.261559 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="1.6s"
Sep 5 06:05:00.298697 kubelet[2340]: I0905 06:05:00.298629 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 06:05:00.299152 kubelet[2340]: E0905 06:05:00.299093 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
Sep 5 06:05:00.393149 kubelet[2340]: E0905 06:05:00.393060 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 5 06:05:00.409985 kubelet[2340]: E0905 06:05:00.409917 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 5 06:05:00.410414 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice.
Sep 5 06:05:00.422763 kubelet[2340]: E0905 06:05:00.422713 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 06:05:00.427582 systemd[1]: Created slice kubepods-burstable-podf846e340f21ac6f695fdeec9de33bee7.slice - libcontainer container kubepods-burstable-podf846e340f21ac6f695fdeec9de33bee7.slice.
Sep 5 06:05:00.445537 kubelet[2340]: E0905 06:05:00.445432 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 06:05:00.449218 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice.
Sep 5 06:05:00.451557 kubelet[2340]: E0905 06:05:00.451527 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 06:05:00.467166 kubelet[2340]: I0905 06:05:00.467087 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 06:05:00.467166 kubelet[2340]: I0905 06:05:00.467142 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 06:05:00.467166 kubelet[2340]: I0905 06:05:00.467173 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 06:05:00.467448 kubelet[2340]: I0905 06:05:00.467197 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f846e340f21ac6f695fdeec9de33bee7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f846e340f21ac6f695fdeec9de33bee7\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 06:05:00.467448 kubelet[2340]: I0905 06:05:00.467311 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 06:05:00.467448 kubelet[2340]: I0905 06:05:00.467355 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 06:05:00.467448 kubelet[2340]: I0905 06:05:00.467384 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 06:05:00.467448 kubelet[2340]: I0905 06:05:00.467408 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f846e340f21ac6f695fdeec9de33bee7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f846e340f21ac6f695fdeec9de33bee7\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 06:05:00.467585 kubelet[2340]: I0905 06:05:00.467430 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f846e340f21ac6f695fdeec9de33bee7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f846e340f21ac6f695fdeec9de33bee7\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 06:05:00.700646 kubelet[2340]: I0905 06:05:00.700599 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 06:05:00.701122 kubelet[2340]: E0905 06:05:00.701081 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
Sep 5 06:05:00.723267 kubelet[2340]: E0905 06:05:00.723237 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:00.723861 containerd[1558]: time="2025-09-05T06:05:00.723819134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}"
Sep 5 06:05:00.746234 kubelet[2340]: E0905 06:05:00.746192 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:00.746795 containerd[1558]: time="2025-09-05T06:05:00.746755636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f846e340f21ac6f695fdeec9de33bee7,Namespace:kube-system,Attempt:0,}"
Sep 5 06:05:00.752776 containerd[1558]: time="2025-09-05T06:05:00.752690909Z" level=info msg="connecting to shim 77125047ecb1cc88b413a031f6ef1e5c48a82d148fda8a6037e1df6e0d144508" address="unix:///run/containerd/s/682bf892806e2f11b17912a4acf5f12fa68b1d1f4d8c8685899c283891a5760d" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:05:00.754572 kubelet[2340]: E0905 06:05:00.754542 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:00.755170 containerd[1558]: time="2025-09-05T06:05:00.755126597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}"
Sep 5 06:05:00.828173 containerd[1558]: time="2025-09-05T06:05:00.827964771Z" level=info msg="connecting to shim 7a17c1d556addb85845fba43a60dd1824aab1d3eda124409cfa9be22d4f83542" address="unix:///run/containerd/s/7ef9ba381ed4b0abd53ef203ff85486c26a70c54c1d8f7dc944144230774d5ee" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:05:00.831856 containerd[1558]: time="2025-09-05T06:05:00.831786379Z" level=info msg="connecting to shim 5ed626d8047e04dd226a2618532655e35a7d228d25d31c0bab3168f691350bf3" address="unix:///run/containerd/s/764c130476001a0f1f215844efd8d055312c0f70694dc24dab4ae25e161cca7a" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:05:00.834913 systemd[1]: Started cri-containerd-77125047ecb1cc88b413a031f6ef1e5c48a82d148fda8a6037e1df6e0d144508.scope - libcontainer container 77125047ecb1cc88b413a031f6ef1e5c48a82d148fda8a6037e1df6e0d144508.
Sep 5 06:05:00.905085 kubelet[2340]: E0905 06:05:00.904997 2340 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 5 06:05:00.943865 systemd[1]: Started cri-containerd-5ed626d8047e04dd226a2618532655e35a7d228d25d31c0bab3168f691350bf3.scope - libcontainer container 5ed626d8047e04dd226a2618532655e35a7d228d25d31c0bab3168f691350bf3.
Sep 5 06:05:00.945863 systemd[1]: Started cri-containerd-7a17c1d556addb85845fba43a60dd1824aab1d3eda124409cfa9be22d4f83542.scope - libcontainer container 7a17c1d556addb85845fba43a60dd1824aab1d3eda124409cfa9be22d4f83542.
Sep 5 06:05:01.066213 containerd[1558]: time="2025-09-05T06:05:01.066146762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"77125047ecb1cc88b413a031f6ef1e5c48a82d148fda8a6037e1df6e0d144508\""
Sep 5 06:05:01.097546 kubelet[2340]: E0905 06:05:01.067398 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:01.107332 containerd[1558]: time="2025-09-05T06:05:01.107295496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ed626d8047e04dd226a2618532655e35a7d228d25d31c0bab3168f691350bf3\""
Sep 5 06:05:01.107937 kubelet[2340]: E0905 06:05:01.107887 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:01.112418 containerd[1558]: time="2025-09-05T06:05:01.112369683Z" level=info msg="CreateContainer within sandbox \"77125047ecb1cc88b413a031f6ef1e5c48a82d148fda8a6037e1df6e0d144508\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 5 06:05:01.112912 containerd[1558]: time="2025-09-05T06:05:01.112875672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f846e340f21ac6f695fdeec9de33bee7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a17c1d556addb85845fba43a60dd1824aab1d3eda124409cfa9be22d4f83542\""
Sep 5 06:05:01.114681 kubelet[2340]: E0905 06:05:01.113314 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:01.116876 containerd[1558]: time="2025-09-05T06:05:01.116834268Z" level=info msg="CreateContainer within sandbox \"5ed626d8047e04dd226a2618532655e35a7d228d25d31c0bab3168f691350bf3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 5 06:05:01.121111 containerd[1558]: time="2025-09-05T06:05:01.121077036Z" level=info msg="CreateContainer within sandbox \"7a17c1d556addb85845fba43a60dd1824aab1d3eda124409cfa9be22d4f83542\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 5 06:05:01.126963 containerd[1558]: time="2025-09-05T06:05:01.126928702Z" level=info msg="Container 3ee35d0322059d6d1bd4658625a2768c673616e396869aad1f51f07f6bf2e840: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:05:01.136718 containerd[1558]: time="2025-09-05T06:05:01.136677137Z" level=info msg="Container 7f5e94400edeff9da047b3ed4345220c470bf4e2ab0bc24416efe33ca7856501: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:05:01.142980 containerd[1558]: time="2025-09-05T06:05:01.142942159Z" level=info msg="CreateContainer within sandbox \"77125047ecb1cc88b413a031f6ef1e5c48a82d148fda8a6037e1df6e0d144508\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3ee35d0322059d6d1bd4658625a2768c673616e396869aad1f51f07f6bf2e840\""
Sep 5 06:05:01.143618 containerd[1558]: time="2025-09-05T06:05:01.143586918Z" level=info msg="StartContainer for \"3ee35d0322059d6d1bd4658625a2768c673616e396869aad1f51f07f6bf2e840\""
Sep 5 06:05:01.145448 containerd[1558]: time="2025-09-05T06:05:01.144951347Z" level=info msg="connecting to shim 3ee35d0322059d6d1bd4658625a2768c673616e396869aad1f51f07f6bf2e840" address="unix:///run/containerd/s/682bf892806e2f11b17912a4acf5f12fa68b1d1f4d8c8685899c283891a5760d" protocol=ttrpc version=3
Sep 5 06:05:01.149609 containerd[1558]: time="2025-09-05T06:05:01.149537159Z" level=info msg="CreateContainer within sandbox \"5ed626d8047e04dd226a2618532655e35a7d228d25d31c0bab3168f691350bf3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7f5e94400edeff9da047b3ed4345220c470bf4e2ab0bc24416efe33ca7856501\""
Sep 5 06:05:01.150250 containerd[1558]: time="2025-09-05T06:05:01.150197197Z" level=info msg="StartContainer for \"7f5e94400edeff9da047b3ed4345220c470bf4e2ab0bc24416efe33ca7856501\""
Sep 5 06:05:01.150897 containerd[1558]: time="2025-09-05T06:05:01.150866703Z" level=info msg="Container e9cc74bb3b4774546fcacd917926f66302f6b05268bead8f5444598723cfaaa8: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:05:01.151263 containerd[1558]: time="2025-09-05T06:05:01.151229373Z" level=info msg="connecting to shim 7f5e94400edeff9da047b3ed4345220c470bf4e2ab0bc24416efe33ca7856501" address="unix:///run/containerd/s/764c130476001a0f1f215844efd8d055312c0f70694dc24dab4ae25e161cca7a" protocol=ttrpc version=3
Sep 5 06:05:01.160664 containerd[1558]: time="2025-09-05T06:05:01.160551138Z" level=info msg="CreateContainer within sandbox \"7a17c1d556addb85845fba43a60dd1824aab1d3eda124409cfa9be22d4f83542\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e9cc74bb3b4774546fcacd917926f66302f6b05268bead8f5444598723cfaaa8\""
Sep 5 06:05:01.161156 containerd[1558]: time="2025-09-05T06:05:01.161137047Z" level=info msg="StartContainer for \"e9cc74bb3b4774546fcacd917926f66302f6b05268bead8f5444598723cfaaa8\""
Sep 5 06:05:01.162383 containerd[1558]: time="2025-09-05T06:05:01.162362546Z" level=info msg="connecting to shim e9cc74bb3b4774546fcacd917926f66302f6b05268bead8f5444598723cfaaa8" address="unix:///run/containerd/s/7ef9ba381ed4b0abd53ef203ff85486c26a70c54c1d8f7dc944144230774d5ee" protocol=ttrpc version=3
Sep 5 06:05:01.172933 systemd[1]: Started cri-containerd-3ee35d0322059d6d1bd4658625a2768c673616e396869aad1f51f07f6bf2e840.scope - libcontainer container 3ee35d0322059d6d1bd4658625a2768c673616e396869aad1f51f07f6bf2e840.
Sep 5 06:05:01.176684 systemd[1]: Started cri-containerd-7f5e94400edeff9da047b3ed4345220c470bf4e2ab0bc24416efe33ca7856501.scope - libcontainer container 7f5e94400edeff9da047b3ed4345220c470bf4e2ab0bc24416efe33ca7856501. Sep 5 06:05:01.193938 systemd[1]: Started cri-containerd-e9cc74bb3b4774546fcacd917926f66302f6b05268bead8f5444598723cfaaa8.scope - libcontainer container e9cc74bb3b4774546fcacd917926f66302f6b05268bead8f5444598723cfaaa8. Sep 5 06:05:01.250429 containerd[1558]: time="2025-09-05T06:05:01.249911362Z" level=info msg="StartContainer for \"7f5e94400edeff9da047b3ed4345220c470bf4e2ab0bc24416efe33ca7856501\" returns successfully" Sep 5 06:05:01.270245 containerd[1558]: time="2025-09-05T06:05:01.270209866Z" level=info msg="StartContainer for \"e9cc74bb3b4774546fcacd917926f66302f6b05268bead8f5444598723cfaaa8\" returns successfully" Sep 5 06:05:01.283132 containerd[1558]: time="2025-09-05T06:05:01.283095777Z" level=info msg="StartContainer for \"3ee35d0322059d6d1bd4658625a2768c673616e396869aad1f51f07f6bf2e840\" returns successfully" Sep 5 06:05:01.504183 kubelet[2340]: I0905 06:05:01.504144 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 06:05:01.895041 kubelet[2340]: E0905 06:05:01.894874 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:05:01.895196 kubelet[2340]: E0905 06:05:01.895046 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:01.900434 kubelet[2340]: E0905 06:05:01.900387 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:05:01.900583 kubelet[2340]: E0905 06:05:01.900555 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:01.901208 kubelet[2340]: E0905 06:05:01.901181 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:05:01.901327 kubelet[2340]: E0905 06:05:01.901305 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:02.903679 kubelet[2340]: E0905 06:05:02.903623 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:05:02.904201 kubelet[2340]: E0905 06:05:02.904039 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:02.904583 kubelet[2340]: E0905 06:05:02.904547 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:05:02.904772 kubelet[2340]: E0905 06:05:02.904688 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:03.601135 kubelet[2340]: E0905 06:05:03.601071 2340 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 06:05:03.736356 kubelet[2340]: I0905 06:05:03.736288 2340 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 06:05:03.757999 kubelet[2340]: I0905 06:05:03.757210 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 06:05:03.763900 kubelet[2340]: E0905 
06:05:03.763828 2340 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 5 06:05:03.763900 kubelet[2340]: I0905 06:05:03.763882 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 06:05:03.765482 kubelet[2340]: E0905 06:05:03.765419 2340 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 06:05:03.765482 kubelet[2340]: I0905 06:05:03.765458 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:05:03.767418 kubelet[2340]: E0905 06:05:03.767360 2340 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:05:03.833918 kubelet[2340]: I0905 06:05:03.833871 2340 apiserver.go:52] "Watching apiserver" Sep 5 06:05:03.856551 kubelet[2340]: I0905 06:05:03.856343 2340 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 06:05:03.903932 kubelet[2340]: I0905 06:05:03.903881 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 06:05:03.906195 kubelet[2340]: E0905 06:05:03.906130 2340 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 5 06:05:03.906401 kubelet[2340]: E0905 06:05:03.906315 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:04.262000 kubelet[2340]: I0905 06:05:04.261961 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 06:05:04.264854 kubelet[2340]: E0905 06:05:04.264804 2340 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 06:05:04.265034 kubelet[2340]: E0905 06:05:04.264997 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:06.160132 systemd[1]: Reload requested from client PID 2626 ('systemctl') (unit session-7.scope)... Sep 5 06:05:06.160150 systemd[1]: Reloading... Sep 5 06:05:06.227358 kubelet[2340]: I0905 06:05:06.227296 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:05:06.250781 zram_generator::config[2669]: No configuration found. Sep 5 06:05:06.531788 kubelet[2340]: E0905 06:05:06.531701 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:06.985032 kubelet[2340]: E0905 06:05:06.909272 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:07.183903 systemd[1]: Reloading finished in 1023 ms. Sep 5 06:05:07.220732 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:05:07.240535 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 06:05:07.241198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 06:05:07.241281 systemd[1]: kubelet.service: Consumed 1.337s CPU time, 132.3M memory peak. Sep 5 06:05:07.244288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:05:07.506699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:05:07.523419 (kubelet)[2714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 06:05:07.590769 kubelet[2714]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 06:05:07.590769 kubelet[2714]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 06:05:07.590769 kubelet[2714]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 06:05:07.590769 kubelet[2714]: I0905 06:05:07.590080 2714 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 06:05:07.602929 kubelet[2714]: I0905 06:05:07.602880 2714 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 06:05:07.602929 kubelet[2714]: I0905 06:05:07.602916 2714 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 06:05:07.603169 kubelet[2714]: I0905 06:05:07.603157 2714 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 06:05:07.604794 kubelet[2714]: I0905 06:05:07.604523 2714 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 5 06:05:07.607058 kubelet[2714]: I0905 06:05:07.606983 2714 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 06:05:07.611254 kubelet[2714]: I0905 06:05:07.611012 2714 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 06:05:07.618008 kubelet[2714]: I0905 06:05:07.617969 2714 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 06:05:07.618231 kubelet[2714]: I0905 06:05:07.618180 2714 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 06:05:07.618412 kubelet[2714]: I0905 06:05:07.618206 2714 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 06:05:07.618412 kubelet[2714]: I0905 06:05:07.618386 2714 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 06:05:07.618412 
kubelet[2714]: I0905 06:05:07.618396 2714 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 06:05:07.618582 kubelet[2714]: I0905 06:05:07.618445 2714 state_mem.go:36] "Initialized new in-memory state store" Sep 5 06:05:07.618623 kubelet[2714]: I0905 06:05:07.618607 2714 kubelet.go:480] "Attempting to sync node with API server" Sep 5 06:05:07.618623 kubelet[2714]: I0905 06:05:07.618618 2714 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 06:05:07.618675 kubelet[2714]: I0905 06:05:07.618644 2714 kubelet.go:386] "Adding apiserver pod source" Sep 5 06:05:07.618675 kubelet[2714]: I0905 06:05:07.618662 2714 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 06:05:07.981011 kubelet[2714]: I0905 06:05:07.980845 2714 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 5 06:05:07.981350 kubelet[2714]: I0905 06:05:07.981324 2714 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 06:05:07.987811 kubelet[2714]: I0905 06:05:07.986404 2714 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 06:05:07.987811 kubelet[2714]: I0905 06:05:07.986452 2714 server.go:1289] "Started kubelet" Sep 5 06:05:07.987811 kubelet[2714]: I0905 06:05:07.986623 2714 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 06:05:07.989763 kubelet[2714]: I0905 06:05:07.988775 2714 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 06:05:07.989763 kubelet[2714]: I0905 06:05:07.989101 2714 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 06:05:07.997052 kubelet[2714]: I0905 06:05:07.997027 2714 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 06:05:07.997917 kubelet[2714]: I0905 
06:05:07.997898 2714 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 06:05:07.998074 kubelet[2714]: I0905 06:05:07.998062 2714 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 06:05:07.998231 kubelet[2714]: I0905 06:05:07.998219 2714 reconciler.go:26] "Reconciler: start to sync state" Sep 5 06:05:07.998930 kubelet[2714]: I0905 06:05:07.997155 2714 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 06:05:07.999489 kubelet[2714]: I0905 06:05:07.999462 2714 server.go:317] "Adding debug handlers to kubelet server" Sep 5 06:05:08.006311 kubelet[2714]: E0905 06:05:08.006270 2714 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 06:05:08.007536 kubelet[2714]: I0905 06:05:08.007504 2714 factory.go:223] Registration of the containerd container factory successfully Sep 5 06:05:08.007536 kubelet[2714]: I0905 06:05:08.007533 2714 factory.go:223] Registration of the systemd container factory successfully Sep 5 06:05:08.007660 kubelet[2714]: I0905 06:05:08.007632 2714 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 06:05:08.035135 kubelet[2714]: I0905 06:05:08.035063 2714 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 06:05:08.037583 kubelet[2714]: I0905 06:05:08.037560 2714 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 5 06:05:08.037583 kubelet[2714]: I0905 06:05:08.037586 2714 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 06:05:08.037716 kubelet[2714]: I0905 06:05:08.037617 2714 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 06:05:08.037716 kubelet[2714]: I0905 06:05:08.037625 2714 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 06:05:08.037716 kubelet[2714]: E0905 06:05:08.037673 2714 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 06:05:08.097002 kubelet[2714]: I0905 06:05:08.096860 2714 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 06:05:08.097002 kubelet[2714]: I0905 06:05:08.096885 2714 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 06:05:08.097002 kubelet[2714]: I0905 06:05:08.096940 2714 state_mem.go:36] "Initialized new in-memory state store" Sep 5 06:05:08.097214 kubelet[2714]: I0905 06:05:08.097171 2714 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 06:05:08.097214 kubelet[2714]: I0905 06:05:08.097186 2714 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 06:05:08.097259 kubelet[2714]: I0905 06:05:08.097209 2714 policy_none.go:49] "None policy: Start" Sep 5 06:05:08.097259 kubelet[2714]: I0905 06:05:08.097244 2714 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 06:05:08.097259 kubelet[2714]: I0905 06:05:08.097259 2714 state_mem.go:35] "Initializing new in-memory state store" Sep 5 06:05:08.097797 kubelet[2714]: I0905 06:05:08.097425 2714 state_mem.go:75] "Updated machine memory state" Sep 5 06:05:08.104432 kubelet[2714]: E0905 06:05:08.104392 2714 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 06:05:08.104654 kubelet[2714]: I0905 06:05:08.104621 
2714 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 06:05:08.104654 kubelet[2714]: I0905 06:05:08.104636 2714 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 06:05:08.110895 kubelet[2714]: I0905 06:05:08.110852 2714 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 06:05:08.144881 kubelet[2714]: E0905 06:05:08.114792 2714 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 06:05:08.146612 kubelet[2714]: I0905 06:05:08.146570 2714 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 06:05:08.147209 kubelet[2714]: I0905 06:05:08.147189 2714 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 06:05:08.148089 kubelet[2714]: I0905 06:05:08.148074 2714 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:05:08.185832 kubelet[2714]: E0905 06:05:08.185786 2714 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:05:08.223671 kubelet[2714]: I0905 06:05:08.223586 2714 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 06:05:08.233557 kubelet[2714]: I0905 06:05:08.233225 2714 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 5 06:05:08.233557 kubelet[2714]: I0905 06:05:08.233478 2714 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 06:05:08.299859 kubelet[2714]: I0905 06:05:08.299774 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/f846e340f21ac6f695fdeec9de33bee7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f846e340f21ac6f695fdeec9de33bee7\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:05:08.299859 kubelet[2714]: I0905 06:05:08.299828 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:05:08.299859 kubelet[2714]: I0905 06:05:08.299849 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:05:08.299859 kubelet[2714]: I0905 06:05:08.299865 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 06:05:08.299859 kubelet[2714]: I0905 06:05:08.299885 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:05:08.300333 kubelet[2714]: I0905 06:05:08.299981 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:05:08.300333 kubelet[2714]: I0905 06:05:08.300045 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:05:08.300333 kubelet[2714]: I0905 06:05:08.300080 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f846e340f21ac6f695fdeec9de33bee7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f846e340f21ac6f695fdeec9de33bee7\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:05:08.300333 kubelet[2714]: I0905 06:05:08.300110 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f846e340f21ac6f695fdeec9de33bee7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f846e340f21ac6f695fdeec9de33bee7\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:05:08.482270 kubelet[2714]: E0905 06:05:08.482209 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:08.482416 kubelet[2714]: E0905 06:05:08.482291 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:08.486557 kubelet[2714]: E0905 06:05:08.486409 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:08.974456 kubelet[2714]: I0905 06:05:08.974414 2714 apiserver.go:52] "Watching apiserver" Sep 5 06:05:08.999292 kubelet[2714]: I0905 06:05:08.999220 2714 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 06:05:09.067441 kubelet[2714]: E0905 06:05:09.067390 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:09.067679 kubelet[2714]: I0905 06:05:09.067638 2714 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 06:05:09.067949 kubelet[2714]: I0905 06:05:09.067920 2714 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 06:05:09.076274 kubelet[2714]: E0905 06:05:09.076035 2714 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 06:05:09.076274 kubelet[2714]: E0905 06:05:09.076273 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:09.076496 kubelet[2714]: E0905 06:05:09.076463 2714 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 06:05:09.076693 kubelet[2714]: E0905 06:05:09.076556 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:09.092504 kubelet[2714]: I0905 06:05:09.092416 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=1.092392635 podStartE2EDuration="1.092392635s" podCreationTimestamp="2025-09-05 06:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:05:09.091547321 +0000 UTC m=+1.556840037" watchObservedRunningTime="2025-09-05 06:05:09.092392635 +0000 UTC m=+1.557685341" Sep 5 06:05:09.107773 kubelet[2714]: I0905 06:05:09.107645 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.107626334 podStartE2EDuration="1.107626334s" podCreationTimestamp="2025-09-05 06:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:05:09.098910891 +0000 UTC m=+1.564203607" watchObservedRunningTime="2025-09-05 06:05:09.107626334 +0000 UTC m=+1.572919050" Sep 5 06:05:09.115432 kubelet[2714]: I0905 06:05:09.115378 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.11536363 podStartE2EDuration="3.11536363s" podCreationTimestamp="2025-09-05 06:05:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:05:09.107772844 +0000 UTC m=+1.573065580" watchObservedRunningTime="2025-09-05 06:05:09.11536363 +0000 UTC m=+1.580656346" Sep 5 06:05:10.068846 kubelet[2714]: E0905 06:05:10.068798 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:10.068846 kubelet[2714]: E0905 06:05:10.068854 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:10.068846 
kubelet[2714]: E0905 06:05:10.068798 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:11.595121 kubelet[2714]: E0905 06:05:11.594700 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:12.072118 kubelet[2714]: E0905 06:05:12.072057 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:12.758879 kubelet[2714]: I0905 06:05:12.758816 2714 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 06:05:12.759439 containerd[1558]: time="2025-09-05T06:05:12.759241642Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 06:05:12.759803 kubelet[2714]: I0905 06:05:12.759458 2714 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 06:05:13.704480 systemd[1]: Created slice kubepods-besteffort-podaf5634f3_23e5_4522_ae90_33de93cfe6d2.slice - libcontainer container kubepods-besteffort-podaf5634f3_23e5_4522_ae90_33de93cfe6d2.slice. 
Sep 5 06:05:13.734291 kubelet[2714]: I0905 06:05:13.734235 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af5634f3-23e5-4522-ae90-33de93cfe6d2-lib-modules\") pod \"kube-proxy-d2ssz\" (UID: \"af5634f3-23e5-4522-ae90-33de93cfe6d2\") " pod="kube-system/kube-proxy-d2ssz" Sep 5 06:05:13.734291 kubelet[2714]: I0905 06:05:13.734279 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af5634f3-23e5-4522-ae90-33de93cfe6d2-kube-proxy\") pod \"kube-proxy-d2ssz\" (UID: \"af5634f3-23e5-4522-ae90-33de93cfe6d2\") " pod="kube-system/kube-proxy-d2ssz" Sep 5 06:05:13.734291 kubelet[2714]: I0905 06:05:13.734292 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af5634f3-23e5-4522-ae90-33de93cfe6d2-xtables-lock\") pod \"kube-proxy-d2ssz\" (UID: \"af5634f3-23e5-4522-ae90-33de93cfe6d2\") " pod="kube-system/kube-proxy-d2ssz" Sep 5 06:05:13.734477 kubelet[2714]: I0905 06:05:13.734307 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg4cr\" (UniqueName: \"kubernetes.io/projected/af5634f3-23e5-4522-ae90-33de93cfe6d2-kube-api-access-cg4cr\") pod \"kube-proxy-d2ssz\" (UID: \"af5634f3-23e5-4522-ae90-33de93cfe6d2\") " pod="kube-system/kube-proxy-d2ssz" Sep 5 06:05:14.188163 systemd[1]: Created slice kubepods-besteffort-podb9ad46a8_48d9_4e76_af1e_b7f9b7c78f4e.slice - libcontainer container kubepods-besteffort-podb9ad46a8_48d9_4e76_af1e_b7f9b7c78f4e.slice. 
Sep 5 06:05:14.237932 kubelet[2714]: I0905 06:05:14.237819 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b9ad46a8-48d9-4e76-af1e-b7f9b7c78f4e-var-lib-calico\") pod \"tigera-operator-755d956888-bpb56\" (UID: \"b9ad46a8-48d9-4e76-af1e-b7f9b7c78f4e\") " pod="tigera-operator/tigera-operator-755d956888-bpb56"
Sep 5 06:05:14.237932 kubelet[2714]: I0905 06:05:14.237897 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn88b\" (UniqueName: \"kubernetes.io/projected/b9ad46a8-48d9-4e76-af1e-b7f9b7c78f4e-kube-api-access-xn88b\") pod \"tigera-operator-755d956888-bpb56\" (UID: \"b9ad46a8-48d9-4e76-af1e-b7f9b7c78f4e\") " pod="tigera-operator/tigera-operator-755d956888-bpb56"
Sep 5 06:05:14.323383 kubelet[2714]: E0905 06:05:14.323315 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:14.324208 containerd[1558]: time="2025-09-05T06:05:14.324165725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d2ssz,Uid:af5634f3-23e5-4522-ae90-33de93cfe6d2,Namespace:kube-system,Attempt:0,}"
Sep 5 06:05:14.348586 containerd[1558]: time="2025-09-05T06:05:14.348474139Z" level=info msg="connecting to shim 52fdbdd06e231534ba429099a2366f35f0dfe4da3427d095af3ecb734f2cb4d2" address="unix:///run/containerd/s/f669ee4fcd062f962d4d7fb84c35140827238f4da7645bba25ea1876e456614e" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:05:14.374889 update_engine[1547]: I20250905 06:05:14.374794 1547 update_attempter.cc:509] Updating boot flags...
Sep 5 06:05:14.387636 systemd[1]: Started cri-containerd-52fdbdd06e231534ba429099a2366f35f0dfe4da3427d095af3ecb734f2cb4d2.scope - libcontainer container 52fdbdd06e231534ba429099a2366f35f0dfe4da3427d095af3ecb734f2cb4d2.
Sep 5 06:05:14.447411 containerd[1558]: time="2025-09-05T06:05:14.447177258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d2ssz,Uid:af5634f3-23e5-4522-ae90-33de93cfe6d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"52fdbdd06e231534ba429099a2366f35f0dfe4da3427d095af3ecb734f2cb4d2\""
Sep 5 06:05:14.451840 kubelet[2714]: E0905 06:05:14.451760 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:14.463435 containerd[1558]: time="2025-09-05T06:05:14.463307574Z" level=info msg="CreateContainer within sandbox \"52fdbdd06e231534ba429099a2366f35f0dfe4da3427d095af3ecb734f2cb4d2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 5 06:05:14.486513 containerd[1558]: time="2025-09-05T06:05:14.486246585Z" level=info msg="Container b1d47488a77f3e88be046eff777afd541dc3078dc6e2f4a018cc01cfa66d7a70: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:05:14.493243 containerd[1558]: time="2025-09-05T06:05:14.493195887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-bpb56,Uid:b9ad46a8-48d9-4e76-af1e-b7f9b7c78f4e,Namespace:tigera-operator,Attempt:0,}"
Sep 5 06:05:14.507242 containerd[1558]: time="2025-09-05T06:05:14.507191655Z" level=info msg="CreateContainer within sandbox \"52fdbdd06e231534ba429099a2366f35f0dfe4da3427d095af3ecb734f2cb4d2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b1d47488a77f3e88be046eff777afd541dc3078dc6e2f4a018cc01cfa66d7a70\""
Sep 5 06:05:14.508975 containerd[1558]: time="2025-09-05T06:05:14.508915713Z" level=info msg="StartContainer for \"b1d47488a77f3e88be046eff777afd541dc3078dc6e2f4a018cc01cfa66d7a70\""
Sep 5 06:05:14.513218 containerd[1558]: time="2025-09-05T06:05:14.512816108Z" level=info msg="connecting to shim b1d47488a77f3e88be046eff777afd541dc3078dc6e2f4a018cc01cfa66d7a70" address="unix:///run/containerd/s/f669ee4fcd062f962d4d7fb84c35140827238f4da7645bba25ea1876e456614e" protocol=ttrpc version=3
Sep 5 06:05:14.568965 systemd[1]: Started cri-containerd-b1d47488a77f3e88be046eff777afd541dc3078dc6e2f4a018cc01cfa66d7a70.scope - libcontainer container b1d47488a77f3e88be046eff777afd541dc3078dc6e2f4a018cc01cfa66d7a70.
Sep 5 06:05:14.937806 containerd[1558]: time="2025-09-05T06:05:14.937730764Z" level=info msg="StartContainer for \"b1d47488a77f3e88be046eff777afd541dc3078dc6e2f4a018cc01cfa66d7a70\" returns successfully"
Sep 5 06:05:15.078846 kubelet[2714]: E0905 06:05:15.078798 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:15.339685 containerd[1558]: time="2025-09-05T06:05:15.339594779Z" level=info msg="connecting to shim 1be160f40c17b3cd5b746382e88ad4f8a1e05eff7b2d2885937c2850e901f68b" address="unix:///run/containerd/s/babf7b5db5e67a505b939a6ee6988590124ed03392df44a05b6cc8ca0a178dee" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:05:15.381045 systemd[1]: Started cri-containerd-1be160f40c17b3cd5b746382e88ad4f8a1e05eff7b2d2885937c2850e901f68b.scope - libcontainer container 1be160f40c17b3cd5b746382e88ad4f8a1e05eff7b2d2885937c2850e901f68b.
Sep 5 06:05:15.437938 containerd[1558]: time="2025-09-05T06:05:15.437886359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-bpb56,Uid:b9ad46a8-48d9-4e76-af1e-b7f9b7c78f4e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1be160f40c17b3cd5b746382e88ad4f8a1e05eff7b2d2885937c2850e901f68b\""
Sep 5 06:05:15.439886 containerd[1558]: time="2025-09-05T06:05:15.439566230Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 5 06:05:16.301697 kubelet[2714]: E0905 06:05:16.301591 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:16.312870 kubelet[2714]: I0905 06:05:16.312814 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d2ssz" podStartSLOduration=3.312795756 podStartE2EDuration="3.312795756s" podCreationTimestamp="2025-09-05 06:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:05:15.327522686 +0000 UTC m=+7.792815402" watchObservedRunningTime="2025-09-05 06:05:16.312795756 +0000 UTC m=+8.778088472"
Sep 5 06:05:16.956064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112588275.mount: Deactivated successfully.
Sep 5 06:05:17.083844 kubelet[2714]: E0905 06:05:17.083782 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:17.575046 containerd[1558]: time="2025-09-05T06:05:17.574956254Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:05:17.575812 containerd[1558]: time="2025-09-05T06:05:17.575749669Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 5 06:05:17.576985 containerd[1558]: time="2025-09-05T06:05:17.576934104Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:05:17.578907 containerd[1558]: time="2025-09-05T06:05:17.578865388Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:05:17.581760 containerd[1558]: time="2025-09-05T06:05:17.580839021Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.141229899s"
Sep 5 06:05:17.581760 containerd[1558]: time="2025-09-05T06:05:17.580884026Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 5 06:05:17.588727 containerd[1558]: time="2025-09-05T06:05:17.588667940Z" level=info msg="CreateContainer within sandbox \"1be160f40c17b3cd5b746382e88ad4f8a1e05eff7b2d2885937c2850e901f68b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 5 06:05:17.600041 containerd[1558]: time="2025-09-05T06:05:17.599972972Z" level=info msg="Container 660461a7a6d55a948a939cdab46e1b5d2e0ac9917f35c9ee05261ca4d0e9b214: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:05:17.604026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444806006.mount: Deactivated successfully.
Sep 5 06:05:17.609460 containerd[1558]: time="2025-09-05T06:05:17.609399631Z" level=info msg="CreateContainer within sandbox \"1be160f40c17b3cd5b746382e88ad4f8a1e05eff7b2d2885937c2850e901f68b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"660461a7a6d55a948a939cdab46e1b5d2e0ac9917f35c9ee05261ca4d0e9b214\""
Sep 5 06:05:17.609914 containerd[1558]: time="2025-09-05T06:05:17.609882296Z" level=info msg="StartContainer for \"660461a7a6d55a948a939cdab46e1b5d2e0ac9917f35c9ee05261ca4d0e9b214\""
Sep 5 06:05:17.610967 containerd[1558]: time="2025-09-05T06:05:17.610934101Z" level=info msg="connecting to shim 660461a7a6d55a948a939cdab46e1b5d2e0ac9917f35c9ee05261ca4d0e9b214" address="unix:///run/containerd/s/babf7b5db5e67a505b939a6ee6988590124ed03392df44a05b6cc8ca0a178dee" protocol=ttrpc version=3
Sep 5 06:05:17.664915 systemd[1]: Started cri-containerd-660461a7a6d55a948a939cdab46e1b5d2e0ac9917f35c9ee05261ca4d0e9b214.scope - libcontainer container 660461a7a6d55a948a939cdab46e1b5d2e0ac9917f35c9ee05261ca4d0e9b214.
Sep 5 06:05:17.702605 containerd[1558]: time="2025-09-05T06:05:17.702558152Z" level=info msg="StartContainer for \"660461a7a6d55a948a939cdab46e1b5d2e0ac9917f35c9ee05261ca4d0e9b214\" returns successfully"
Sep 5 06:05:19.683772 kubelet[2714]: E0905 06:05:19.682495 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:19.732348 kubelet[2714]: I0905 06:05:19.732262 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-bpb56" podStartSLOduration=3.588514011 podStartE2EDuration="5.732236339s" podCreationTimestamp="2025-09-05 06:05:14 +0000 UTC" firstStartedPulling="2025-09-05 06:05:15.439194945 +0000 UTC m=+7.904487661" lastFinishedPulling="2025-09-05 06:05:17.582917273 +0000 UTC m=+10.048209989" observedRunningTime="2025-09-05 06:05:18.123128706 +0000 UTC m=+10.588421422" watchObservedRunningTime="2025-09-05 06:05:19.732236339 +0000 UTC m=+12.197529055"
Sep 5 06:05:23.853884 sudo[1766]: pam_unix(sudo:session): session closed for user root
Sep 5 06:05:23.860761 sshd[1765]: Connection closed by 10.0.0.1 port 36714
Sep 5 06:05:23.861893 sshd-session[1762]: pam_unix(sshd:session): session closed for user core
Sep 5 06:05:23.870633 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit.
Sep 5 06:05:23.871842 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:36714.service: Deactivated successfully.
Sep 5 06:05:23.877724 systemd[1]: session-7.scope: Deactivated successfully.
Sep 5 06:05:23.880877 systemd[1]: session-7.scope: Consumed 6.474s CPU time, 227.8M memory peak.
Sep 5 06:05:23.888524 systemd-logind[1540]: Removed session 7.
Sep 5 06:05:26.700679 systemd[1]: Created slice kubepods-besteffort-podd44356c4_d81e_47e7_86fa_a3fbdf2c5bbb.slice - libcontainer container kubepods-besteffort-podd44356c4_d81e_47e7_86fa_a3fbdf2c5bbb.slice.
Sep 5 06:05:26.723822 kubelet[2714]: I0905 06:05:26.723598 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d44356c4-d81e-47e7-86fa-a3fbdf2c5bbb-typha-certs\") pod \"calico-typha-78d4dc974f-8wcjv\" (UID: \"d44356c4-d81e-47e7-86fa-a3fbdf2c5bbb\") " pod="calico-system/calico-typha-78d4dc974f-8wcjv"
Sep 5 06:05:26.724340 kubelet[2714]: I0905 06:05:26.724066 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbhc\" (UniqueName: \"kubernetes.io/projected/d44356c4-d81e-47e7-86fa-a3fbdf2c5bbb-kube-api-access-qnbhc\") pod \"calico-typha-78d4dc974f-8wcjv\" (UID: \"d44356c4-d81e-47e7-86fa-a3fbdf2c5bbb\") " pod="calico-system/calico-typha-78d4dc974f-8wcjv"
Sep 5 06:05:26.724340 kubelet[2714]: I0905 06:05:26.724162 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d44356c4-d81e-47e7-86fa-a3fbdf2c5bbb-tigera-ca-bundle\") pod \"calico-typha-78d4dc974f-8wcjv\" (UID: \"d44356c4-d81e-47e7-86fa-a3fbdf2c5bbb\") " pod="calico-system/calico-typha-78d4dc974f-8wcjv"
Sep 5 06:05:26.906652 systemd[1]: Created slice kubepods-besteffort-pode7f007b5_2bea_40cf_9279_4c1f56fa5ac9.slice - libcontainer container kubepods-besteffort-pode7f007b5_2bea_40cf_9279_4c1f56fa5ac9.slice.
Sep 5 06:05:26.926537 kubelet[2714]: I0905 06:05:26.926468 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e7f007b5-2bea-40cf-9279-4c1f56fa5ac9-cni-log-dir\") pod \"calico-node-jg9hw\" (UID: \"e7f007b5-2bea-40cf-9279-4c1f56fa5ac9\") " pod="calico-system/calico-node-jg9hw"
Sep 5 06:05:26.926537 kubelet[2714]: I0905 06:05:26.926515 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7f007b5-2bea-40cf-9279-4c1f56fa5ac9-tigera-ca-bundle\") pod \"calico-node-jg9hw\" (UID: \"e7f007b5-2bea-40cf-9279-4c1f56fa5ac9\") " pod="calico-system/calico-node-jg9hw"
Sep 5 06:05:26.926537 kubelet[2714]: I0905 06:05:26.926546 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7f007b5-2bea-40cf-9279-4c1f56fa5ac9-xtables-lock\") pod \"calico-node-jg9hw\" (UID: \"e7f007b5-2bea-40cf-9279-4c1f56fa5ac9\") " pod="calico-system/calico-node-jg9hw"
Sep 5 06:05:26.926537 kubelet[2714]: I0905 06:05:26.926570 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7f007b5-2bea-40cf-9279-4c1f56fa5ac9-lib-modules\") pod \"calico-node-jg9hw\" (UID: \"e7f007b5-2bea-40cf-9279-4c1f56fa5ac9\") " pod="calico-system/calico-node-jg9hw"
Sep 5 06:05:26.926537 kubelet[2714]: I0905 06:05:26.926585 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e7f007b5-2bea-40cf-9279-4c1f56fa5ac9-policysync\") pod \"calico-node-jg9hw\" (UID: \"e7f007b5-2bea-40cf-9279-4c1f56fa5ac9\") " pod="calico-system/calico-node-jg9hw"
Sep 5 06:05:26.927005 kubelet[2714]: I0905 06:05:26.926600 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7f007b5-2bea-40cf-9279-4c1f56fa5ac9-var-lib-calico\") pod \"calico-node-jg9hw\" (UID: \"e7f007b5-2bea-40cf-9279-4c1f56fa5ac9\") " pod="calico-system/calico-node-jg9hw"
Sep 5 06:05:26.927005 kubelet[2714]: I0905 06:05:26.926616 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2jb\" (UniqueName: \"kubernetes.io/projected/e7f007b5-2bea-40cf-9279-4c1f56fa5ac9-kube-api-access-zl2jb\") pod \"calico-node-jg9hw\" (UID: \"e7f007b5-2bea-40cf-9279-4c1f56fa5ac9\") " pod="calico-system/calico-node-jg9hw"
Sep 5 06:05:26.927005 kubelet[2714]: I0905 06:05:26.926761 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e7f007b5-2bea-40cf-9279-4c1f56fa5ac9-flexvol-driver-host\") pod \"calico-node-jg9hw\" (UID: \"e7f007b5-2bea-40cf-9279-4c1f56fa5ac9\") " pod="calico-system/calico-node-jg9hw"
Sep 5 06:05:26.927005 kubelet[2714]: I0905 06:05:26.926823 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e7f007b5-2bea-40cf-9279-4c1f56fa5ac9-var-run-calico\") pod \"calico-node-jg9hw\" (UID: \"e7f007b5-2bea-40cf-9279-4c1f56fa5ac9\") " pod="calico-system/calico-node-jg9hw"
Sep 5 06:05:26.927005 kubelet[2714]: I0905 06:05:26.926860 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e7f007b5-2bea-40cf-9279-4c1f56fa5ac9-cni-bin-dir\") pod \"calico-node-jg9hw\" (UID: \"e7f007b5-2bea-40cf-9279-4c1f56fa5ac9\") " pod="calico-system/calico-node-jg9hw"
Sep 5 06:05:26.927157 kubelet[2714]: I0905 06:05:26.926876 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e7f007b5-2bea-40cf-9279-4c1f56fa5ac9-node-certs\") pod \"calico-node-jg9hw\" (UID: \"e7f007b5-2bea-40cf-9279-4c1f56fa5ac9\") " pod="calico-system/calico-node-jg9hw"
Sep 5 06:05:26.927157 kubelet[2714]: I0905 06:05:26.926893 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e7f007b5-2bea-40cf-9279-4c1f56fa5ac9-cni-net-dir\") pod \"calico-node-jg9hw\" (UID: \"e7f007b5-2bea-40cf-9279-4c1f56fa5ac9\") " pod="calico-system/calico-node-jg9hw"
Sep 5 06:05:27.005598 kubelet[2714]: E0905 06:05:27.005527 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:27.006265 containerd[1558]: time="2025-09-05T06:05:27.006192360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78d4dc974f-8wcjv,Uid:d44356c4-d81e-47e7-86fa-a3fbdf2c5bbb,Namespace:calico-system,Attempt:0,}"
Sep 5 06:05:27.033385 kubelet[2714]: E0905 06:05:27.032710 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.033385 kubelet[2714]: W0905 06:05:27.032747 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.033385 kubelet[2714]: E0905 06:05:27.032789 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.036605 kubelet[2714]: E0905 06:05:27.036579 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.036605 kubelet[2714]: W0905 06:05:27.036601 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.036716 kubelet[2714]: E0905 06:05:27.036623 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.043842 kubelet[2714]: E0905 06:05:27.043801 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.043842 kubelet[2714]: W0905 06:05:27.043830 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.043842 kubelet[2714]: E0905 06:05:27.043853 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.049512 containerd[1558]: time="2025-09-05T06:05:27.049471705Z" level=info msg="connecting to shim a061dbba683f7242d2deace9c32edbfc4a9675c406745a6ed34d8397bf75363f" address="unix:///run/containerd/s/ece5193ba40c1f4fa63b970fcef687fec0b9c4d367d781265152ffecebdce2da" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:05:27.079966 systemd[1]: Started cri-containerd-a061dbba683f7242d2deace9c32edbfc4a9675c406745a6ed34d8397bf75363f.scope - libcontainer container a061dbba683f7242d2deace9c32edbfc4a9675c406745a6ed34d8397bf75363f.
Sep 5 06:05:27.132273 containerd[1558]: time="2025-09-05T06:05:27.132217332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78d4dc974f-8wcjv,Uid:d44356c4-d81e-47e7-86fa-a3fbdf2c5bbb,Namespace:calico-system,Attempt:0,} returns sandbox id \"a061dbba683f7242d2deace9c32edbfc4a9675c406745a6ed34d8397bf75363f\""
Sep 5 06:05:27.133586 kubelet[2714]: E0905 06:05:27.133545 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:27.134954 containerd[1558]: time="2025-09-05T06:05:27.134925191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 5 06:05:27.193171 kubelet[2714]: E0905 06:05:27.193104 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pc5f5" podUID="d38e615f-d43f-4af2-a8f3-d11048e4a95a"
Sep 5 06:05:27.210450 containerd[1558]: time="2025-09-05T06:05:27.210395231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jg9hw,Uid:e7f007b5-2bea-40cf-9279-4c1f56fa5ac9,Namespace:calico-system,Attempt:0,}"
Sep 5 06:05:27.224163 kubelet[2714]: E0905 06:05:27.224102 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.224163 kubelet[2714]: W0905 06:05:27.224140 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.224163 kubelet[2714]: E0905 06:05:27.224164 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.224753 kubelet[2714]: E0905 06:05:27.224715 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.225007 kubelet[2714]: W0905 06:05:27.224984 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.225007 kubelet[2714]: E0905 06:05:27.225005 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.226312 kubelet[2714]: E0905 06:05:27.226283 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.226312 kubelet[2714]: W0905 06:05:27.226300 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.226312 kubelet[2714]: E0905 06:05:27.226310 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.228071 kubelet[2714]: E0905 06:05:27.228042 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.228071 kubelet[2714]: W0905 06:05:27.228062 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.228071 kubelet[2714]: E0905 06:05:27.228071 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.228916 kubelet[2714]: E0905 06:05:27.228877 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.228916 kubelet[2714]: W0905 06:05:27.228901 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.228916 kubelet[2714]: E0905 06:05:27.228913 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.230848 kubelet[2714]: E0905 06:05:27.230802 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.230848 kubelet[2714]: W0905 06:05:27.230833 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.230932 kubelet[2714]: E0905 06:05:27.230861 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.231168 kubelet[2714]: E0905 06:05:27.231147 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.231168 kubelet[2714]: W0905 06:05:27.231159 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.231168 kubelet[2714]: E0905 06:05:27.231168 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.231383 kubelet[2714]: E0905 06:05:27.231367 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.231383 kubelet[2714]: W0905 06:05:27.231379 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.231436 kubelet[2714]: E0905 06:05:27.231388 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.231636 kubelet[2714]: E0905 06:05:27.231619 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.231636 kubelet[2714]: W0905 06:05:27.231634 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.231693 kubelet[2714]: E0905 06:05:27.231643 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.231844 kubelet[2714]: E0905 06:05:27.231828 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.231844 kubelet[2714]: W0905 06:05:27.231840 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.231902 kubelet[2714]: E0905 06:05:27.231848 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.234766 kubelet[2714]: E0905 06:05:27.232882 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.234766 kubelet[2714]: W0905 06:05:27.232898 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.234766 kubelet[2714]: E0905 06:05:27.232910 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.234766 kubelet[2714]: E0905 06:05:27.233079 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.234766 kubelet[2714]: W0905 06:05:27.233086 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.234766 kubelet[2714]: E0905 06:05:27.233093 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.234766 kubelet[2714]: E0905 06:05:27.233898 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.234766 kubelet[2714]: W0905 06:05:27.233909 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.234766 kubelet[2714]: E0905 06:05:27.233920 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.235020 kubelet[2714]: E0905 06:05:27.234794 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.235020 kubelet[2714]: W0905 06:05:27.234805 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.235020 kubelet[2714]: E0905 06:05:27.234817 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.235079 kubelet[2714]: E0905 06:05:27.235038 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.235079 kubelet[2714]: W0905 06:05:27.235046 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.235079 kubelet[2714]: E0905 06:05:27.235056 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.238685 kubelet[2714]: E0905 06:05:27.235793 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.238685 kubelet[2714]: W0905 06:05:27.235806 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.238685 kubelet[2714]: E0905 06:05:27.235817 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.240154 kubelet[2714]: E0905 06:05:27.240126 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.240154 kubelet[2714]: W0905 06:05:27.240148 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.240232 kubelet[2714]: E0905 06:05:27.240167 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.240478 kubelet[2714]: E0905 06:05:27.240461 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.240478 kubelet[2714]: W0905 06:05:27.240474 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.240536 kubelet[2714]: E0905 06:05:27.240483 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.241832 kubelet[2714]: E0905 06:05:27.241812 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.241832 kubelet[2714]: W0905 06:05:27.241826 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.241900 kubelet[2714]: E0905 06:05:27.241838 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 5 06:05:27.244025 kubelet[2714]: E0905 06:05:27.243998 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.244025 kubelet[2714]: W0905 06:05:27.244017 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.244108 kubelet[2714]: E0905 06:05:27.244031 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.244379 kubelet[2714]: E0905 06:05:27.244361 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.244379 kubelet[2714]: W0905 06:05:27.244379 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.244427 kubelet[2714]: E0905 06:05:27.244388 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.245958 kubelet[2714]: I0905 06:05:27.245924 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d38e615f-d43f-4af2-a8f3-d11048e4a95a-kubelet-dir\") pod \"csi-node-driver-pc5f5\" (UID: \"d38e615f-d43f-4af2-a8f3-d11048e4a95a\") " pod="calico-system/csi-node-driver-pc5f5" Sep 5 06:05:27.246052 kubelet[2714]: E0905 06:05:27.246037 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.246052 kubelet[2714]: W0905 06:05:27.246049 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.246120 kubelet[2714]: E0905 06:05:27.246058 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.246283 kubelet[2714]: E0905 06:05:27.246264 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.246283 kubelet[2714]: W0905 06:05:27.246275 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.246349 kubelet[2714]: E0905 06:05:27.246284 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.248598 kubelet[2714]: E0905 06:05:27.248552 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.248598 kubelet[2714]: W0905 06:05:27.248592 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.248691 kubelet[2714]: E0905 06:05:27.248623 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.248691 kubelet[2714]: I0905 06:05:27.248678 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wrvb\" (UniqueName: \"kubernetes.io/projected/d38e615f-d43f-4af2-a8f3-d11048e4a95a-kube-api-access-7wrvb\") pod \"csi-node-driver-pc5f5\" (UID: \"d38e615f-d43f-4af2-a8f3-d11048e4a95a\") " pod="calico-system/csi-node-driver-pc5f5" Sep 5 06:05:27.251958 kubelet[2714]: E0905 06:05:27.251900 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.251958 kubelet[2714]: W0905 06:05:27.251942 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.252118 kubelet[2714]: E0905 06:05:27.251972 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.254680 kubelet[2714]: E0905 06:05:27.254645 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.254680 kubelet[2714]: W0905 06:05:27.254668 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.254813 kubelet[2714]: E0905 06:05:27.254705 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.255000 containerd[1558]: time="2025-09-05T06:05:27.254949340Z" level=info msg="connecting to shim 91d0ab803fe824af3763af69d5ac8d3fac7e6aace22820049fd5ae9c9cda99ae" address="unix:///run/containerd/s/9e47caf4afc2333c52d78c0500861d4ac96a3d004f8d01df7975244603a1438b" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:27.255051 kubelet[2714]: E0905 06:05:27.255008 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.255051 kubelet[2714]: W0905 06:05:27.255017 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.255051 kubelet[2714]: E0905 06:05:27.255027 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.255117 kubelet[2714]: I0905 06:05:27.255056 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d38e615f-d43f-4af2-a8f3-d11048e4a95a-registration-dir\") pod \"csi-node-driver-pc5f5\" (UID: \"d38e615f-d43f-4af2-a8f3-d11048e4a95a\") " pod="calico-system/csi-node-driver-pc5f5" Sep 5 06:05:27.256067 kubelet[2714]: E0905 06:05:27.255976 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.256067 kubelet[2714]: W0905 06:05:27.255993 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.256067 kubelet[2714]: E0905 06:05:27.256004 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.256067 kubelet[2714]: I0905 06:05:27.256033 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d38e615f-d43f-4af2-a8f3-d11048e4a95a-socket-dir\") pod \"csi-node-driver-pc5f5\" (UID: \"d38e615f-d43f-4af2-a8f3-d11048e4a95a\") " pod="calico-system/csi-node-driver-pc5f5" Sep 5 06:05:27.257308 kubelet[2714]: E0905 06:05:27.257283 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.257308 kubelet[2714]: W0905 06:05:27.257300 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.257377 kubelet[2714]: E0905 06:05:27.257310 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.257377 kubelet[2714]: I0905 06:05:27.257329 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d38e615f-d43f-4af2-a8f3-d11048e4a95a-varrun\") pod \"csi-node-driver-pc5f5\" (UID: \"d38e615f-d43f-4af2-a8f3-d11048e4a95a\") " pod="calico-system/csi-node-driver-pc5f5" Sep 5 06:05:27.261325 kubelet[2714]: E0905 06:05:27.261292 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.261325 kubelet[2714]: W0905 06:05:27.261310 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.261325 kubelet[2714]: E0905 06:05:27.261321 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.262766 kubelet[2714]: E0905 06:05:27.261578 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.262766 kubelet[2714]: W0905 06:05:27.261590 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.262766 kubelet[2714]: E0905 06:05:27.261597 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.262766 kubelet[2714]: E0905 06:05:27.261899 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.262766 kubelet[2714]: W0905 06:05:27.261909 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.262766 kubelet[2714]: E0905 06:05:27.261918 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.262766 kubelet[2714]: E0905 06:05:27.262190 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.262766 kubelet[2714]: W0905 06:05:27.262201 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.262766 kubelet[2714]: E0905 06:05:27.262213 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.262766 kubelet[2714]: E0905 06:05:27.262772 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.263121 kubelet[2714]: W0905 06:05:27.262782 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.263121 kubelet[2714]: E0905 06:05:27.262792 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.264846 kubelet[2714]: E0905 06:05:27.264816 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.264846 kubelet[2714]: W0905 06:05:27.264835 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.264846 kubelet[2714]: E0905 06:05:27.264844 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.298387 systemd[1]: Started cri-containerd-91d0ab803fe824af3763af69d5ac8d3fac7e6aace22820049fd5ae9c9cda99ae.scope - libcontainer container 91d0ab803fe824af3763af69d5ac8d3fac7e6aace22820049fd5ae9c9cda99ae. 
Sep 5 06:05:27.348727 containerd[1558]: time="2025-09-05T06:05:27.348648313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jg9hw,Uid:e7f007b5-2bea-40cf-9279-4c1f56fa5ac9,Namespace:calico-system,Attempt:0,} returns sandbox id \"91d0ab803fe824af3763af69d5ac8d3fac7e6aace22820049fd5ae9c9cda99ae\"" Sep 5 06:05:27.357948 kubelet[2714]: E0905 06:05:27.357910 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.357948 kubelet[2714]: W0905 06:05:27.357930 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.357948 kubelet[2714]: E0905 06:05:27.357955 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.358994 kubelet[2714]: E0905 06:05:27.358162 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.358994 kubelet[2714]: W0905 06:05:27.358182 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.358994 kubelet[2714]: E0905 06:05:27.358192 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.358994 kubelet[2714]: E0905 06:05:27.358415 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.358994 kubelet[2714]: W0905 06:05:27.358425 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.358994 kubelet[2714]: E0905 06:05:27.358436 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.358994 kubelet[2714]: E0905 06:05:27.358976 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.358994 kubelet[2714]: W0905 06:05:27.358985 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.359467 kubelet[2714]: E0905 06:05:27.358995 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.359467 kubelet[2714]: E0905 06:05:27.359310 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.359467 kubelet[2714]: W0905 06:05:27.359319 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.359467 kubelet[2714]: E0905 06:05:27.359328 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.359960 kubelet[2714]: E0905 06:05:27.359918 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.360050 kubelet[2714]: W0905 06:05:27.359956 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.360050 kubelet[2714]: E0905 06:05:27.359988 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.360283 kubelet[2714]: E0905 06:05:27.360249 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.360283 kubelet[2714]: W0905 06:05:27.360260 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.360283 kubelet[2714]: E0905 06:05:27.360271 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.360574 kubelet[2714]: E0905 06:05:27.360539 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.360574 kubelet[2714]: W0905 06:05:27.360554 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.360574 kubelet[2714]: E0905 06:05:27.360565 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.360978 kubelet[2714]: E0905 06:05:27.360926 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.360978 kubelet[2714]: W0905 06:05:27.360973 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.361100 kubelet[2714]: E0905 06:05:27.360987 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.361356 kubelet[2714]: E0905 06:05:27.361331 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.361356 kubelet[2714]: W0905 06:05:27.361349 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.361435 kubelet[2714]: E0905 06:05:27.361362 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.361655 kubelet[2714]: E0905 06:05:27.361632 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.361655 kubelet[2714]: W0905 06:05:27.361647 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.361727 kubelet[2714]: E0905 06:05:27.361659 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.361905 kubelet[2714]: E0905 06:05:27.361877 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.361905 kubelet[2714]: W0905 06:05:27.361893 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.361905 kubelet[2714]: E0905 06:05:27.361903 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.362148 kubelet[2714]: E0905 06:05:27.362128 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.362148 kubelet[2714]: W0905 06:05:27.362143 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.362206 kubelet[2714]: E0905 06:05:27.362154 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.362397 kubelet[2714]: E0905 06:05:27.362376 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.362397 kubelet[2714]: W0905 06:05:27.362393 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.362469 kubelet[2714]: E0905 06:05:27.362406 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.362677 kubelet[2714]: E0905 06:05:27.362658 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.362716 kubelet[2714]: W0905 06:05:27.362673 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.362716 kubelet[2714]: E0905 06:05:27.362711 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.363005 kubelet[2714]: E0905 06:05:27.362986 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.363045 kubelet[2714]: W0905 06:05:27.362999 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.363045 kubelet[2714]: E0905 06:05:27.363032 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.363392 kubelet[2714]: E0905 06:05:27.363355 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.363392 kubelet[2714]: W0905 06:05:27.363373 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.363392 kubelet[2714]: E0905 06:05:27.363387 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.363794 kubelet[2714]: E0905 06:05:27.363721 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.363794 kubelet[2714]: W0905 06:05:27.363758 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.363794 kubelet[2714]: E0905 06:05:27.363768 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:27.364206 kubelet[2714]: E0905 06:05:27.364188 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.364206 kubelet[2714]: W0905 06:05:27.364201 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.364276 kubelet[2714]: E0905 06:05:27.364212 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:27.364469 kubelet[2714]: E0905 06:05:27.364451 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:27.364469 kubelet[2714]: W0905 06:05:27.364464 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:27.364543 kubelet[2714]: E0905 06:05:27.364476 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 5 06:05:27.364922 kubelet[2714]: E0905 06:05:27.364788 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.364922 kubelet[2714]: W0905 06:05:27.364802 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.364922 kubelet[2714]: E0905 06:05:27.364815 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.365054 kubelet[2714]: E0905 06:05:27.365036 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.365054 kubelet[2714]: W0905 06:05:27.365050 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.365114 kubelet[2714]: E0905 06:05:27.365061 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.365308 kubelet[2714]: E0905 06:05:27.365292 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.365308 kubelet[2714]: W0905 06:05:27.365303 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.365357 kubelet[2714]: E0905 06:05:27.365312 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.365582 kubelet[2714]: E0905 06:05:27.365565 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.365582 kubelet[2714]: W0905 06:05:27.365577 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.365636 kubelet[2714]: E0905 06:05:27.365586 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.365894 kubelet[2714]: E0905 06:05:27.365877 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.365894 kubelet[2714]: W0905 06:05:27.365889 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.365974 kubelet[2714]: E0905 06:05:27.365898 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:27.375543 kubelet[2714]: E0905 06:05:27.375481 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:27.375543 kubelet[2714]: W0905 06:05:27.375508 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:27.375543 kubelet[2714]: E0905 06:05:27.375541 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:29.038307 kubelet[2714]: E0905 06:05:29.038210 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pc5f5" podUID="d38e615f-d43f-4af2-a8f3-d11048e4a95a"
Sep 5 06:05:29.782626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2996290760.mount: Deactivated successfully.
Sep 5 06:05:31.037965 kubelet[2714]: E0905 06:05:31.037895 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pc5f5" podUID="d38e615f-d43f-4af2-a8f3-d11048e4a95a"
Sep 5 06:05:31.953077 containerd[1558]: time="2025-09-05T06:05:31.953002931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:05:31.953682 containerd[1558]: time="2025-09-05T06:05:31.953645191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 5 06:05:31.954873 containerd[1558]: time="2025-09-05T06:05:31.954837478Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:05:31.956834 containerd[1558]: time="2025-09-05T06:05:31.956798292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:05:31.957377 containerd[1558]: time="2025-09-05T06:05:31.957328912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 4.822371691s"
Sep 5 06:05:31.957377 containerd[1558]: time="2025-09-05T06:05:31.957374788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 5 06:05:31.958403 containerd[1558]: time="2025-09-05T06:05:31.958371657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 5 06:05:31.972061 containerd[1558]: time="2025-09-05T06:05:31.972017388Z" level=info msg="CreateContainer within sandbox \"a061dbba683f7242d2deace9c32edbfc4a9675c406745a6ed34d8397bf75363f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 5 06:05:31.980418 containerd[1558]: time="2025-09-05T06:05:31.980375416Z" level=info msg="Container aa1f6e36d70e94344379fb4544dd8ca029d170ac108a533f94df664554259959: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:05:31.989348 containerd[1558]: time="2025-09-05T06:05:31.989310362Z" level=info msg="CreateContainer within sandbox \"a061dbba683f7242d2deace9c32edbfc4a9675c406745a6ed34d8397bf75363f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"aa1f6e36d70e94344379fb4544dd8ca029d170ac108a533f94df664554259959\""
Sep 5 06:05:31.989825 containerd[1558]: time="2025-09-05T06:05:31.989784996Z" level=info msg="StartContainer for \"aa1f6e36d70e94344379fb4544dd8ca029d170ac108a533f94df664554259959\""
Sep 5 06:05:31.992758 containerd[1558]: time="2025-09-05T06:05:31.991812716Z" level=info msg="connecting to shim aa1f6e36d70e94344379fb4544dd8ca029d170ac108a533f94df664554259959" address="unix:///run/containerd/s/ece5193ba40c1f4fa63b970fcef687fec0b9c4d367d781265152ffecebdce2da" protocol=ttrpc version=3
Sep 5 06:05:32.015899 systemd[1]: Started cri-containerd-aa1f6e36d70e94344379fb4544dd8ca029d170ac108a533f94df664554259959.scope - libcontainer container aa1f6e36d70e94344379fb4544dd8ca029d170ac108a533f94df664554259959.
Sep 5 06:05:32.071507 containerd[1558]: time="2025-09-05T06:05:32.071448940Z" level=info msg="StartContainer for \"aa1f6e36d70e94344379fb4544dd8ca029d170ac108a533f94df664554259959\" returns successfully"
Sep 5 06:05:32.125016 kubelet[2714]: E0905 06:05:32.124975 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:32.142757 kubelet[2714]: I0905 06:05:32.142660 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78d4dc974f-8wcjv" podStartSLOduration=1.318993718 podStartE2EDuration="6.142642806s" podCreationTimestamp="2025-09-05 06:05:26 +0000 UTC" firstStartedPulling="2025-09-05 06:05:27.134589998 +0000 UTC m=+19.599882704" lastFinishedPulling="2025-09-05 06:05:31.958239076 +0000 UTC m=+24.423531792" observedRunningTime="2025-09-05 06:05:32.14223638 +0000 UTC m=+24.607529096" watchObservedRunningTime="2025-09-05 06:05:32.142642806 +0000 UTC m=+24.607935522"
Sep 5 06:05:32.176246 kubelet[2714]: E0905 06:05:32.176194 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.176246 kubelet[2714]: W0905 06:05:32.176224 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.176246 kubelet[2714]: E0905 06:05:32.176249 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.176648 kubelet[2714]: E0905 06:05:32.176426 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.176648 kubelet[2714]: W0905 06:05:32.176445 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.176648 kubelet[2714]: E0905 06:05:32.176454 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.176648 kubelet[2714]: E0905 06:05:32.176625 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.176648 kubelet[2714]: W0905 06:05:32.176632 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.176648 kubelet[2714]: E0905 06:05:32.176640 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.177064 kubelet[2714]: E0905 06:05:32.176846 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.177064 kubelet[2714]: W0905 06:05:32.176855 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.177064 kubelet[2714]: E0905 06:05:32.176863 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.177064 kubelet[2714]: E0905 06:05:32.177032 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.177064 kubelet[2714]: W0905 06:05:32.177040 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.177064 kubelet[2714]: E0905 06:05:32.177048 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.179753 kubelet[2714]: E0905 06:05:32.177925 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.179753 kubelet[2714]: W0905 06:05:32.177938 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.179753 kubelet[2714]: E0905 06:05:32.177948 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.179753 kubelet[2714]: E0905 06:05:32.178130 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.179753 kubelet[2714]: W0905 06:05:32.178138 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.179753 kubelet[2714]: E0905 06:05:32.178147 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.179753 kubelet[2714]: E0905 06:05:32.178324 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.179753 kubelet[2714]: W0905 06:05:32.178332 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.179753 kubelet[2714]: E0905 06:05:32.178341 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.179753 kubelet[2714]: E0905 06:05:32.178513 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.180014 kubelet[2714]: W0905 06:05:32.178520 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.180014 kubelet[2714]: E0905 06:05:32.178529 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.180014 kubelet[2714]: E0905 06:05:32.178685 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.180014 kubelet[2714]: W0905 06:05:32.178694 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.180014 kubelet[2714]: E0905 06:05:32.178701 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.180014 kubelet[2714]: E0905 06:05:32.178866 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.180014 kubelet[2714]: W0905 06:05:32.178874 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.180014 kubelet[2714]: E0905 06:05:32.178882 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.180014 kubelet[2714]: E0905 06:05:32.179025 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.180014 kubelet[2714]: W0905 06:05:32.179031 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.180234 kubelet[2714]: E0905 06:05:32.179039 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.180234 kubelet[2714]: E0905 06:05:32.179187 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.180234 kubelet[2714]: W0905 06:05:32.179194 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.180234 kubelet[2714]: E0905 06:05:32.179202 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.180234 kubelet[2714]: E0905 06:05:32.179344 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.180234 kubelet[2714]: W0905 06:05:32.179351 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.180234 kubelet[2714]: E0905 06:05:32.179358 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.180234 kubelet[2714]: E0905 06:05:32.179513 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.180234 kubelet[2714]: W0905 06:05:32.179520 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.180234 kubelet[2714]: E0905 06:05:32.179528 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.191838 kubelet[2714]: E0905 06:05:32.191778 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.191838 kubelet[2714]: W0905 06:05:32.191829 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.191838 kubelet[2714]: E0905 06:05:32.191851 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.192089 kubelet[2714]: E0905 06:05:32.192069 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.192089 kubelet[2714]: W0905 06:05:32.192083 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.192170 kubelet[2714]: E0905 06:05:32.192092 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.192574 kubelet[2714]: E0905 06:05:32.192552 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.192574 kubelet[2714]: W0905 06:05:32.192567 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.192634 kubelet[2714]: E0905 06:05:32.192578 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.193371 kubelet[2714]: E0905 06:05:32.193347 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.193371 kubelet[2714]: W0905 06:05:32.193364 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.193454 kubelet[2714]: E0905 06:05:32.193375 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.193708 kubelet[2714]: E0905 06:05:32.193686 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.193708 kubelet[2714]: W0905 06:05:32.193703 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.193777 kubelet[2714]: E0905 06:05:32.193712 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.194761 kubelet[2714]: E0905 06:05:32.194690 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.194761 kubelet[2714]: W0905 06:05:32.194704 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.194761 kubelet[2714]: E0905 06:05:32.194715 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.194973 kubelet[2714]: E0905 06:05:32.194951 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.194973 kubelet[2714]: W0905 06:05:32.194966 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.195026 kubelet[2714]: E0905 06:05:32.194975 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.196814 kubelet[2714]: E0905 06:05:32.196788 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.196814 kubelet[2714]: W0905 06:05:32.196807 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.196814 kubelet[2714]: E0905 06:05:32.196818 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.197055 kubelet[2714]: E0905 06:05:32.197033 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.197055 kubelet[2714]: W0905 06:05:32.197048 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.197055 kubelet[2714]: E0905 06:05:32.197057 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.197319 kubelet[2714]: E0905 06:05:32.197291 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.197319 kubelet[2714]: W0905 06:05:32.197306 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.197319 kubelet[2714]: E0905 06:05:32.197316 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.197679 kubelet[2714]: E0905 06:05:32.197581 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.197679 kubelet[2714]: W0905 06:05:32.197594 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.197679 kubelet[2714]: E0905 06:05:32.197603 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.199842 kubelet[2714]: E0905 06:05:32.199809 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.199842 kubelet[2714]: W0905 06:05:32.199832 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.199842 kubelet[2714]: E0905 06:05:32.199845 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.200090 kubelet[2714]: E0905 06:05:32.200068 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.200090 kubelet[2714]: W0905 06:05:32.200083 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.200147 kubelet[2714]: E0905 06:05:32.200092 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.200522 kubelet[2714]: E0905 06:05:32.200500 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.200522 kubelet[2714]: W0905 06:05:32.200515 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.200586 kubelet[2714]: E0905 06:05:32.200525 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.201992 kubelet[2714]: E0905 06:05:32.201960 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.201992 kubelet[2714]: W0905 06:05:32.201978 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.201992 kubelet[2714]: E0905 06:05:32.201988 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.202266 kubelet[2714]: E0905 06:05:32.202238 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.202266 kubelet[2714]: W0905 06:05:32.202257 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.202419 kubelet[2714]: E0905 06:05:32.202270 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.202525 kubelet[2714]: E0905 06:05:32.202502 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.202559 kubelet[2714]: W0905 06:05:32.202531 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.202559 kubelet[2714]: E0905 06:05:32.202541 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:32.202937 kubelet[2714]: E0905 06:05:32.202915 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:32.202937 kubelet[2714]: W0905 06:05:32.202930 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:32.202999 kubelet[2714]: E0905 06:05:32.202941 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:33.038445 kubelet[2714]: E0905 06:05:33.038352 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pc5f5" podUID="d38e615f-d43f-4af2-a8f3-d11048e4a95a"
Sep 5 06:05:33.126248 kubelet[2714]: I0905 06:05:33.126194 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 06:05:33.126754 kubelet[2714]: E0905 06:05:33.126590 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:05:33.185898 kubelet[2714]: E0905 06:05:33.185857 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:33.185898 kubelet[2714]: W0905 06:05:33.185884 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:33.186100 kubelet[2714]: E0905 06:05:33.185913 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:33.186130 kubelet[2714]: E0905 06:05:33.186121 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:33.186158 kubelet[2714]: W0905 06:05:33.186132 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:33.186158 kubelet[2714]: E0905 06:05:33.186143 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:33.186344 kubelet[2714]: E0905 06:05:33.186328 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:33.186344 kubelet[2714]: W0905 06:05:33.186340 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:33.186400 kubelet[2714]: E0905 06:05:33.186350 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:33.186589 kubelet[2714]: E0905 06:05:33.186567 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:33.186589 kubelet[2714]: W0905 06:05:33.186582 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:33.186638 kubelet[2714]: E0905 06:05:33.186592 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:33.186801 kubelet[2714]: E0905 06:05:33.186788 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:33.186801 kubelet[2714]: W0905 06:05:33.186800 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:33.186858 kubelet[2714]: E0905 06:05:33.186810 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:33.187025 kubelet[2714]: E0905 06:05:33.187004 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:33.187025 kubelet[2714]: W0905 06:05:33.187017 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:33.187078 kubelet[2714]: E0905 06:05:33.187027 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:33.187217 kubelet[2714]: E0905 06:05:33.187204 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:33.187217 kubelet[2714]: W0905 06:05:33.187215 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:33.187269 kubelet[2714]: E0905 06:05:33.187224 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:33.187406 kubelet[2714]: E0905 06:05:33.187393 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:33.187439 kubelet[2714]: W0905 06:05:33.187428 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:33.187464 kubelet[2714]: E0905 06:05:33.187440 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:05:33.187631 kubelet[2714]: E0905 06:05:33.187618 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:05:33.187631 kubelet[2714]: W0905 06:05:33.187629 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:05:33.187680 kubelet[2714]: E0905 06:05:33.187638 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 5 06:05:33.187819 kubelet[2714]: E0905 06:05:33.187807 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.187819 kubelet[2714]: W0905 06:05:33.187817 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.187871 kubelet[2714]: E0905 06:05:33.187826 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:33.187993 kubelet[2714]: E0905 06:05:33.187979 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.187993 kubelet[2714]: W0905 06:05:33.187990 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.188049 kubelet[2714]: E0905 06:05:33.187999 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:33.188174 kubelet[2714]: E0905 06:05:33.188160 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.188174 kubelet[2714]: W0905 06:05:33.188170 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.188222 kubelet[2714]: E0905 06:05:33.188178 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:33.188347 kubelet[2714]: E0905 06:05:33.188329 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.188347 kubelet[2714]: W0905 06:05:33.188337 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.188347 kubelet[2714]: E0905 06:05:33.188345 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:33.188504 kubelet[2714]: E0905 06:05:33.188491 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.188504 kubelet[2714]: W0905 06:05:33.188500 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.188546 kubelet[2714]: E0905 06:05:33.188508 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:33.188647 kubelet[2714]: E0905 06:05:33.188636 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.188647 kubelet[2714]: W0905 06:05:33.188644 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.188699 kubelet[2714]: E0905 06:05:33.188652 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:33.200120 kubelet[2714]: E0905 06:05:33.200075 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.200120 kubelet[2714]: W0905 06:05:33.200095 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.200120 kubelet[2714]: E0905 06:05:33.200114 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:33.200359 kubelet[2714]: E0905 06:05:33.200332 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.200359 kubelet[2714]: W0905 06:05:33.200347 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.200359 kubelet[2714]: E0905 06:05:33.200359 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:33.200594 kubelet[2714]: E0905 06:05:33.200577 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.200594 kubelet[2714]: W0905 06:05:33.200589 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.200647 kubelet[2714]: E0905 06:05:33.200599 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:33.200946 kubelet[2714]: E0905 06:05:33.200920 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.200946 kubelet[2714]: W0905 06:05:33.200939 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.201046 kubelet[2714]: E0905 06:05:33.200950 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:33.201187 kubelet[2714]: E0905 06:05:33.201166 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.201187 kubelet[2714]: W0905 06:05:33.201179 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.201256 kubelet[2714]: E0905 06:05:33.201188 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:33.201361 kubelet[2714]: E0905 06:05:33.201343 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.201361 kubelet[2714]: W0905 06:05:33.201358 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.201438 kubelet[2714]: E0905 06:05:33.201370 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:33.201600 kubelet[2714]: E0905 06:05:33.201586 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.201600 kubelet[2714]: W0905 06:05:33.201596 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.201665 kubelet[2714]: E0905 06:05:33.201604 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:33.201919 kubelet[2714]: E0905 06:05:33.201897 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.201986 kubelet[2714]: W0905 06:05:33.201921 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.201986 kubelet[2714]: E0905 06:05:33.201935 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:33.202165 kubelet[2714]: E0905 06:05:33.202145 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.202165 kubelet[2714]: W0905 06:05:33.202157 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.202234 kubelet[2714]: E0905 06:05:33.202167 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:33.202365 kubelet[2714]: E0905 06:05:33.202346 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.202365 kubelet[2714]: W0905 06:05:33.202358 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.202471 kubelet[2714]: E0905 06:05:33.202368 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:33.202581 kubelet[2714]: E0905 06:05:33.202561 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.202581 kubelet[2714]: W0905 06:05:33.202573 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.202670 kubelet[2714]: E0905 06:05:33.202584 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:33.202827 kubelet[2714]: E0905 06:05:33.202808 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.202827 kubelet[2714]: W0905 06:05:33.202820 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.202913 kubelet[2714]: E0905 06:05:33.202830 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:33.203057 kubelet[2714]: E0905 06:05:33.203037 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.203057 kubelet[2714]: W0905 06:05:33.203049 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.203137 kubelet[2714]: E0905 06:05:33.203059 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:33.203348 kubelet[2714]: E0905 06:05:33.203329 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.203348 kubelet[2714]: W0905 06:05:33.203342 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.203442 kubelet[2714]: E0905 06:05:33.203352 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:33.203538 kubelet[2714]: E0905 06:05:33.203522 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.203538 kubelet[2714]: W0905 06:05:33.203532 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.203598 kubelet[2714]: E0905 06:05:33.203541 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:33.203767 kubelet[2714]: E0905 06:05:33.203725 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.203767 kubelet[2714]: W0905 06:05:33.203764 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.203853 kubelet[2714]: E0905 06:05:33.203776 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:33.204120 kubelet[2714]: E0905 06:05:33.204086 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.204120 kubelet[2714]: W0905 06:05:33.204106 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.204120 kubelet[2714]: E0905 06:05:33.204120 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:05:33.204873 kubelet[2714]: E0905 06:05:33.204858 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:05:33.204873 kubelet[2714]: W0905 06:05:33.204871 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:05:33.204938 kubelet[2714]: E0905 06:05:33.204883 2714 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:05:33.529389 containerd[1558]: time="2025-09-05T06:05:33.529311806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:33.530191 containerd[1558]: time="2025-09-05T06:05:33.530155705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 5 06:05:33.531320 containerd[1558]: time="2025-09-05T06:05:33.531274501Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:33.533428 containerd[1558]: time="2025-09-05T06:05:33.533364778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:33.534126 containerd[1558]: time="2025-09-05T06:05:33.534090464Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.575691706s" Sep 5 06:05:33.534126 containerd[1558]: time="2025-09-05T06:05:33.534122906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 5 06:05:33.540051 containerd[1558]: time="2025-09-05T06:05:33.539069440Z" level=info msg="CreateContainer within sandbox \"91d0ab803fe824af3763af69d5ac8d3fac7e6aace22820049fd5ae9c9cda99ae\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 06:05:33.551206 containerd[1558]: time="2025-09-05T06:05:33.551163015Z" level=info msg="Container 9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:33.562333 containerd[1558]: time="2025-09-05T06:05:33.562295740Z" level=info msg="CreateContainer within sandbox \"91d0ab803fe824af3763af69d5ac8d3fac7e6aace22820049fd5ae9c9cda99ae\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d\"" Sep 5 06:05:33.563141 containerd[1558]: time="2025-09-05T06:05:33.563051463Z" level=info msg="StartContainer for \"9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d\"" Sep 5 06:05:33.565030 containerd[1558]: time="2025-09-05T06:05:33.564990655Z" level=info msg="connecting to shim 9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d" address="unix:///run/containerd/s/9e47caf4afc2333c52d78c0500861d4ac96a3d004f8d01df7975244603a1438b" protocol=ttrpc version=3 Sep 5 06:05:33.593912 systemd[1]: Started cri-containerd-9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d.scope - libcontainer container 9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d. Sep 5 06:05:33.659215 containerd[1558]: time="2025-09-05T06:05:33.659094489Z" level=info msg="StartContainer for \"9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d\" returns successfully" Sep 5 06:05:33.669364 systemd[1]: cri-containerd-9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d.scope: Deactivated successfully. 
Sep 5 06:05:33.671774 containerd[1558]: time="2025-09-05T06:05:33.671712250Z" level=info msg="received exit event container_id:\"9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d\" id:\"9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d\" pid:3455 exited_at:{seconds:1757052333 nanos:671181170}" Sep 5 06:05:33.671878 containerd[1558]: time="2025-09-05T06:05:33.671853867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d\" id:\"9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d\" pid:3455 exited_at:{seconds:1757052333 nanos:671181170}" Sep 5 06:05:33.695790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b5e801090ce1c8fe1de9abc56463f3c9573d5ce43624bceba5ae34e22d4f65d-rootfs.mount: Deactivated successfully. Sep 5 06:05:35.038188 kubelet[2714]: E0905 06:05:35.038129 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pc5f5" podUID="d38e615f-d43f-4af2-a8f3-d11048e4a95a" Sep 5 06:05:35.134351 containerd[1558]: time="2025-09-05T06:05:35.134303612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 06:05:36.895194 kubelet[2714]: I0905 06:05:36.895128 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:05:36.895793 kubelet[2714]: E0905 06:05:36.895624 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:37.038988 kubelet[2714]: E0905 06:05:37.038932 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pc5f5" podUID="d38e615f-d43f-4af2-a8f3-d11048e4a95a" Sep 5 06:05:37.137403 kubelet[2714]: E0905 06:05:37.137361 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:39.038379 kubelet[2714]: E0905 06:05:39.038306 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pc5f5" podUID="d38e615f-d43f-4af2-a8f3-d11048e4a95a" Sep 5 06:05:39.606862 containerd[1558]: time="2025-09-05T06:05:39.606792575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:39.607925 containerd[1558]: time="2025-09-05T06:05:39.607883357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 5 06:05:39.609583 containerd[1558]: time="2025-09-05T06:05:39.609531306Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:39.612729 containerd[1558]: time="2025-09-05T06:05:39.612656003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:39.613506 containerd[1558]: time="2025-09-05T06:05:39.613463121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.479122168s" Sep 5 06:05:39.613555 containerd[1558]: time="2025-09-05T06:05:39.613507905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 5 06:05:39.622994 containerd[1558]: time="2025-09-05T06:05:39.622910790Z" level=info msg="CreateContainer within sandbox \"91d0ab803fe824af3763af69d5ac8d3fac7e6aace22820049fd5ae9c9cda99ae\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 06:05:39.635407 containerd[1558]: time="2025-09-05T06:05:39.635325840Z" level=info msg="Container 65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:39.653026 containerd[1558]: time="2025-09-05T06:05:39.652963513Z" level=info msg="CreateContainer within sandbox \"91d0ab803fe824af3763af69d5ac8d3fac7e6aace22820049fd5ae9c9cda99ae\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e\"" Sep 5 06:05:39.653751 containerd[1558]: time="2025-09-05T06:05:39.653681002Z" level=info msg="StartContainer for \"65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e\"" Sep 5 06:05:39.658552 containerd[1558]: time="2025-09-05T06:05:39.658434893Z" level=info msg="connecting to shim 65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e" address="unix:///run/containerd/s/9e47caf4afc2333c52d78c0500861d4ac96a3d004f8d01df7975244603a1438b" protocol=ttrpc version=3 Sep 5 06:05:39.686160 systemd[1]: Started cri-containerd-65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e.scope - libcontainer container 65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e. 
Sep 5 06:05:39.743512 containerd[1558]: time="2025-09-05T06:05:39.743404443Z" level=info msg="StartContainer for \"65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e\" returns successfully" Sep 5 06:05:41.003007 containerd[1558]: time="2025-09-05T06:05:41.002922802Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 06:05:41.006894 systemd[1]: cri-containerd-65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e.scope: Deactivated successfully. Sep 5 06:05:41.007450 systemd[1]: cri-containerd-65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e.scope: Consumed 711ms CPU time, 180.6M memory peak, 3.4M read from disk, 171.3M written to disk. Sep 5 06:05:41.007936 containerd[1558]: time="2025-09-05T06:05:41.007891665Z" level=info msg="received exit event container_id:\"65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e\" id:\"65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e\" pid:3517 exited_at:{seconds:1757052341 nanos:7648707}" Sep 5 06:05:41.008163 containerd[1558]: time="2025-09-05T06:05:41.008140953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e\" id:\"65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e\" pid:3517 exited_at:{seconds:1757052341 nanos:7648707}" Sep 5 06:05:41.011685 kubelet[2714]: I0905 06:05:41.011640 2714 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 06:05:41.047164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65430ba4ac853abceadc8bbf92d1fc1301657c471cec874fb66a30ff04a7326e-rootfs.mount: Deactivated successfully. 
Sep 5 06:05:41.060923 systemd[1]: Created slice kubepods-besteffort-podd38e615f_d43f_4af2_a8f3_d11048e4a95a.slice - libcontainer container kubepods-besteffort-podd38e615f_d43f_4af2_a8f3_d11048e4a95a.slice. Sep 5 06:05:41.068761 containerd[1558]: time="2025-09-05T06:05:41.068700806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pc5f5,Uid:d38e615f-d43f-4af2-a8f3-d11048e4a95a,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:41.463268 systemd[1]: Created slice kubepods-burstable-pod0b68b8b5_61eb_4d97_a2a5_7ddc0df13ea8.slice - libcontainer container kubepods-burstable-pod0b68b8b5_61eb_4d97_a2a5_7ddc0df13ea8.slice. Sep 5 06:05:41.480711 systemd[1]: Created slice kubepods-burstable-pod8f5933b0_2e1c_4861_957a_b0b91df9fee4.slice - libcontainer container kubepods-burstable-pod8f5933b0_2e1c_4861_957a_b0b91df9fee4.slice. Sep 5 06:05:41.493448 systemd[1]: Created slice kubepods-besteffort-podf25a560a_80d5_425b_821d_6c32b0f4557e.slice - libcontainer container kubepods-besteffort-podf25a560a_80d5_425b_821d_6c32b0f4557e.slice. Sep 5 06:05:41.503544 systemd[1]: Created slice kubepods-besteffort-pod7a42940d_4b8d_4933_ab9b_7ece64bf4c98.slice - libcontainer container kubepods-besteffort-pod7a42940d_4b8d_4933_ab9b_7ece64bf4c98.slice. Sep 5 06:05:41.513321 systemd[1]: Created slice kubepods-besteffort-podf7382ce4_5666_4691_8bf5_34a61fdb9ff7.slice - libcontainer container kubepods-besteffort-podf7382ce4_5666_4691_8bf5_34a61fdb9ff7.slice. Sep 5 06:05:41.525227 systemd[1]: Created slice kubepods-besteffort-pod7931b59e_4ee7_4b4d_819f_020df21da76b.slice - libcontainer container kubepods-besteffort-pod7931b59e_4ee7_4b4d_819f_020df21da76b.slice. Sep 5 06:05:41.538622 systemd[1]: Created slice kubepods-besteffort-pod0be15ff9_35fd_4d6e_ac6f_5bfdea7aa3d6.slice - libcontainer container kubepods-besteffort-pod0be15ff9_35fd_4d6e_ac6f_5bfdea7aa3d6.slice. 
Sep 5 06:05:41.558402 kubelet[2714]: I0905 06:05:41.558340 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7931b59e-4ee7-4b4d-819f-020df21da76b-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-8m6kc\" (UID: \"7931b59e-4ee7-4b4d-819f-020df21da76b\") " pod="calico-system/goldmane-54d579b49d-8m6kc" Sep 5 06:05:41.558402 kubelet[2714]: I0905 06:05:41.558397 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f5933b0-2e1c-4861-957a-b0b91df9fee4-config-volume\") pod \"coredns-674b8bbfcf-2v74x\" (UID: \"8f5933b0-2e1c-4861-957a-b0b91df9fee4\") " pod="kube-system/coredns-674b8bbfcf-2v74x" Sep 5 06:05:41.558402 kubelet[2714]: I0905 06:05:41.558417 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwvhw\" (UniqueName: \"kubernetes.io/projected/7a42940d-4b8d-4933-ab9b-7ece64bf4c98-kube-api-access-rwvhw\") pod \"calico-kube-controllers-64dd7d8444-2jb59\" (UID: \"7a42940d-4b8d-4933-ab9b-7ece64bf4c98\") " pod="calico-system/calico-kube-controllers-64dd7d8444-2jb59" Sep 5 06:05:41.558763 kubelet[2714]: I0905 06:05:41.558436 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzwp4\" (UniqueName: \"kubernetes.io/projected/0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8-kube-api-access-hzwp4\") pod \"coredns-674b8bbfcf-52xll\" (UID: \"0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8\") " pod="kube-system/coredns-674b8bbfcf-52xll" Sep 5 06:05:41.558763 kubelet[2714]: I0905 06:05:41.558453 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7931b59e-4ee7-4b4d-819f-020df21da76b-goldmane-key-pair\") pod \"goldmane-54d579b49d-8m6kc\" (UID: 
\"7931b59e-4ee7-4b4d-819f-020df21da76b\") " pod="calico-system/goldmane-54d579b49d-8m6kc" Sep 5 06:05:41.558763 kubelet[2714]: I0905 06:05:41.558468 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5sk7\" (UniqueName: \"kubernetes.io/projected/7931b59e-4ee7-4b4d-819f-020df21da76b-kube-api-access-p5sk7\") pod \"goldmane-54d579b49d-8m6kc\" (UID: \"7931b59e-4ee7-4b4d-819f-020df21da76b\") " pod="calico-system/goldmane-54d579b49d-8m6kc" Sep 5 06:05:41.558763 kubelet[2714]: I0905 06:05:41.558482 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdwsj\" (UniqueName: \"kubernetes.io/projected/f7382ce4-5666-4691-8bf5-34a61fdb9ff7-kube-api-access-cdwsj\") pod \"calico-apiserver-5c5f977766-74s42\" (UID: \"f7382ce4-5666-4691-8bf5-34a61fdb9ff7\") " pod="calico-apiserver/calico-apiserver-5c5f977766-74s42" Sep 5 06:05:41.558763 kubelet[2714]: I0905 06:05:41.558511 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f25a560a-80d5-425b-821d-6c32b0f4557e-calico-apiserver-certs\") pod \"calico-apiserver-5c5f977766-gtxxn\" (UID: \"f25a560a-80d5-425b-821d-6c32b0f4557e\") " pod="calico-apiserver/calico-apiserver-5c5f977766-gtxxn" Sep 5 06:05:41.558949 kubelet[2714]: I0905 06:05:41.558528 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8dfr\" (UniqueName: \"kubernetes.io/projected/f25a560a-80d5-425b-821d-6c32b0f4557e-kube-api-access-r8dfr\") pod \"calico-apiserver-5c5f977766-gtxxn\" (UID: \"f25a560a-80d5-425b-821d-6c32b0f4557e\") " pod="calico-apiserver/calico-apiserver-5c5f977766-gtxxn" Sep 5 06:05:41.558949 kubelet[2714]: I0905 06:05:41.558546 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/f7382ce4-5666-4691-8bf5-34a61fdb9ff7-calico-apiserver-certs\") pod \"calico-apiserver-5c5f977766-74s42\" (UID: \"f7382ce4-5666-4691-8bf5-34a61fdb9ff7\") " pod="calico-apiserver/calico-apiserver-5c5f977766-74s42" Sep 5 06:05:41.558949 kubelet[2714]: I0905 06:05:41.558561 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6-whisker-backend-key-pair\") pod \"whisker-6dbd8bdcfb-9tsrl\" (UID: \"0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6\") " pod="calico-system/whisker-6dbd8bdcfb-9tsrl" Sep 5 06:05:41.558949 kubelet[2714]: I0905 06:05:41.558578 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6-whisker-ca-bundle\") pod \"whisker-6dbd8bdcfb-9tsrl\" (UID: \"0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6\") " pod="calico-system/whisker-6dbd8bdcfb-9tsrl" Sep 5 06:05:41.558949 kubelet[2714]: I0905 06:05:41.558591 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzdf7\" (UniqueName: \"kubernetes.io/projected/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6-kube-api-access-tzdf7\") pod \"whisker-6dbd8bdcfb-9tsrl\" (UID: \"0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6\") " pod="calico-system/whisker-6dbd8bdcfb-9tsrl" Sep 5 06:05:41.559269 kubelet[2714]: I0905 06:05:41.558605 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wk9c\" (UniqueName: \"kubernetes.io/projected/8f5933b0-2e1c-4861-957a-b0b91df9fee4-kube-api-access-7wk9c\") pod \"coredns-674b8bbfcf-2v74x\" (UID: \"8f5933b0-2e1c-4861-957a-b0b91df9fee4\") " pod="kube-system/coredns-674b8bbfcf-2v74x" Sep 5 06:05:41.559269 kubelet[2714]: I0905 06:05:41.558628 2714 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8-config-volume\") pod \"coredns-674b8bbfcf-52xll\" (UID: \"0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8\") " pod="kube-system/coredns-674b8bbfcf-52xll" Sep 5 06:05:41.559269 kubelet[2714]: I0905 06:05:41.558646 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a42940d-4b8d-4933-ab9b-7ece64bf4c98-tigera-ca-bundle\") pod \"calico-kube-controllers-64dd7d8444-2jb59\" (UID: \"7a42940d-4b8d-4933-ab9b-7ece64bf4c98\") " pod="calico-system/calico-kube-controllers-64dd7d8444-2jb59" Sep 5 06:05:41.559269 kubelet[2714]: I0905 06:05:41.558672 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7931b59e-4ee7-4b4d-819f-020df21da76b-config\") pod \"goldmane-54d579b49d-8m6kc\" (UID: \"7931b59e-4ee7-4b4d-819f-020df21da76b\") " pod="calico-system/goldmane-54d579b49d-8m6kc" Sep 5 06:05:41.580644 containerd[1558]: time="2025-09-05T06:05:41.580539499Z" level=error msg="Failed to destroy network for sandbox \"dd4ab5c6cd8824727bdcdd33dd963e173308b9d5257073a03cf1c9e98698b12c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.582694 containerd[1558]: time="2025-09-05T06:05:41.582623657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pc5f5,Uid:d38e615f-d43f-4af2-a8f3-d11048e4a95a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4ab5c6cd8824727bdcdd33dd963e173308b9d5257073a03cf1c9e98698b12c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.583101 kubelet[2714]: E0905 06:05:41.583036 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4ab5c6cd8824727bdcdd33dd963e173308b9d5257073a03cf1c9e98698b12c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.583200 kubelet[2714]: E0905 06:05:41.583165 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4ab5c6cd8824727bdcdd33dd963e173308b9d5257073a03cf1c9e98698b12c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pc5f5" Sep 5 06:05:41.583302 kubelet[2714]: E0905 06:05:41.583209 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4ab5c6cd8824727bdcdd33dd963e173308b9d5257073a03cf1c9e98698b12c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pc5f5" Sep 5 06:05:41.583366 kubelet[2714]: E0905 06:05:41.583289 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pc5f5_calico-system(d38e615f-d43f-4af2-a8f3-d11048e4a95a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pc5f5_calico-system(d38e615f-d43f-4af2-a8f3-d11048e4a95a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"dd4ab5c6cd8824727bdcdd33dd963e173308b9d5257073a03cf1c9e98698b12c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pc5f5" podUID="d38e615f-d43f-4af2-a8f3-d11048e4a95a" Sep 5 06:05:41.583680 systemd[1]: run-netns-cni\x2de641441d\x2dd4f6\x2d16cd\x2dd490\x2df24f61fe9275.mount: Deactivated successfully. Sep 5 06:05:41.771463 kubelet[2714]: E0905 06:05:41.771374 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:41.772324 containerd[1558]: time="2025-09-05T06:05:41.772212345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-52xll,Uid:0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8,Namespace:kube-system,Attempt:0,}" Sep 5 06:05:41.786483 kubelet[2714]: E0905 06:05:41.786441 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:41.787057 containerd[1558]: time="2025-09-05T06:05:41.787011601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2v74x,Uid:8f5933b0-2e1c-4861-957a-b0b91df9fee4,Namespace:kube-system,Attempt:0,}" Sep 5 06:05:41.799711 containerd[1558]: time="2025-09-05T06:05:41.799656675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f977766-gtxxn,Uid:f25a560a-80d5-425b-821d-6c32b0f4557e,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:05:41.810900 containerd[1558]: time="2025-09-05T06:05:41.810853948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dd7d8444-2jb59,Uid:7a42940d-4b8d-4933-ab9b-7ece64bf4c98,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:41.820681 containerd[1558]: 
time="2025-09-05T06:05:41.820448046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f977766-74s42,Uid:f7382ce4-5666-4691-8bf5-34a61fdb9ff7,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:05:41.831625 containerd[1558]: time="2025-09-05T06:05:41.831577021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-8m6kc,Uid:7931b59e-4ee7-4b4d-819f-020df21da76b,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:41.843872 containerd[1558]: time="2025-09-05T06:05:41.843820871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dbd8bdcfb-9tsrl,Uid:0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:41.863527 containerd[1558]: time="2025-09-05T06:05:41.863465727Z" level=error msg="Failed to destroy network for sandbox \"2d8f407d783fa2d91c0e31502da149b8b07447d2b1b738cf41b2168e71d16101\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.870145 containerd[1558]: time="2025-09-05T06:05:41.870060486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-52xll,Uid:0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8f407d783fa2d91c0e31502da149b8b07447d2b1b738cf41b2168e71d16101\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.870351 kubelet[2714]: E0905 06:05:41.870313 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8f407d783fa2d91c0e31502da149b8b07447d2b1b738cf41b2168e71d16101\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.870442 kubelet[2714]: E0905 06:05:41.870385 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8f407d783fa2d91c0e31502da149b8b07447d2b1b738cf41b2168e71d16101\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-52xll" Sep 5 06:05:41.870442 kubelet[2714]: E0905 06:05:41.870433 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8f407d783fa2d91c0e31502da149b8b07447d2b1b738cf41b2168e71d16101\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-52xll" Sep 5 06:05:41.870512 kubelet[2714]: E0905 06:05:41.870482 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-52xll_kube-system(0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-52xll_kube-system(0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d8f407d783fa2d91c0e31502da149b8b07447d2b1b738cf41b2168e71d16101\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-52xll" podUID="0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8" Sep 5 06:05:41.897070 containerd[1558]: time="2025-09-05T06:05:41.896921680Z" level=error msg="Failed to destroy network 
for sandbox \"b97bb16e2b58ea9beeb98616cd3e85b1bc555b60abcb66df0d243656411b86da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.899393 containerd[1558]: time="2025-09-05T06:05:41.899342591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2v74x,Uid:8f5933b0-2e1c-4861-957a-b0b91df9fee4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b97bb16e2b58ea9beeb98616cd3e85b1bc555b60abcb66df0d243656411b86da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.900003 kubelet[2714]: E0905 06:05:41.899887 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b97bb16e2b58ea9beeb98616cd3e85b1bc555b60abcb66df0d243656411b86da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.900003 kubelet[2714]: E0905 06:05:41.899960 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b97bb16e2b58ea9beeb98616cd3e85b1bc555b60abcb66df0d243656411b86da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2v74x" Sep 5 06:05:41.900003 kubelet[2714]: E0905 06:05:41.899986 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b97bb16e2b58ea9beeb98616cd3e85b1bc555b60abcb66df0d243656411b86da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2v74x" Sep 5 06:05:41.900144 kubelet[2714]: E0905 06:05:41.900038 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2v74x_kube-system(8f5933b0-2e1c-4861-957a-b0b91df9fee4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2v74x_kube-system(8f5933b0-2e1c-4861-957a-b0b91df9fee4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b97bb16e2b58ea9beeb98616cd3e85b1bc555b60abcb66df0d243656411b86da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2v74x" podUID="8f5933b0-2e1c-4861-957a-b0b91df9fee4" Sep 5 06:05:41.909131 containerd[1558]: time="2025-09-05T06:05:41.908976264Z" level=error msg="Failed to destroy network for sandbox \"8113728e3c8744471eaa2670eefb4f7e7a78a600c0eec1b70a97154128602f53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.912912 containerd[1558]: time="2025-09-05T06:05:41.912814461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f977766-gtxxn,Uid:f25a560a-80d5-425b-821d-6c32b0f4557e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8113728e3c8744471eaa2670eefb4f7e7a78a600c0eec1b70a97154128602f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.913508 kubelet[2714]: E0905 06:05:41.913454 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8113728e3c8744471eaa2670eefb4f7e7a78a600c0eec1b70a97154128602f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.914058 kubelet[2714]: E0905 06:05:41.913536 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8113728e3c8744471eaa2670eefb4f7e7a78a600c0eec1b70a97154128602f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5f977766-gtxxn" Sep 5 06:05:41.914058 kubelet[2714]: E0905 06:05:41.913562 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8113728e3c8744471eaa2670eefb4f7e7a78a600c0eec1b70a97154128602f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5f977766-gtxxn" Sep 5 06:05:41.914058 kubelet[2714]: E0905 06:05:41.913625 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c5f977766-gtxxn_calico-apiserver(f25a560a-80d5-425b-821d-6c32b0f4557e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c5f977766-gtxxn_calico-apiserver(f25a560a-80d5-425b-821d-6c32b0f4557e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8113728e3c8744471eaa2670eefb4f7e7a78a600c0eec1b70a97154128602f53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5f977766-gtxxn" podUID="f25a560a-80d5-425b-821d-6c32b0f4557e" Sep 5 06:05:41.927581 containerd[1558]: time="2025-09-05T06:05:41.927515061Z" level=error msg="Failed to destroy network for sandbox \"a3004622cac5d1f0cc0b8109ac81fb7e7c4dcd753aa98281b77bcc8d59bf66b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.929001 containerd[1558]: time="2025-09-05T06:05:41.928922797Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f977766-74s42,Uid:f7382ce4-5666-4691-8bf5-34a61fdb9ff7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3004622cac5d1f0cc0b8109ac81fb7e7c4dcd753aa98281b77bcc8d59bf66b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.929386 kubelet[2714]: E0905 06:05:41.929312 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3004622cac5d1f0cc0b8109ac81fb7e7c4dcd753aa98281b77bcc8d59bf66b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.929498 kubelet[2714]: E0905 06:05:41.929394 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a3004622cac5d1f0cc0b8109ac81fb7e7c4dcd753aa98281b77bcc8d59bf66b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5f977766-74s42" Sep 5 06:05:41.929498 kubelet[2714]: E0905 06:05:41.929423 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3004622cac5d1f0cc0b8109ac81fb7e7c4dcd753aa98281b77bcc8d59bf66b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5f977766-74s42" Sep 5 06:05:41.930482 kubelet[2714]: E0905 06:05:41.929534 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c5f977766-74s42_calico-apiserver(f7382ce4-5666-4691-8bf5-34a61fdb9ff7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c5f977766-74s42_calico-apiserver(f7382ce4-5666-4691-8bf5-34a61fdb9ff7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3004622cac5d1f0cc0b8109ac81fb7e7c4dcd753aa98281b77bcc8d59bf66b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5f977766-74s42" podUID="f7382ce4-5666-4691-8bf5-34a61fdb9ff7" Sep 5 06:05:41.933709 containerd[1558]: time="2025-09-05T06:05:41.933638273Z" level=error msg="Failed to destroy network for sandbox \"89d809241646906a998955a3683f9d4ea46ff3a579deab77e9beb6b58d47c7b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 5 06:05:41.935811 containerd[1558]: time="2025-09-05T06:05:41.935658241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dd7d8444-2jb59,Uid:7a42940d-4b8d-4933-ab9b-7ece64bf4c98,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89d809241646906a998955a3683f9d4ea46ff3a579deab77e9beb6b58d47c7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.935991 kubelet[2714]: E0905 06:05:41.935921 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89d809241646906a998955a3683f9d4ea46ff3a579deab77e9beb6b58d47c7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.936046 kubelet[2714]: E0905 06:05:41.935992 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89d809241646906a998955a3683f9d4ea46ff3a579deab77e9beb6b58d47c7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dd7d8444-2jb59" Sep 5 06:05:41.936046 kubelet[2714]: E0905 06:05:41.936019 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89d809241646906a998955a3683f9d4ea46ff3a579deab77e9beb6b58d47c7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64dd7d8444-2jb59" Sep 5 06:05:41.936216 kubelet[2714]: E0905 06:05:41.936083 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64dd7d8444-2jb59_calico-system(7a42940d-4b8d-4933-ab9b-7ece64bf4c98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64dd7d8444-2jb59_calico-system(7a42940d-4b8d-4933-ab9b-7ece64bf4c98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89d809241646906a998955a3683f9d4ea46ff3a579deab77e9beb6b58d47c7b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64dd7d8444-2jb59" podUID="7a42940d-4b8d-4933-ab9b-7ece64bf4c98" Sep 5 06:05:41.943705 containerd[1558]: time="2025-09-05T06:05:41.943642954Z" level=error msg="Failed to destroy network for sandbox \"83d34174a9a04da26f2f39ecd8a566f1dad17b652031af68bc99682e423fba63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.945481 containerd[1558]: time="2025-09-05T06:05:41.945432698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-8m6kc,Uid:7931b59e-4ee7-4b4d-819f-020df21da76b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d34174a9a04da26f2f39ecd8a566f1dad17b652031af68bc99682e423fba63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.946240 kubelet[2714]: E0905 06:05:41.945855 2714 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d34174a9a04da26f2f39ecd8a566f1dad17b652031af68bc99682e423fba63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.946240 kubelet[2714]: E0905 06:05:41.945914 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d34174a9a04da26f2f39ecd8a566f1dad17b652031af68bc99682e423fba63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-8m6kc" Sep 5 06:05:41.946240 kubelet[2714]: E0905 06:05:41.945935 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d34174a9a04da26f2f39ecd8a566f1dad17b652031af68bc99682e423fba63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-8m6kc" Sep 5 06:05:41.946403 kubelet[2714]: E0905 06:05:41.945987 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-8m6kc_calico-system(7931b59e-4ee7-4b4d-819f-020df21da76b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-8m6kc_calico-system(7931b59e-4ee7-4b4d-819f-020df21da76b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83d34174a9a04da26f2f39ecd8a566f1dad17b652031af68bc99682e423fba63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-8m6kc" podUID="7931b59e-4ee7-4b4d-819f-020df21da76b" Sep 5 06:05:41.949373 containerd[1558]: time="2025-09-05T06:05:41.949336078Z" level=error msg="Failed to destroy network for sandbox \"93f806507c91959335e408930a57a2c2467305f3377e5e2fe32af81cc6c37c29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.950853 containerd[1558]: time="2025-09-05T06:05:41.950814999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dbd8bdcfb-9tsrl,Uid:0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93f806507c91959335e408930a57a2c2467305f3377e5e2fe32af81cc6c37c29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.951077 kubelet[2714]: E0905 06:05:41.951038 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93f806507c91959335e408930a57a2c2467305f3377e5e2fe32af81cc6c37c29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:41.951143 kubelet[2714]: E0905 06:05:41.951097 2714 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93f806507c91959335e408930a57a2c2467305f3377e5e2fe32af81cc6c37c29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-6dbd8bdcfb-9tsrl" Sep 5 06:05:41.951143 kubelet[2714]: E0905 06:05:41.951126 2714 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93f806507c91959335e408930a57a2c2467305f3377e5e2fe32af81cc6c37c29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dbd8bdcfb-9tsrl" Sep 5 06:05:41.951201 kubelet[2714]: E0905 06:05:41.951177 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dbd8bdcfb-9tsrl_calico-system(0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dbd8bdcfb-9tsrl_calico-system(0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93f806507c91959335e408930a57a2c2467305f3377e5e2fe32af81cc6c37c29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dbd8bdcfb-9tsrl" podUID="0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6" Sep 5 06:05:42.152493 containerd[1558]: time="2025-09-05T06:05:42.152102500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 06:05:49.383368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020327931.mount: Deactivated successfully. 
Sep 5 06:05:51.725903 containerd[1558]: time="2025-09-05T06:05:51.725801085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:51.727486 containerd[1558]: time="2025-09-05T06:05:51.727443980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 5 06:05:51.729473 containerd[1558]: time="2025-09-05T06:05:51.729399172Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:51.757120 containerd[1558]: time="2025-09-05T06:05:51.757050273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:51.757803 containerd[1558]: time="2025-09-05T06:05:51.757545964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.605315503s" Sep 5 06:05:51.757803 containerd[1558]: time="2025-09-05T06:05:51.757599755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 5 06:05:51.773015 containerd[1558]: time="2025-09-05T06:05:51.772952147Z" level=info msg="CreateContainer within sandbox \"91d0ab803fe824af3763af69d5ac8d3fac7e6aace22820049fd5ae9c9cda99ae\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 06:05:51.786546 containerd[1558]: time="2025-09-05T06:05:51.786499860Z" level=info msg="Container 
fa7fa0aa4e2ad18ad3ea1804cb630c31fee2d263cc23a8a6f23d0ff395149bdf: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:51.787902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383537692.mount: Deactivated successfully. Sep 5 06:05:51.799762 containerd[1558]: time="2025-09-05T06:05:51.798571301Z" level=info msg="CreateContainer within sandbox \"91d0ab803fe824af3763af69d5ac8d3fac7e6aace22820049fd5ae9c9cda99ae\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fa7fa0aa4e2ad18ad3ea1804cb630c31fee2d263cc23a8a6f23d0ff395149bdf\"" Sep 5 06:05:51.801097 containerd[1558]: time="2025-09-05T06:05:51.801058051Z" level=info msg="StartContainer for \"fa7fa0aa4e2ad18ad3ea1804cb630c31fee2d263cc23a8a6f23d0ff395149bdf\"" Sep 5 06:05:51.803663 containerd[1558]: time="2025-09-05T06:05:51.803628429Z" level=info msg="connecting to shim fa7fa0aa4e2ad18ad3ea1804cb630c31fee2d263cc23a8a6f23d0ff395149bdf" address="unix:///run/containerd/s/9e47caf4afc2333c52d78c0500861d4ac96a3d004f8d01df7975244603a1438b" protocol=ttrpc version=3 Sep 5 06:05:51.835966 systemd[1]: Started cri-containerd-fa7fa0aa4e2ad18ad3ea1804cb630c31fee2d263cc23a8a6f23d0ff395149bdf.scope - libcontainer container fa7fa0aa4e2ad18ad3ea1804cb630c31fee2d263cc23a8a6f23d0ff395149bdf. Sep 5 06:05:52.313454 containerd[1558]: time="2025-09-05T06:05:52.313407524Z" level=info msg="StartContainer for \"fa7fa0aa4e2ad18ad3ea1804cb630c31fee2d263cc23a8a6f23d0ff395149bdf\" returns successfully" Sep 5 06:05:52.360388 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 06:05:52.361150 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 5 06:05:52.492565 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:60866.service - OpenSSH per-connection server daemon (10.0.0.1:60866). 
Sep 5 06:05:52.581974 sshd[3868]: Accepted publickey for core from 10.0.0.1 port 60866 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g Sep 5 06:05:52.584105 sshd-session[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:52.594795 systemd-logind[1540]: New session 8 of user core. Sep 5 06:05:52.600931 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 06:05:52.737124 kubelet[2714]: I0905 06:05:52.737070 2714 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6-whisker-backend-key-pair\") pod \"0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6\" (UID: \"0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6\") " Sep 5 06:05:52.737694 kubelet[2714]: I0905 06:05:52.737278 2714 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzdf7\" (UniqueName: \"kubernetes.io/projected/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6-kube-api-access-tzdf7\") pod \"0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6\" (UID: \"0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6\") " Sep 5 06:05:52.737694 kubelet[2714]: I0905 06:05:52.737312 2714 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6-whisker-ca-bundle\") pod \"0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6\" (UID: \"0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6\") " Sep 5 06:05:52.769199 kubelet[2714]: I0905 06:05:52.768633 2714 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6" (UID: "0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 06:05:52.772406 kubelet[2714]: I0905 06:05:52.772322 2714 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6" (UID: "0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 06:05:52.773889 kubelet[2714]: I0905 06:05:52.773836 2714 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6-kube-api-access-tzdf7" (OuterVolumeSpecName: "kube-api-access-tzdf7") pod "0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6" (UID: "0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6"). InnerVolumeSpecName "kube-api-access-tzdf7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 06:05:52.774104 systemd[1]: var-lib-kubelet-pods-0be15ff9\x2d35fd\x2d4d6e\x2dac6f\x2d5bfdea7aa3d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtzdf7.mount: Deactivated successfully. Sep 5 06:05:52.774289 systemd[1]: var-lib-kubelet-pods-0be15ff9\x2d35fd\x2d4d6e\x2dac6f\x2d5bfdea7aa3d6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 5 06:05:52.838622 kubelet[2714]: I0905 06:05:52.838431 2714 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 06:05:52.838622 kubelet[2714]: I0905 06:05:52.838485 2714 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tzdf7\" (UniqueName: \"kubernetes.io/projected/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6-kube-api-access-tzdf7\") on node \"localhost\" DevicePath \"\"" Sep 5 06:05:52.838622 kubelet[2714]: I0905 06:05:52.838529 2714 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 06:05:53.039565 containerd[1558]: time="2025-09-05T06:05:53.039497295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f977766-gtxxn,Uid:f25a560a-80d5-425b-821d-6c32b0f4557e,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:05:53.040128 containerd[1558]: time="2025-09-05T06:05:53.039806585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pc5f5,Uid:d38e615f-d43f-4af2-a8f3-d11048e4a95a,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:53.682990 kubelet[2714]: I0905 06:05:53.682497 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jg9hw" podStartSLOduration=3.274369819 podStartE2EDuration="27.682477191s" podCreationTimestamp="2025-09-05 06:05:26 +0000 UTC" firstStartedPulling="2025-09-05 06:05:27.350343391 +0000 UTC m=+19.815636097" lastFinishedPulling="2025-09-05 06:05:51.758450753 +0000 UTC m=+44.223743469" observedRunningTime="2025-09-05 06:05:53.682308915 +0000 UTC m=+46.147601651" watchObservedRunningTime="2025-09-05 06:05:53.682477191 +0000 UTC m=+46.147769907" Sep 5 06:05:53.844860 systemd[1]: Removed slice 
kubepods-besteffort-pod0be15ff9_35fd_4d6e_ac6f_5bfdea7aa3d6.slice - libcontainer container kubepods-besteffort-pod0be15ff9_35fd_4d6e_ac6f_5bfdea7aa3d6.slice. Sep 5 06:05:53.861163 sshd[3871]: Connection closed by 10.0.0.1 port 60866 Sep 5 06:05:53.861483 sshd-session[3868]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:53.866984 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:60866.service: Deactivated successfully. Sep 5 06:05:53.869479 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 06:05:53.870477 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit. Sep 5 06:05:53.872903 systemd-logind[1540]: Removed session 8. Sep 5 06:05:54.038787 kubelet[2714]: E0905 06:05:54.038696 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:54.040280 containerd[1558]: time="2025-09-05T06:05:54.040229491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-52xll,Uid:0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8,Namespace:kube-system,Attempt:0,}" Sep 5 06:05:54.040550 containerd[1558]: time="2025-09-05T06:05:54.040483018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-8m6kc,Uid:7931b59e-4ee7-4b4d-819f-020df21da76b,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:54.313951 systemd[1]: Created slice kubepods-besteffort-pod94f834ed_2d8c_4f19_914d_17b860d4382a.slice - libcontainer container kubepods-besteffort-pod94f834ed_2d8c_4f19_914d_17b860d4382a.slice. 
Sep 5 06:05:54.350631 kubelet[2714]: I0905 06:05:54.350453 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94f834ed-2d8c-4f19-914d-17b860d4382a-whisker-backend-key-pair\") pod \"whisker-f74b5dc-d97bw\" (UID: \"94f834ed-2d8c-4f19-914d-17b860d4382a\") " pod="calico-system/whisker-f74b5dc-d97bw" Sep 5 06:05:54.350631 kubelet[2714]: I0905 06:05:54.350527 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426vb\" (UniqueName: \"kubernetes.io/projected/94f834ed-2d8c-4f19-914d-17b860d4382a-kube-api-access-426vb\") pod \"whisker-f74b5dc-d97bw\" (UID: \"94f834ed-2d8c-4f19-914d-17b860d4382a\") " pod="calico-system/whisker-f74b5dc-d97bw" Sep 5 06:05:54.350631 kubelet[2714]: I0905 06:05:54.350643 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94f834ed-2d8c-4f19-914d-17b860d4382a-whisker-ca-bundle\") pod \"whisker-f74b5dc-d97bw\" (UID: \"94f834ed-2d8c-4f19-914d-17b860d4382a\") " pod="calico-system/whisker-f74b5dc-d97bw" Sep 5 06:05:54.624660 containerd[1558]: time="2025-09-05T06:05:54.624172325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f74b5dc-d97bw,Uid:94f834ed-2d8c-4f19-914d-17b860d4382a,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:54.734900 systemd-networkd[1467]: calibbafa7803c9: Link UP Sep 5 06:05:54.735153 systemd-networkd[1467]: calibbafa7803c9: Gained carrier Sep 5 06:05:54.759426 containerd[1558]: 2025-09-05 06:05:54.250 [INFO][3930] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 06:05:54.759426 containerd[1558]: 2025-09-05 06:05:54.298 [INFO][3930] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--52xll-eth0 coredns-674b8bbfcf- 
kube-system 0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8 896 0 2025-09-05 06:05:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-52xll eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibbafa7803c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-52xll" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--52xll-" Sep 5 06:05:54.759426 containerd[1558]: 2025-09-05 06:05:54.303 [INFO][3930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-52xll" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--52xll-eth0" Sep 5 06:05:54.759426 containerd[1558]: 2025-09-05 06:05:54.638 [INFO][3967] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" HandleID="k8s-pod-network.be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" Workload="localhost-k8s-coredns--674b8bbfcf--52xll-eth0" Sep 5 06:05:54.759788 containerd[1558]: 2025-09-05 06:05:54.638 [INFO][3967] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" HandleID="k8s-pod-network.be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" Workload="localhost-k8s-coredns--674b8bbfcf--52xll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e8d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-52xll", "timestamp":"2025-09-05 06:05:54.638384089 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:54.759788 containerd[1558]: 2025-09-05 06:05:54.638 [INFO][3967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:05:54.759788 containerd[1558]: 2025-09-05 06:05:54.638 [INFO][3967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 06:05:54.759788 containerd[1558]: 2025-09-05 06:05:54.640 [INFO][3967] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:54.759788 containerd[1558]: 2025-09-05 06:05:54.661 [INFO][3967] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" host="localhost" Sep 5 06:05:54.759788 containerd[1558]: 2025-09-05 06:05:54.681 [INFO][3967] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:54.759788 containerd[1558]: 2025-09-05 06:05:54.691 [INFO][3967] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:54.759788 containerd[1558]: 2025-09-05 06:05:54.693 [INFO][3967] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:54.759788 containerd[1558]: 2025-09-05 06:05:54.696 [INFO][3967] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:54.759788 containerd[1558]: 2025-09-05 06:05:54.696 [INFO][3967] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" host="localhost" Sep 5 06:05:54.760021 containerd[1558]: 2025-09-05 06:05:54.700 [INFO][3967] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c Sep 5 06:05:54.760021 containerd[1558]: 
2025-09-05 06:05:54.706 [INFO][3967] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" host="localhost" Sep 5 06:05:54.760021 containerd[1558]: 2025-09-05 06:05:54.714 [INFO][3967] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" host="localhost" Sep 5 06:05:54.760021 containerd[1558]: 2025-09-05 06:05:54.714 [INFO][3967] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" host="localhost" Sep 5 06:05:54.760021 containerd[1558]: 2025-09-05 06:05:54.714 [INFO][3967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:05:54.760021 containerd[1558]: 2025-09-05 06:05:54.714 [INFO][3967] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" HandleID="k8s-pod-network.be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" Workload="localhost-k8s-coredns--674b8bbfcf--52xll-eth0" Sep 5 06:05:54.760172 containerd[1558]: 2025-09-05 06:05:54.721 [INFO][3930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-52xll" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--52xll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--52xll-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 
5, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-52xll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbafa7803c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:54.760269 containerd[1558]: 2025-09-05 06:05:54.722 [INFO][3930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-52xll" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--52xll-eth0" Sep 5 06:05:54.760269 containerd[1558]: 2025-09-05 06:05:54.722 [INFO][3930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbafa7803c9 ContainerID="be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-52xll" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--52xll-eth0" Sep 5 06:05:54.760269 containerd[1558]: 2025-09-05 06:05:54.734 [INFO][3930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-52xll" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--52xll-eth0" Sep 5 06:05:54.760352 containerd[1558]: 2025-09-05 06:05:54.734 [INFO][3930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-52xll" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--52xll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--52xll-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c", Pod:"coredns-674b8bbfcf-52xll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calibbafa7803c9", MAC:"2e:7e:d1:a1:7d:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:54.760352 containerd[1558]: 2025-09-05 06:05:54.753 [INFO][3930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" Namespace="kube-system" Pod="coredns-674b8bbfcf-52xll" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--52xll-eth0" Sep 5 06:05:54.855333 systemd-networkd[1467]: calif7ff9f1e6df: Link UP Sep 5 06:05:54.856169 systemd-networkd[1467]: calif7ff9f1e6df: Gained carrier Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.122 [INFO][3912] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.285 [INFO][3912] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0 calico-apiserver-5c5f977766- calico-apiserver f25a560a-80d5-425b-821d-6c32b0f4557e 901 0 2025-09-05 06:05:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c5f977766 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c5f977766-gtxxn eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calif7ff9f1e6df [] [] }} ContainerID="4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-gtxxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--gtxxn-" Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.285 [INFO][3912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-gtxxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0" Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.638 [INFO][3959] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" HandleID="k8s-pod-network.4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" Workload="localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0" Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.639 [INFO][3959] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" HandleID="k8s-pod-network.4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" Workload="localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011f440), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c5f977766-gtxxn", "timestamp":"2025-09-05 06:05:54.637544923 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.639 [INFO][3959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.714 [INFO][3959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.715 [INFO][3959] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.761 [INFO][3959] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" host="localhost" Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.783 [INFO][3959] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.792 [INFO][3959] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.796 [INFO][3959] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.799 [INFO][3959] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.799 [INFO][3959] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" host="localhost" Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.801 [INFO][3959] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.843 [INFO][3959] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" host="localhost" Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.850 [INFO][3959] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" host="localhost" Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.850 [INFO][3959] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" host="localhost" Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.850 [INFO][3959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:05:54.878889 containerd[1558]: 2025-09-05 06:05:54.850 [INFO][3959] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" HandleID="k8s-pod-network.4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" Workload="localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0" Sep 5 06:05:54.879727 containerd[1558]: 2025-09-05 06:05:54.853 [INFO][3912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-gtxxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0", GenerateName:"calico-apiserver-5c5f977766-", Namespace:"calico-apiserver", SelfLink:"", UID:"f25a560a-80d5-425b-821d-6c32b0f4557e", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5f977766", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c5f977766-gtxxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7ff9f1e6df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:54.879727 containerd[1558]: 2025-09-05 06:05:54.853 [INFO][3912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-gtxxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0" Sep 5 06:05:54.879727 containerd[1558]: 2025-09-05 06:05:54.853 [INFO][3912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7ff9f1e6df ContainerID="4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-gtxxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0" Sep 5 06:05:54.879727 containerd[1558]: 2025-09-05 06:05:54.855 [INFO][3912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-gtxxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0" Sep 5 06:05:54.879727 containerd[1558]: 2025-09-05 
06:05:54.858 [INFO][3912] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-gtxxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0", GenerateName:"calico-apiserver-5c5f977766-", Namespace:"calico-apiserver", SelfLink:"", UID:"f25a560a-80d5-425b-821d-6c32b0f4557e", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5f977766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d", Pod:"calico-apiserver-5c5f977766-gtxxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7ff9f1e6df", MAC:"92:64:1f:08:74:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:54.879727 containerd[1558]: 2025-09-05 06:05:54.870 [INFO][3912] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-gtxxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--gtxxn-eth0" Sep 5 06:05:54.918906 systemd-networkd[1467]: calia3c4ee197b4: Link UP Sep 5 06:05:54.919317 systemd-networkd[1467]: calia3c4ee197b4: Gained carrier Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.107 [INFO][3900] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.285 [INFO][3900] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pc5f5-eth0 csi-node-driver- calico-system d38e615f-d43f-4af2-a8f3-d11048e4a95a 773 0 2025-09-05 06:05:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pc5f5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia3c4ee197b4 [] [] }} ContainerID="2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" Namespace="calico-system" Pod="csi-node-driver-pc5f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--pc5f5-" Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.285 [INFO][3900] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" Namespace="calico-system" Pod="csi-node-driver-pc5f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--pc5f5-eth0" Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.639 [INFO][3966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" HandleID="k8s-pod-network.2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" Workload="localhost-k8s-csi--node--driver--pc5f5-eth0" Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.640 [INFO][3966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" HandleID="k8s-pod-network.2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" Workload="localhost-k8s-csi--node--driver--pc5f5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pc5f5", "timestamp":"2025-09-05 06:05:54.639889997 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.640 [INFO][3966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.850 [INFO][3966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.850 [INFO][3966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.862 [INFO][3966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" host="localhost" Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.882 [INFO][3966] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.890 [INFO][3966] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.891 [INFO][3966] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.894 [INFO][3966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.894 [INFO][3966] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" host="localhost" Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.896 [INFO][3966] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806 Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.901 [INFO][3966] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" host="localhost" Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.909 [INFO][3966] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" host="localhost" Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.909 [INFO][3966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" host="localhost" Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.909 [INFO][3966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:05:54.942334 containerd[1558]: 2025-09-05 06:05:54.909 [INFO][3966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" HandleID="k8s-pod-network.2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" Workload="localhost-k8s-csi--node--driver--pc5f5-eth0" Sep 5 06:05:54.944169 containerd[1558]: 2025-09-05 06:05:54.917 [INFO][3900] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" Namespace="calico-system" Pod="csi-node-driver-pc5f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--pc5f5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pc5f5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d38e615f-d43f-4af2-a8f3-d11048e4a95a", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pc5f5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3c4ee197b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:54.944169 containerd[1558]: 2025-09-05 06:05:54.917 [INFO][3900] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" Namespace="calico-system" Pod="csi-node-driver-pc5f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--pc5f5-eth0" Sep 5 06:05:54.944169 containerd[1558]: 2025-09-05 06:05:54.917 [INFO][3900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3c4ee197b4 ContainerID="2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" Namespace="calico-system" Pod="csi-node-driver-pc5f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--pc5f5-eth0" Sep 5 06:05:54.944169 containerd[1558]: 2025-09-05 06:05:54.919 [INFO][3900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" Namespace="calico-system" Pod="csi-node-driver-pc5f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--pc5f5-eth0" Sep 5 06:05:54.944169 containerd[1558]: 2025-09-05 06:05:54.919 [INFO][3900] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" 
Namespace="calico-system" Pod="csi-node-driver-pc5f5" WorkloadEndpoint="localhost-k8s-csi--node--driver--pc5f5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pc5f5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d38e615f-d43f-4af2-a8f3-d11048e4a95a", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806", Pod:"csi-node-driver-pc5f5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3c4ee197b4", MAC:"fa:d9:5b:6e:31:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:54.944169 containerd[1558]: 2025-09-05 06:05:54.932 [INFO][3900] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" Namespace="calico-system" Pod="csi-node-driver-pc5f5" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--pc5f5-eth0" Sep 5 06:05:55.021217 containerd[1558]: time="2025-09-05T06:05:55.021159140Z" level=info msg="connecting to shim 2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806" address="unix:///run/containerd/s/f16b108b121432264e22bb412130461f0549749335ad1cead3adf14330f5647f" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:55.022461 containerd[1558]: time="2025-09-05T06:05:55.022424857Z" level=info msg="connecting to shim be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c" address="unix:///run/containerd/s/95c67d142b0ba567d58cf42a275a1ae003ba89ca995e3a8ea1ad06baaad7b672" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:55.040794 kubelet[2714]: E0905 06:05:55.039524 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:55.044752 containerd[1558]: time="2025-09-05T06:05:55.044697453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2v74x,Uid:8f5933b0-2e1c-4861-957a-b0b91df9fee4,Namespace:kube-system,Attempt:0,}" Sep 5 06:05:55.050995 containerd[1558]: time="2025-09-05T06:05:55.048057783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f977766-74s42,Uid:f7382ce4-5666-4691-8bf5-34a61fdb9ff7,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:05:55.055162 containerd[1558]: time="2025-09-05T06:05:55.055115854Z" level=info msg="connecting to shim 4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d" address="unix:///run/containerd/s/bd9c26b971411b98f316d340fa66d52dbb6e162fd05aadb1421db536f861e0a4" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:55.096057 systemd[1]: Started cri-containerd-2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806.scope - libcontainer container 2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806. 
Sep 5 06:05:55.103153 systemd[1]: Started cri-containerd-4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d.scope - libcontainer container 4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d. Sep 5 06:05:55.106208 systemd[1]: Started cri-containerd-be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c.scope - libcontainer container be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c. Sep 5 06:05:55.112113 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:55.130264 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:55.136806 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:55.232546 systemd-networkd[1467]: cali615c220fc28: Link UP Sep 5 06:05:55.233372 systemd-networkd[1467]: cali615c220fc28: Gained carrier Sep 5 06:05:55.244748 containerd[1558]: time="2025-09-05T06:05:55.242303302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pc5f5,Uid:d38e615f-d43f-4af2-a8f3-d11048e4a95a,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806\"" Sep 5 06:05:55.249289 containerd[1558]: time="2025-09-05T06:05:55.249238904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.248 [INFO][3940] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.294 [INFO][3940] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--8m6kc-eth0 goldmane-54d579b49d- calico-system 7931b59e-4ee7-4b4d-819f-020df21da76b 900 0 2025-09-05 06:05:26 +0000 UTC map[app.kubernetes.io/name:goldmane 
k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-8m6kc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali615c220fc28 [] [] }} ContainerID="6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" Namespace="calico-system" Pod="goldmane-54d579b49d-8m6kc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8m6kc-" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.294 [INFO][3940] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" Namespace="calico-system" Pod="goldmane-54d579b49d-8m6kc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8m6kc-eth0" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.637 [INFO][3963] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" HandleID="k8s-pod-network.6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" Workload="localhost-k8s-goldmane--54d579b49d--8m6kc-eth0" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.640 [INFO][3963] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" HandleID="k8s-pod-network.6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" Workload="localhost-k8s-goldmane--54d579b49d--8m6kc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f580), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-8m6kc", "timestamp":"2025-09-05 06:05:54.637545454 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.640 [INFO][3963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.910 [INFO][3963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.911 [INFO][3963] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.963 [INFO][3963] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" host="localhost" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.984 [INFO][3963] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.992 [INFO][3963] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.995 [INFO][3963] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.998 [INFO][3963] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:54.998 [INFO][3963] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" host="localhost" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:55.005 [INFO][3963] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:55.047 [INFO][3963] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" host="localhost" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:55.217 [INFO][3963] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" host="localhost" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:55.217 [INFO][3963] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" host="localhost" Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:55.217 [INFO][3963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:05:55.269787 containerd[1558]: 2025-09-05 06:05:55.217 [INFO][3963] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" HandleID="k8s-pod-network.6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" Workload="localhost-k8s-goldmane--54d579b49d--8m6kc-eth0" Sep 5 06:05:55.270489 containerd[1558]: 2025-09-05 06:05:55.229 [INFO][3940] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" Namespace="calico-system" Pod="goldmane-54d579b49d-8m6kc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8m6kc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--8m6kc-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"7931b59e-4ee7-4b4d-819f-020df21da76b", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 26, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-8m6kc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali615c220fc28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:55.270489 containerd[1558]: 2025-09-05 06:05:55.229 [INFO][3940] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" Namespace="calico-system" Pod="goldmane-54d579b49d-8m6kc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8m6kc-eth0" Sep 5 06:05:55.270489 containerd[1558]: 2025-09-05 06:05:55.229 [INFO][3940] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali615c220fc28 ContainerID="6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" Namespace="calico-system" Pod="goldmane-54d579b49d-8m6kc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8m6kc-eth0" Sep 5 06:05:55.270489 containerd[1558]: 2025-09-05 06:05:55.234 [INFO][3940] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" Namespace="calico-system" Pod="goldmane-54d579b49d-8m6kc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8m6kc-eth0" 
Sep 5 06:05:55.270489 containerd[1558]: 2025-09-05 06:05:55.234 [INFO][3940] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" Namespace="calico-system" Pod="goldmane-54d579b49d-8m6kc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8m6kc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--8m6kc-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"7931b59e-4ee7-4b4d-819f-020df21da76b", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea", Pod:"goldmane-54d579b49d-8m6kc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali615c220fc28", MAC:"92:62:81:92:95:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:55.270489 containerd[1558]: 2025-09-05 06:05:55.250 [INFO][3940] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" Namespace="calico-system" Pod="goldmane-54d579b49d-8m6kc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8m6kc-eth0" Sep 5 06:05:55.309129 containerd[1558]: time="2025-09-05T06:05:55.309082391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f977766-gtxxn,Uid:f25a560a-80d5-425b-821d-6c32b0f4557e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d\"" Sep 5 06:05:55.316859 systemd-networkd[1467]: cali1906c72fde7: Link UP Sep 5 06:05:55.317987 systemd-networkd[1467]: cali1906c72fde7: Gained carrier Sep 5 06:05:55.318765 containerd[1558]: time="2025-09-05T06:05:55.318649614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-52xll,Uid:0b68b8b5-61eb-4d97-a2a5-7ddc0df13ea8,Namespace:kube-system,Attempt:0,} returns sandbox id \"be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c\"" Sep 5 06:05:55.321759 kubelet[2714]: E0905 06:05:55.321675 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:55.337506 containerd[1558]: time="2025-09-05T06:05:55.337428551Z" level=info msg="CreateContainer within sandbox \"be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:54.692 [INFO][4010] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:54.709 [INFO][4010] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--f74b5dc--d97bw-eth0 whisker-f74b5dc- calico-system 94f834ed-2d8c-4f19-914d-17b860d4382a 1005 0 2025-09-05 06:05:54 +0000 UTC 
map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f74b5dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-f74b5dc-d97bw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1906c72fde7 [] [] }} ContainerID="a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" Namespace="calico-system" Pod="whisker-f74b5dc-d97bw" WorkloadEndpoint="localhost-k8s-whisker--f74b5dc--d97bw-" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:54.710 [INFO][4010] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" Namespace="calico-system" Pod="whisker-f74b5dc-d97bw" WorkloadEndpoint="localhost-k8s-whisker--f74b5dc--d97bw-eth0" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:54.746 [INFO][4023] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" HandleID="k8s-pod-network.a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" Workload="localhost-k8s-whisker--f74b5dc--d97bw-eth0" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:54.746 [INFO][4023] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" HandleID="k8s-pod-network.a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" Workload="localhost-k8s-whisker--f74b5dc--d97bw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-f74b5dc-d97bw", "timestamp":"2025-09-05 06:05:54.746341163 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:54.746 [INFO][4023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.217 [INFO][4023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.217 [INFO][4023] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.230 [INFO][4023] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" host="localhost" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.242 [INFO][4023] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.259 [INFO][4023] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.269 [INFO][4023] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.272 [INFO][4023] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.272 [INFO][4023] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" host="localhost" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.274 [INFO][4023] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982 Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.281 [INFO][4023] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" host="localhost" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.293 [INFO][4023] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" host="localhost" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.293 [INFO][4023] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" host="localhost" Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.294 [INFO][4023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:05:55.362770 containerd[1558]: 2025-09-05 06:05:55.294 [INFO][4023] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" HandleID="k8s-pod-network.a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" Workload="localhost-k8s-whisker--f74b5dc--d97bw-eth0" Sep 5 06:05:55.363759 containerd[1558]: 2025-09-05 06:05:55.313 [INFO][4010] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" Namespace="calico-system" Pod="whisker-f74b5dc-d97bw" WorkloadEndpoint="localhost-k8s-whisker--f74b5dc--d97bw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f74b5dc--d97bw-eth0", GenerateName:"whisker-f74b5dc-", Namespace:"calico-system", SelfLink:"", UID:"94f834ed-2d8c-4f19-914d-17b860d4382a", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f74b5dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-f74b5dc-d97bw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1906c72fde7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:55.363759 containerd[1558]: 2025-09-05 06:05:55.313 [INFO][4010] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" Namespace="calico-system" Pod="whisker-f74b5dc-d97bw" WorkloadEndpoint="localhost-k8s-whisker--f74b5dc--d97bw-eth0" Sep 5 06:05:55.363759 containerd[1558]: 2025-09-05 06:05:55.313 [INFO][4010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1906c72fde7 ContainerID="a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" Namespace="calico-system" Pod="whisker-f74b5dc-d97bw" WorkloadEndpoint="localhost-k8s-whisker--f74b5dc--d97bw-eth0" Sep 5 06:05:55.363759 containerd[1558]: 2025-09-05 06:05:55.318 [INFO][4010] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" Namespace="calico-system" Pod="whisker-f74b5dc-d97bw" WorkloadEndpoint="localhost-k8s-whisker--f74b5dc--d97bw-eth0" Sep 5 06:05:55.363759 containerd[1558]: 2025-09-05 06:05:55.319 [INFO][4010] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" Namespace="calico-system" Pod="whisker-f74b5dc-d97bw" WorkloadEndpoint="localhost-k8s-whisker--f74b5dc--d97bw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f74b5dc--d97bw-eth0", GenerateName:"whisker-f74b5dc-", Namespace:"calico-system", SelfLink:"", UID:"94f834ed-2d8c-4f19-914d-17b860d4382a", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f74b5dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982", Pod:"whisker-f74b5dc-d97bw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1906c72fde7", MAC:"66:1a:a2:bb:cd:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:55.363759 containerd[1558]: 2025-09-05 06:05:55.329 [INFO][4010] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" Namespace="calico-system" 
Pod="whisker-f74b5dc-d97bw" WorkloadEndpoint="localhost-k8s-whisker--f74b5dc--d97bw-eth0" Sep 5 06:05:55.376898 containerd[1558]: time="2025-09-05T06:05:55.376795974Z" level=info msg="connecting to shim 6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea" address="unix:///run/containerd/s/4fd7b8d035a3fba42141427b933aad375c31b4720555a27760ce57d06f5a9c28" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:55.400925 containerd[1558]: time="2025-09-05T06:05:55.400473359Z" level=info msg="Container 28ab4a0b1882f4b06e17c984503b4169c20ef598f055115c06267f26fc08d2aa: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:55.413192 systemd[1]: Started cri-containerd-6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea.scope - libcontainer container 6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea. Sep 5 06:05:55.452808 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:55.488793 containerd[1558]: time="2025-09-05T06:05:55.488187444Z" level=info msg="CreateContainer within sandbox \"be90796a789c60bc63cd2c12cf1f6423175403cc33c706f3ea0c466b40cfed6c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"28ab4a0b1882f4b06e17c984503b4169c20ef598f055115c06267f26fc08d2aa\"" Sep 5 06:05:55.490221 containerd[1558]: time="2025-09-05T06:05:55.490172942Z" level=info msg="StartContainer for \"28ab4a0b1882f4b06e17c984503b4169c20ef598f055115c06267f26fc08d2aa\"" Sep 5 06:05:55.492649 containerd[1558]: time="2025-09-05T06:05:55.492583588Z" level=info msg="connecting to shim 28ab4a0b1882f4b06e17c984503b4169c20ef598f055115c06267f26fc08d2aa" address="unix:///run/containerd/s/95c67d142b0ba567d58cf42a275a1ae003ba89ca995e3a8ea1ad06baaad7b672" protocol=ttrpc version=3 Sep 5 06:05:55.521364 systemd[1]: Started cri-containerd-28ab4a0b1882f4b06e17c984503b4169c20ef598f055115c06267f26fc08d2aa.scope - libcontainer container 
28ab4a0b1882f4b06e17c984503b4169c20ef598f055115c06267f26fc08d2aa. Sep 5 06:05:55.535356 containerd[1558]: time="2025-09-05T06:05:55.535184793Z" level=info msg="connecting to shim a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982" address="unix:///run/containerd/s/e1005064a33b8f70d7f9587a710709954faa5184d5fdeaa15660abedad915cb0" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:55.555053 systemd-networkd[1467]: cali79ea78563ad: Link UP Sep 5 06:05:55.558402 systemd-networkd[1467]: cali79ea78563ad: Gained carrier Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.293 [INFO][4185] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.319 [INFO][4185] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--2v74x-eth0 coredns-674b8bbfcf- kube-system 8f5933b0-2e1c-4861-957a-b0b91df9fee4 897 0 2025-09-05 06:05:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-2v74x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali79ea78563ad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v74x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2v74x-" Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.321 [INFO][4185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v74x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2v74x-eth0" Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.414 [INFO][4232] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" HandleID="k8s-pod-network.b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" Workload="localhost-k8s-coredns--674b8bbfcf--2v74x-eth0" Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.416 [INFO][4232] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" HandleID="k8s-pod-network.b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" Workload="localhost-k8s-coredns--674b8bbfcf--2v74x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-2v74x", "timestamp":"2025-09-05 06:05:55.414500937 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.417 [INFO][4232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.417 [INFO][4232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.417 [INFO][4232] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.434 [INFO][4232] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" host="localhost" Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.441 [INFO][4232] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.454 [INFO][4232] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.459 [INFO][4232] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.462 [INFO][4232] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.463 [INFO][4232] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" host="localhost" Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.465 [INFO][4232] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.476 [INFO][4232] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" host="localhost" Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.492 [INFO][4232] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" host="localhost" Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.492 [INFO][4232] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" host="localhost" Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.492 [INFO][4232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:05:55.597523 containerd[1558]: 2025-09-05 06:05:55.492 [INFO][4232] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" HandleID="k8s-pod-network.b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" Workload="localhost-k8s-coredns--674b8bbfcf--2v74x-eth0" Sep 5 06:05:55.598493 containerd[1558]: 2025-09-05 06:05:55.542 [INFO][4185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v74x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2v74x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2v74x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8f5933b0-2e1c-4861-957a-b0b91df9fee4", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-2v74x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79ea78563ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:55.598493 containerd[1558]: 2025-09-05 06:05:55.544 [INFO][4185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v74x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2v74x-eth0" Sep 5 06:05:55.598493 containerd[1558]: 2025-09-05 06:05:55.545 [INFO][4185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79ea78563ad ContainerID="b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v74x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2v74x-eth0" Sep 5 06:05:55.598493 containerd[1558]: 2025-09-05 06:05:55.563 [INFO][4185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v74x" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2v74x-eth0" Sep 5 06:05:55.598493 containerd[1558]: 2025-09-05 06:05:55.566 [INFO][4185] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v74x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2v74x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2v74x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8f5933b0-2e1c-4861-957a-b0b91df9fee4", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f", Pod:"coredns-674b8bbfcf-2v74x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79ea78563ad", MAC:"b2:11:4c:e2:98:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:55.598493 containerd[1558]: 2025-09-05 06:05:55.587 [INFO][4185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v74x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2v74x-eth0" Sep 5 06:05:55.614341 containerd[1558]: time="2025-09-05T06:05:55.614285566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-8m6kc,Uid:7931b59e-4ee7-4b4d-819f-020df21da76b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea\"" Sep 5 06:05:55.617290 systemd[1]: Started cri-containerd-a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982.scope - libcontainer container a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982. 
Sep 5 06:05:55.641504 containerd[1558]: time="2025-09-05T06:05:55.641428367Z" level=info msg="StartContainer for \"28ab4a0b1882f4b06e17c984503b4169c20ef598f055115c06267f26fc08d2aa\" returns successfully" Sep 5 06:05:55.659005 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:55.664022 systemd-networkd[1467]: cali451696b43e4: Link UP Sep 5 06:05:55.665682 systemd-networkd[1467]: cali451696b43e4: Gained carrier Sep 5 06:05:55.676658 containerd[1558]: time="2025-09-05T06:05:55.676577890Z" level=info msg="connecting to shim b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f" address="unix:///run/containerd/s/c656697880cc894c604bd99928b877dd33de21e38d736daa7ab8b66638c5c80b" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.326 [INFO][4213] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.356 [INFO][4213] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0 calico-apiserver-5c5f977766- calico-apiserver f7382ce4-5666-4691-8bf5-34a61fdb9ff7 899 0 2025-09-05 06:05:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c5f977766 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c5f977766-74s42 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali451696b43e4 [] [] }} ContainerID="48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-74s42" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--74s42-" Sep 5 06:05:55.728272 containerd[1558]: 
2025-09-05 06:05:55.356 [INFO][4213] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-74s42" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0" Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.439 [INFO][4262] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" HandleID="k8s-pod-network.48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" Workload="localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0" Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.442 [INFO][4262] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" HandleID="k8s-pod-network.48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" Workload="localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ec20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c5f977766-74s42", "timestamp":"2025-09-05 06:05:55.439724344 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.442 [INFO][4262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.494 [INFO][4262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.494 [INFO][4262] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.550 [INFO][4262] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" host="localhost" Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.568 [INFO][4262] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.603 [INFO][4262] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.609 [INFO][4262] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.612 [INFO][4262] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.612 [INFO][4262] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" host="localhost" Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.614 [INFO][4262] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.625 [INFO][4262] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" host="localhost" Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.646 [INFO][4262] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" host="localhost" Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.647 [INFO][4262] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" host="localhost" Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.647 [INFO][4262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:05:55.728272 containerd[1558]: 2025-09-05 06:05:55.647 [INFO][4262] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" HandleID="k8s-pod-network.48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" Workload="localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0" Sep 5 06:05:55.729616 containerd[1558]: 2025-09-05 06:05:55.658 [INFO][4213] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-74s42" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0", GenerateName:"calico-apiserver-5c5f977766-", Namespace:"calico-apiserver", SelfLink:"", UID:"f7382ce4-5666-4691-8bf5-34a61fdb9ff7", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5f977766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c5f977766-74s42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali451696b43e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:55.729616 containerd[1558]: 2025-09-05 06:05:55.658 [INFO][4213] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-74s42" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0" Sep 5 06:05:55.729616 containerd[1558]: 2025-09-05 06:05:55.658 [INFO][4213] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali451696b43e4 ContainerID="48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-74s42" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0" Sep 5 06:05:55.729616 containerd[1558]: 2025-09-05 06:05:55.666 [INFO][4213] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-74s42" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0" Sep 5 06:05:55.729616 containerd[1558]: 2025-09-05 06:05:55.667 [INFO][4213] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-74s42" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0", GenerateName:"calico-apiserver-5c5f977766-", Namespace:"calico-apiserver", SelfLink:"", UID:"f7382ce4-5666-4691-8bf5-34a61fdb9ff7", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5f977766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a", Pod:"calico-apiserver-5c5f977766-74s42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali451696b43e4", MAC:"ea:b1:72:70:75:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:55.729616 containerd[1558]: 2025-09-05 06:05:55.689 [INFO][4213] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" Namespace="calico-apiserver" Pod="calico-apiserver-5c5f977766-74s42" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c5f977766--74s42-eth0" Sep 5 06:05:55.740103 systemd[1]: Started cri-containerd-b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f.scope - libcontainer container b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f. Sep 5 06:05:55.778445 containerd[1558]: time="2025-09-05T06:05:55.778400083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f74b5dc-d97bw,Uid:94f834ed-2d8c-4f19-914d-17b860d4382a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982\"" Sep 5 06:05:55.782993 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:55.793037 systemd-networkd[1467]: calibbafa7803c9: Gained IPv6LL Sep 5 06:05:55.804580 containerd[1558]: time="2025-09-05T06:05:55.804473557Z" level=info msg="connecting to shim 48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a" address="unix:///run/containerd/s/5c1f5aa556d9fa40dd8466dcb415744633db628407f37280b6328fc68ed9e6f4" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:55.859035 systemd[1]: Started cri-containerd-48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a.scope - libcontainer container 48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a. 
Sep 5 06:05:55.865595 containerd[1558]: time="2025-09-05T06:05:55.865533689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2v74x,Uid:8f5933b0-2e1c-4861-957a-b0b91df9fee4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f\"" Sep 5 06:05:55.868068 kubelet[2714]: E0905 06:05:55.867998 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:55.878642 containerd[1558]: time="2025-09-05T06:05:55.878573090Z" level=info msg="CreateContainer within sandbox \"b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 06:05:55.909967 containerd[1558]: time="2025-09-05T06:05:55.909813715Z" level=info msg="Container a0097ea973f4ec10eaafecc212199647b0b78deeede029ef3be5c286eebb2e97: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:55.920601 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:55.927036 containerd[1558]: time="2025-09-05T06:05:55.926993952Z" level=info msg="CreateContainer within sandbox \"b7e716a8d56882e16022068a43958b6b5f8afb756fae6bc624c1f7d66d44266f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0097ea973f4ec10eaafecc212199647b0b78deeede029ef3be5c286eebb2e97\"" Sep 5 06:05:55.928898 containerd[1558]: time="2025-09-05T06:05:55.928052770Z" level=info msg="StartContainer for \"a0097ea973f4ec10eaafecc212199647b0b78deeede029ef3be5c286eebb2e97\"" Sep 5 06:05:55.929788 containerd[1558]: time="2025-09-05T06:05:55.929766056Z" level=info msg="connecting to shim a0097ea973f4ec10eaafecc212199647b0b78deeede029ef3be5c286eebb2e97" address="unix:///run/containerd/s/c656697880cc894c604bd99928b877dd33de21e38d736daa7ab8b66638c5c80b" protocol=ttrpc version=3 Sep 5 
06:05:55.982951 systemd[1]: Started cri-containerd-a0097ea973f4ec10eaafecc212199647b0b78deeede029ef3be5c286eebb2e97.scope - libcontainer container a0097ea973f4ec10eaafecc212199647b0b78deeede029ef3be5c286eebb2e97. Sep 5 06:05:56.014349 containerd[1558]: time="2025-09-05T06:05:56.014306030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5f977766-74s42,Uid:f7382ce4-5666-4691-8bf5-34a61fdb9ff7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a\"" Sep 5 06:05:56.040698 containerd[1558]: time="2025-09-05T06:05:56.040639950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dd7d8444-2jb59,Uid:7a42940d-4b8d-4933-ab9b-7ece64bf4c98,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:56.042956 containerd[1558]: time="2025-09-05T06:05:56.042877781Z" level=info msg="StartContainer for \"a0097ea973f4ec10eaafecc212199647b0b78deeede029ef3be5c286eebb2e97\" returns successfully" Sep 5 06:05:56.043180 kubelet[2714]: I0905 06:05:56.042958 2714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6" path="/var/lib/kubelet/pods/0be15ff9-35fd-4d6e-ac6f-5bfdea7aa3d6/volumes" Sep 5 06:05:56.112951 systemd-networkd[1467]: calia3c4ee197b4: Gained IPv6LL Sep 5 06:05:56.179448 systemd-networkd[1467]: cali1f312bbb44c: Link UP Sep 5 06:05:56.179850 systemd-networkd[1467]: cali1f312bbb44c: Gained carrier Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.095 [INFO][4643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0 calico-kube-controllers-64dd7d8444- calico-system 7a42940d-4b8d-4933-ab9b-7ece64bf4c98 898 0 2025-09-05 06:05:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64dd7d8444 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-64dd7d8444-2jb59 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1f312bbb44c [] [] }} ContainerID="ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" Namespace="calico-system" Pod="calico-kube-controllers-64dd7d8444-2jb59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-" Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.095 [INFO][4643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" Namespace="calico-system" Pod="calico-kube-controllers-64dd7d8444-2jb59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0" Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.138 [INFO][4666] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" HandleID="k8s-pod-network.ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" Workload="localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0" Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.138 [INFO][4666] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" HandleID="k8s-pod-network.ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" Workload="localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-64dd7d8444-2jb59", "timestamp":"2025-09-05 06:05:56.138139085 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.138 [INFO][4666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.138 [INFO][4666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.138 [INFO][4666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.146 [INFO][4666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" host="localhost" Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.151 [INFO][4666] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.155 [INFO][4666] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.158 [INFO][4666] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.160 [INFO][4666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.160 [INFO][4666] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" host="localhost" Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.161 [INFO][4666] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c Sep 5 06:05:56.198877 containerd[1558]: 
2025-09-05 06:05:56.165 [INFO][4666] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" host="localhost" Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.171 [INFO][4666] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" host="localhost" Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.171 [INFO][4666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" host="localhost" Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.171 [INFO][4666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:05:56.198877 containerd[1558]: 2025-09-05 06:05:56.171 [INFO][4666] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" HandleID="k8s-pod-network.ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" Workload="localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0" Sep 5 06:05:56.199789 containerd[1558]: 2025-09-05 06:05:56.175 [INFO][4643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" Namespace="calico-system" Pod="calico-kube-controllers-64dd7d8444-2jb59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0", GenerateName:"calico-kube-controllers-64dd7d8444-", Namespace:"calico-system", SelfLink:"", UID:"7a42940d-4b8d-4933-ab9b-7ece64bf4c98", 
ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64dd7d8444", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-64dd7d8444-2jb59", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f312bbb44c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:56.199789 containerd[1558]: 2025-09-05 06:05:56.175 [INFO][4643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" Namespace="calico-system" Pod="calico-kube-controllers-64dd7d8444-2jb59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0" Sep 5 06:05:56.199789 containerd[1558]: 2025-09-05 06:05:56.175 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f312bbb44c ContainerID="ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" Namespace="calico-system" Pod="calico-kube-controllers-64dd7d8444-2jb59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0" Sep 5 06:05:56.199789 containerd[1558]: 2025-09-05 
06:05:56.177 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" Namespace="calico-system" Pod="calico-kube-controllers-64dd7d8444-2jb59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0" Sep 5 06:05:56.199789 containerd[1558]: 2025-09-05 06:05:56.181 [INFO][4643] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" Namespace="calico-system" Pod="calico-kube-controllers-64dd7d8444-2jb59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0", GenerateName:"calico-kube-controllers-64dd7d8444-", Namespace:"calico-system", SelfLink:"", UID:"7a42940d-4b8d-4933-ab9b-7ece64bf4c98", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64dd7d8444", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c", Pod:"calico-kube-controllers-64dd7d8444-2jb59", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f312bbb44c", MAC:"ba:b0:a5:f8:44:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:56.199789 containerd[1558]: 2025-09-05 06:05:56.191 [INFO][4643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" Namespace="calico-system" Pod="calico-kube-controllers-64dd7d8444-2jb59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64dd7d8444--2jb59-eth0" Sep 5 06:05:56.225762 containerd[1558]: time="2025-09-05T06:05:56.225430917Z" level=info msg="connecting to shim ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c" address="unix:///run/containerd/s/26c012f3acee7ee8ecdd1985655beae26db60e11b7003ce39bce084d3ab70e2e" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:56.271911 systemd[1]: Started cri-containerd-ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c.scope - libcontainer container ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c. 
Sep 5 06:05:56.288140 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:56.321588 containerd[1558]: time="2025-09-05T06:05:56.321518352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64dd7d8444-2jb59,Uid:7a42940d-4b8d-4933-ab9b-7ece64bf4c98,Namespace:calico-system,Attempt:0,} returns sandbox id \"ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c\"" Sep 5 06:05:56.369583 kubelet[2714]: E0905 06:05:56.367953 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:56.370122 systemd-networkd[1467]: calif7ff9f1e6df: Gained IPv6LL Sep 5 06:05:56.373715 kubelet[2714]: E0905 06:05:56.373669 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:56.414485 kubelet[2714]: I0905 06:05:56.413909 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-52xll" podStartSLOduration=42.413878489 podStartE2EDuration="42.413878489s" podCreationTimestamp="2025-09-05 06:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:05:56.3985811 +0000 UTC m=+48.863873826" watchObservedRunningTime="2025-09-05 06:05:56.413878489 +0000 UTC m=+48.879171195" Sep 5 06:05:56.434426 systemd-networkd[1467]: vxlan.calico: Link UP Sep 5 06:05:56.435723 kubelet[2714]: I0905 06:05:56.435506 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2v74x" podStartSLOduration=42.435481617 podStartE2EDuration="42.435481617s" podCreationTimestamp="2025-09-05 06:05:14 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:05:56.415481389 +0000 UTC m=+48.880774105" watchObservedRunningTime="2025-09-05 06:05:56.435481617 +0000 UTC m=+48.900774333" Sep 5 06:05:56.434880 systemd-networkd[1467]: vxlan.calico: Gained carrier Sep 5 06:05:56.945063 systemd-networkd[1467]: cali615c220fc28: Gained IPv6LL Sep 5 06:05:57.008933 systemd-networkd[1467]: cali1906c72fde7: Gained IPv6LL Sep 5 06:05:57.295423 containerd[1558]: time="2025-09-05T06:05:57.295323789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:57.296200 containerd[1558]: time="2025-09-05T06:05:57.296145481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 5 06:05:57.298302 containerd[1558]: time="2025-09-05T06:05:57.298244942Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:57.301232 containerd[1558]: time="2025-09-05T06:05:57.301130860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:57.301520 containerd[1558]: time="2025-09-05T06:05:57.301472853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.052026309s" Sep 5 06:05:57.301559 containerd[1558]: time="2025-09-05T06:05:57.301519280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image 
reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 5 06:05:57.303023 containerd[1558]: time="2025-09-05T06:05:57.302981274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 06:05:57.309561 containerd[1558]: time="2025-09-05T06:05:57.309512304Z" level=info msg="CreateContainer within sandbox \"2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 06:05:57.331618 containerd[1558]: time="2025-09-05T06:05:57.331538594Z" level=info msg="Container edfc73e861d95d8c9b4c2b218d19a858515d6af161ea0ec206498fa237d9dad2: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:57.343757 containerd[1558]: time="2025-09-05T06:05:57.343683415Z" level=info msg="CreateContainer within sandbox \"2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"edfc73e861d95d8c9b4c2b218d19a858515d6af161ea0ec206498fa237d9dad2\"" Sep 5 06:05:57.344535 containerd[1558]: time="2025-09-05T06:05:57.344467917Z" level=info msg="StartContainer for \"edfc73e861d95d8c9b4c2b218d19a858515d6af161ea0ec206498fa237d9dad2\"" Sep 5 06:05:57.346972 containerd[1558]: time="2025-09-05T06:05:57.346852073Z" level=info msg="connecting to shim edfc73e861d95d8c9b4c2b218d19a858515d6af161ea0ec206498fa237d9dad2" address="unix:///run/containerd/s/f16b108b121432264e22bb412130461f0549749335ad1cead3adf14330f5647f" protocol=ttrpc version=3 Sep 5 06:05:57.385117 systemd[1]: Started cri-containerd-edfc73e861d95d8c9b4c2b218d19a858515d6af161ea0ec206498fa237d9dad2.scope - libcontainer container edfc73e861d95d8c9b4c2b218d19a858515d6af161ea0ec206498fa237d9dad2. 
Sep 5 06:05:57.386056 kubelet[2714]: E0905 06:05:57.385313 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:57.386621 kubelet[2714]: E0905 06:05:57.386564 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:57.393014 systemd-networkd[1467]: cali451696b43e4: Gained IPv6LL Sep 5 06:05:57.520963 systemd-networkd[1467]: cali79ea78563ad: Gained IPv6LL Sep 5 06:05:57.534771 containerd[1558]: time="2025-09-05T06:05:57.534693457Z" level=info msg="StartContainer for \"edfc73e861d95d8c9b4c2b218d19a858515d6af161ea0ec206498fa237d9dad2\" returns successfully" Sep 5 06:05:57.777003 systemd-networkd[1467]: cali1f312bbb44c: Gained IPv6LL Sep 5 06:05:57.841023 systemd-networkd[1467]: vxlan.calico: Gained IPv6LL Sep 5 06:05:58.390772 kubelet[2714]: E0905 06:05:58.390543 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:58.390772 kubelet[2714]: E0905 06:05:58.390607 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:05:58.881626 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:60870.service - OpenSSH per-connection server daemon (10.0.0.1:60870). Sep 5 06:05:58.966725 sshd[4849]: Accepted publickey for core from 10.0.0.1 port 60870 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g Sep 5 06:05:58.968928 sshd-session[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:58.975802 systemd-logind[1540]: New session 9 of user core. 
Sep 5 06:05:58.984917 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 06:05:59.129815 sshd[4852]: Connection closed by 10.0.0.1 port 60870 Sep 5 06:05:59.130315 sshd-session[4849]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:59.135638 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:60870.service: Deactivated successfully. Sep 5 06:05:59.138298 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 06:05:59.139316 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit. Sep 5 06:05:59.140673 systemd-logind[1540]: Removed session 9. Sep 5 06:06:00.958793 containerd[1558]: time="2025-09-05T06:06:00.958689125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:06:00.959881 containerd[1558]: time="2025-09-05T06:06:00.959842941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 5 06:06:00.961705 containerd[1558]: time="2025-09-05T06:06:00.961589990Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:06:00.964188 containerd[1558]: time="2025-09-05T06:06:00.964087819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:06:00.964888 containerd[1558]: time="2025-09-05T06:06:00.964758969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 
3.661725928s" Sep 5 06:06:00.964888 containerd[1558]: time="2025-09-05T06:06:00.964807199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 06:06:00.967106 containerd[1558]: time="2025-09-05T06:06:00.967053676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 06:06:00.974282 containerd[1558]: time="2025-09-05T06:06:00.974228112Z" level=info msg="CreateContainer within sandbox \"4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 06:06:00.987503 containerd[1558]: time="2025-09-05T06:06:00.987420234Z" level=info msg="Container 2ef2f977342ef6ed5db682a5d10e2fc57a0bd41b03a7fae0fdf7edc95557ca0e: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:06:01.000537 containerd[1558]: time="2025-09-05T06:06:01.000468596Z" level=info msg="CreateContainer within sandbox \"4dada33b0778c66f3bde77eff53be48642a96f707e12d00cc1acb7dc0f4f692d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2ef2f977342ef6ed5db682a5d10e2fc57a0bd41b03a7fae0fdf7edc95557ca0e\"" Sep 5 06:06:01.001231 containerd[1558]: time="2025-09-05T06:06:01.001188276Z" level=info msg="StartContainer for \"2ef2f977342ef6ed5db682a5d10e2fc57a0bd41b03a7fae0fdf7edc95557ca0e\"" Sep 5 06:06:01.002548 containerd[1558]: time="2025-09-05T06:06:01.002518203Z" level=info msg="connecting to shim 2ef2f977342ef6ed5db682a5d10e2fc57a0bd41b03a7fae0fdf7edc95557ca0e" address="unix:///run/containerd/s/bd9c26b971411b98f316d340fa66d52dbb6e162fd05aadb1421db536f861e0a4" protocol=ttrpc version=3 Sep 5 06:06:01.042095 systemd[1]: Started cri-containerd-2ef2f977342ef6ed5db682a5d10e2fc57a0bd41b03a7fae0fdf7edc95557ca0e.scope - libcontainer container 2ef2f977342ef6ed5db682a5d10e2fc57a0bd41b03a7fae0fdf7edc95557ca0e. 
Sep 5 06:06:01.298253 containerd[1558]: time="2025-09-05T06:06:01.297980106Z" level=info msg="StartContainer for \"2ef2f977342ef6ed5db682a5d10e2fc57a0bd41b03a7fae0fdf7edc95557ca0e\" returns successfully" Sep 5 06:06:02.409838 kubelet[2714]: I0905 06:06:02.409777 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:06:02.677978 kubelet[2714]: I0905 06:06:02.677477 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c5f977766-gtxxn" podStartSLOduration=33.023230546 podStartE2EDuration="38.677459474s" podCreationTimestamp="2025-09-05 06:05:24 +0000 UTC" firstStartedPulling="2025-09-05 06:05:55.311672414 +0000 UTC m=+47.776965130" lastFinishedPulling="2025-09-05 06:06:00.965901342 +0000 UTC m=+53.431194058" observedRunningTime="2025-09-05 06:06:01.420685187 +0000 UTC m=+53.885977923" watchObservedRunningTime="2025-09-05 06:06:02.677459474 +0000 UTC m=+55.142752190" Sep 5 06:06:04.142856 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:58592.service - OpenSSH per-connection server daemon (10.0.0.1:58592). Sep 5 06:06:04.597621 sshd[4924]: Accepted publickey for core from 10.0.0.1 port 58592 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g Sep 5 06:06:04.599998 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:06:04.605911 systemd-logind[1540]: New session 10 of user core. Sep 5 06:06:04.613930 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 06:06:04.891885 sshd[4927]: Connection closed by 10.0.0.1 port 58592 Sep 5 06:06:04.892316 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Sep 5 06:06:04.897992 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:58592.service: Deactivated successfully. Sep 5 06:06:04.901021 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 06:06:04.902070 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit. 
Sep 5 06:06:04.903974 systemd-logind[1540]: Removed session 10.
Sep 5 06:06:05.295583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620785302.mount: Deactivated successfully.
Sep 5 06:06:07.973073 containerd[1558]: time="2025-09-05T06:06:07.972889359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:07.975218 containerd[1558]: time="2025-09-05T06:06:07.975109304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 5 06:06:07.977859 containerd[1558]: time="2025-09-05T06:06:07.976972948Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:07.980229 containerd[1558]: time="2025-09-05T06:06:07.980155705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:07.981094 containerd[1558]: time="2025-09-05T06:06:07.981053952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 7.013937697s"
Sep 5 06:06:07.981094 containerd[1558]: time="2025-09-05T06:06:07.981098309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 5 06:06:07.983077 containerd[1558]: time="2025-09-05T06:06:07.982532112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 5 06:06:07.987653 containerd[1558]: time="2025-09-05T06:06:07.987577081Z" level=info msg="CreateContainer within sandbox \"6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 5 06:06:08.009765 containerd[1558]: time="2025-09-05T06:06:08.007252450Z" level=info msg="Container 21fc48af0eeb9456bbf5f768d6f81da50e38b8e4edd054b97c88adbc5603540b: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:06:08.035527 containerd[1558]: time="2025-09-05T06:06:08.035444258Z" level=info msg="CreateContainer within sandbox \"6c0806d797551e65514e325698ce85013394ca023d8c8c8d4fe3524bf11261ea\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"21fc48af0eeb9456bbf5f768d6f81da50e38b8e4edd054b97c88adbc5603540b\""
Sep 5 06:06:08.036356 containerd[1558]: time="2025-09-05T06:06:08.036307045Z" level=info msg="StartContainer for \"21fc48af0eeb9456bbf5f768d6f81da50e38b8e4edd054b97c88adbc5603540b\""
Sep 5 06:06:08.038112 containerd[1558]: time="2025-09-05T06:06:08.038057589Z" level=info msg="connecting to shim 21fc48af0eeb9456bbf5f768d6f81da50e38b8e4edd054b97c88adbc5603540b" address="unix:///run/containerd/s/4fd7b8d035a3fba42141427b933aad375c31b4720555a27760ce57d06f5a9c28" protocol=ttrpc version=3
Sep 5 06:06:08.112984 systemd[1]: Started cri-containerd-21fc48af0eeb9456bbf5f768d6f81da50e38b8e4edd054b97c88adbc5603540b.scope - libcontainer container 21fc48af0eeb9456bbf5f768d6f81da50e38b8e4edd054b97c88adbc5603540b.
Sep 5 06:06:08.194424 containerd[1558]: time="2025-09-05T06:06:08.194352100Z" level=info msg="StartContainer for \"21fc48af0eeb9456bbf5f768d6f81da50e38b8e4edd054b97c88adbc5603540b\" returns successfully"
Sep 5 06:06:08.543394 kubelet[2714]: I0905 06:06:08.543310    2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-8m6kc" podStartSLOduration=30.180980616 podStartE2EDuration="42.543288481s" podCreationTimestamp="2025-09-05 06:05:26 +0000 UTC" firstStartedPulling="2025-09-05 06:05:55.619851456 +0000 UTC m=+48.085144172" lastFinishedPulling="2025-09-05 06:06:07.982159311 +0000 UTC m=+60.447452037" observedRunningTime="2025-09-05 06:06:08.539988043 +0000 UTC m=+61.005280779" watchObservedRunningTime="2025-09-05 06:06:08.543288481 +0000 UTC m=+61.008581197"
Sep 5 06:06:08.583812 containerd[1558]: time="2025-09-05T06:06:08.583764462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21fc48af0eeb9456bbf5f768d6f81da50e38b8e4edd054b97c88adbc5603540b\" id:\"48380b1bf6c0a1fbb3c635ddf898439549d36033417fdc0e17609d59841c9160\" pid:5029 exit_status:1 exited_at:{seconds:1757052368 nanos:583283091}"
Sep 5 06:06:09.506924 containerd[1558]: time="2025-09-05T06:06:09.506864550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21fc48af0eeb9456bbf5f768d6f81da50e38b8e4edd054b97c88adbc5603540b\" id:\"59b6fe26de617111c03456e682bc4b8e9b55a128eb29277c58670cd4d40dd026\" pid:5056 exit_status:1 exited_at:{seconds:1757052369 nanos:506505687}"
Sep 5 06:06:09.908975 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:51284.service - OpenSSH per-connection server daemon (10.0.0.1:51284).
Sep 5 06:06:10.008896 sshd[5069]: Accepted publickey for core from 10.0.0.1 port 51284 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:10.013681 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:10.018478 systemd-logind[1540]: New session 11 of user core.
Sep 5 06:06:10.028934 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 5 06:06:10.260582 sshd[5072]: Connection closed by 10.0.0.1 port 51284
Sep 5 06:06:10.261016 sshd-session[5069]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:10.266369 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:51284.service: Deactivated successfully.
Sep 5 06:06:10.268722 systemd[1]: session-11.scope: Deactivated successfully.
Sep 5 06:06:10.269623 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit.
Sep 5 06:06:10.270865 systemd-logind[1540]: Removed session 11.
Sep 5 06:06:11.362463 containerd[1558]: time="2025-09-05T06:06:11.362386299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:11.363305 containerd[1558]: time="2025-09-05T06:06:11.363230937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 5 06:06:11.364506 containerd[1558]: time="2025-09-05T06:06:11.364446541Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:11.366835 containerd[1558]: time="2025-09-05T06:06:11.366763729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:11.367368 containerd[1558]: time="2025-09-05T06:06:11.367328438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.384768191s"
Sep 5 06:06:11.367368 containerd[1558]: time="2025-09-05T06:06:11.367364879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 5 06:06:11.372411 containerd[1558]: time="2025-09-05T06:06:11.372378215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 5 06:06:11.380582 containerd[1558]: time="2025-09-05T06:06:11.380505666Z" level=info msg="CreateContainer within sandbox \"a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 5 06:06:11.391173 containerd[1558]: time="2025-09-05T06:06:11.391111817Z" level=info msg="Container 69425d2ce807fd64b7ff6cce9ba0d6e8bdf9239b85c5b2797cfe9f30126f3d0f: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:06:11.402167 containerd[1558]: time="2025-09-05T06:06:11.402066540Z" level=info msg="CreateContainer within sandbox \"a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"69425d2ce807fd64b7ff6cce9ba0d6e8bdf9239b85c5b2797cfe9f30126f3d0f\""
Sep 5 06:06:11.402841 containerd[1558]: time="2025-09-05T06:06:11.402776929Z" level=info msg="StartContainer for \"69425d2ce807fd64b7ff6cce9ba0d6e8bdf9239b85c5b2797cfe9f30126f3d0f\""
Sep 5 06:06:11.404299 containerd[1558]: time="2025-09-05T06:06:11.404265179Z" level=info msg="connecting to shim 69425d2ce807fd64b7ff6cce9ba0d6e8bdf9239b85c5b2797cfe9f30126f3d0f" address="unix:///run/containerd/s/e1005064a33b8f70d7f9587a710709954faa5184d5fdeaa15660abedad915cb0" protocol=ttrpc version=3
Sep 5 06:06:11.432013 systemd[1]: Started cri-containerd-69425d2ce807fd64b7ff6cce9ba0d6e8bdf9239b85c5b2797cfe9f30126f3d0f.scope - libcontainer container 69425d2ce807fd64b7ff6cce9ba0d6e8bdf9239b85c5b2797cfe9f30126f3d0f.
Sep 5 06:06:11.745983 containerd[1558]: time="2025-09-05T06:06:11.745932028Z" level=info msg="StartContainer for \"69425d2ce807fd64b7ff6cce9ba0d6e8bdf9239b85c5b2797cfe9f30126f3d0f\" returns successfully"
Sep 5 06:06:12.015821 containerd[1558]: time="2025-09-05T06:06:12.015588627Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:12.017599 containerd[1558]: time="2025-09-05T06:06:12.017547181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 5 06:06:12.019476 containerd[1558]: time="2025-09-05T06:06:12.019419188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 647.002267ms"
Sep 5 06:06:12.019476 containerd[1558]: time="2025-09-05T06:06:12.019460788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 5 06:06:12.022617 containerd[1558]: time="2025-09-05T06:06:12.021989079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 5 06:06:12.037668 containerd[1558]: time="2025-09-05T06:06:12.037593795Z" level=info msg="CreateContainer within sandbox \"48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 06:06:12.056089 containerd[1558]: time="2025-09-05T06:06:12.055148069Z" level=info msg="Container 1ac4dfec2251b0cd8b6cdd74a0eac0733099055a3be9f356b8c83e02e5829bae: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:06:12.080885 containerd[1558]: time="2025-09-05T06:06:12.080798150Z" level=info msg="CreateContainer within sandbox \"48d206fc8939194412d14f9fa59fd6098f5e20f02b756a345440446dbd03459a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1ac4dfec2251b0cd8b6cdd74a0eac0733099055a3be9f356b8c83e02e5829bae\""
Sep 5 06:06:12.081602 containerd[1558]: time="2025-09-05T06:06:12.081554017Z" level=info msg="StartContainer for \"1ac4dfec2251b0cd8b6cdd74a0eac0733099055a3be9f356b8c83e02e5829bae\""
Sep 5 06:06:12.083255 containerd[1558]: time="2025-09-05T06:06:12.083217151Z" level=info msg="connecting to shim 1ac4dfec2251b0cd8b6cdd74a0eac0733099055a3be9f356b8c83e02e5829bae" address="unix:///run/containerd/s/5c1f5aa556d9fa40dd8466dcb415744633db628407f37280b6328fc68ed9e6f4" protocol=ttrpc version=3
Sep 5 06:06:12.117170 systemd[1]: Started cri-containerd-1ac4dfec2251b0cd8b6cdd74a0eac0733099055a3be9f356b8c83e02e5829bae.scope - libcontainer container 1ac4dfec2251b0cd8b6cdd74a0eac0733099055a3be9f356b8c83e02e5829bae.
Sep 5 06:06:12.335102 containerd[1558]: time="2025-09-05T06:06:12.334969187Z" level=info msg="StartContainer for \"1ac4dfec2251b0cd8b6cdd74a0eac0733099055a3be9f356b8c83e02e5829bae\" returns successfully"
Sep 5 06:06:12.622142 kubelet[2714]: I0905 06:06:12.621793    2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c5f977766-74s42" podStartSLOduration=32.617608855 podStartE2EDuration="48.621768791s" podCreationTimestamp="2025-09-05 06:05:24 +0000 UTC" firstStartedPulling="2025-09-05 06:05:56.016228078 +0000 UTC m=+48.481520794" lastFinishedPulling="2025-09-05 06:06:12.020388014 +0000 UTC m=+64.485680730" observedRunningTime="2025-09-05 06:06:12.621162212 +0000 UTC m=+65.086454928" watchObservedRunningTime="2025-09-05 06:06:12.621768791 +0000 UTC m=+65.087061507"
Sep 5 06:06:15.274337 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:51300.service - OpenSSH per-connection server daemon (10.0.0.1:51300).
Sep 5 06:06:15.652142 sshd[5165]: Accepted publickey for core from 10.0.0.1 port 51300 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:15.657456 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:15.663014 systemd-logind[1540]: New session 12 of user core.
Sep 5 06:06:15.669897 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 5 06:06:15.847778 sshd[5168]: Connection closed by 10.0.0.1 port 51300
Sep 5 06:06:15.846827 sshd-session[5165]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:15.860226 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:51300.service: Deactivated successfully.
Sep 5 06:06:15.863443 systemd[1]: session-12.scope: Deactivated successfully.
Sep 5 06:06:15.865501 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit.
Sep 5 06:06:15.869251 systemd-logind[1540]: Removed session 12.
Sep 5 06:06:15.872136 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:51304.service - OpenSSH per-connection server daemon (10.0.0.1:51304).
Sep 5 06:06:15.942029 sshd[5187]: Accepted publickey for core from 10.0.0.1 port 51304 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:15.944228 sshd-session[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:15.951625 systemd-logind[1540]: New session 13 of user core.
Sep 5 06:06:15.957973 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 5 06:06:16.285163 sshd[5190]: Connection closed by 10.0.0.1 port 51304
Sep 5 06:06:16.288907 sshd-session[5187]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:16.298748 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:51304.service: Deactivated successfully.
Sep 5 06:06:16.301812 systemd[1]: session-13.scope: Deactivated successfully.
Sep 5 06:06:16.302952 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit.
Sep 5 06:06:16.308625 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:51320.service - OpenSSH per-connection server daemon (10.0.0.1:51320).
Sep 5 06:06:16.310272 systemd-logind[1540]: Removed session 13.
Sep 5 06:06:16.362940 sshd[5202]: Accepted publickey for core from 10.0.0.1 port 51320 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:16.365304 sshd-session[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:16.372815 systemd-logind[1540]: New session 14 of user core.
Sep 5 06:06:16.382063 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 5 06:06:16.863842 sshd[5205]: Connection closed by 10.0.0.1 port 51320
Sep 5 06:06:16.864355 sshd-session[5202]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:16.870348 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:51320.service: Deactivated successfully.
Sep 5 06:06:16.875731 systemd[1]: session-14.scope: Deactivated successfully.
Sep 5 06:06:16.877202 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit.
Sep 5 06:06:16.880524 systemd-logind[1540]: Removed session 14.
Sep 5 06:06:17.685029 containerd[1558]: time="2025-09-05T06:06:17.684723345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:17.720999 containerd[1558]: time="2025-09-05T06:06:17.720905705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 5 06:06:17.756019 containerd[1558]: time="2025-09-05T06:06:17.755911194Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:17.814281 containerd[1558]: time="2025-09-05T06:06:17.814193601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:17.815030 containerd[1558]: time="2025-09-05T06:06:17.814997444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.792956375s"
Sep 5 06:06:17.815104 containerd[1558]: time="2025-09-05T06:06:17.815034105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 5 06:06:17.815987 containerd[1558]: time="2025-09-05T06:06:17.815912171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 5 06:06:17.834884 containerd[1558]: time="2025-09-05T06:06:17.834810410Z" level=info msg="CreateContainer within sandbox \"ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 5 06:06:17.854092 containerd[1558]: time="2025-09-05T06:06:17.854022071Z" level=info msg="Container e36aff4fb2ebc0840375445319d1605f20ed9bd9f8017aef2cc15fe72e0cb255: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:06:17.890348 containerd[1558]: time="2025-09-05T06:06:17.890287100Z" level=info msg="CreateContainer within sandbox \"ceaec4ef2f549d949539fc7a4a47679b35f95f4c4a827f72c81023115cb6803c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e36aff4fb2ebc0840375445319d1605f20ed9bd9f8017aef2cc15fe72e0cb255\""
Sep 5 06:06:17.890982 containerd[1558]: time="2025-09-05T06:06:17.890889997Z" level=info msg="StartContainer for \"e36aff4fb2ebc0840375445319d1605f20ed9bd9f8017aef2cc15fe72e0cb255\""
Sep 5 06:06:17.892334 containerd[1558]: time="2025-09-05T06:06:17.892289535Z" level=info msg="connecting to shim e36aff4fb2ebc0840375445319d1605f20ed9bd9f8017aef2cc15fe72e0cb255" address="unix:///run/containerd/s/26c012f3acee7ee8ecdd1985655beae26db60e11b7003ce39bce084d3ab70e2e" protocol=ttrpc version=3
Sep 5 06:06:17.925130 systemd[1]: Started cri-containerd-e36aff4fb2ebc0840375445319d1605f20ed9bd9f8017aef2cc15fe72e0cb255.scope - libcontainer container e36aff4fb2ebc0840375445319d1605f20ed9bd9f8017aef2cc15fe72e0cb255.
Sep 5 06:06:17.991951 containerd[1558]: time="2025-09-05T06:06:17.991715737Z" level=info msg="StartContainer for \"e36aff4fb2ebc0840375445319d1605f20ed9bd9f8017aef2cc15fe72e0cb255\" returns successfully"
Sep 5 06:06:18.478840 kubelet[2714]: I0905 06:06:18.476460    2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64dd7d8444-2jb59" podStartSLOduration=29.983923373 podStartE2EDuration="51.476441922s" podCreationTimestamp="2025-09-05 06:05:27 +0000 UTC" firstStartedPulling="2025-09-05 06:05:56.323143322 +0000 UTC m=+48.788436038" lastFinishedPulling="2025-09-05 06:06:17.81566187 +0000 UTC m=+70.280954587" observedRunningTime="2025-09-05 06:06:18.47549278 +0000 UTC m=+70.940785497" watchObservedRunningTime="2025-09-05 06:06:18.476441922 +0000 UTC m=+70.941734638"
Sep 5 06:06:18.522319 containerd[1558]: time="2025-09-05T06:06:18.522277203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e36aff4fb2ebc0840375445319d1605f20ed9bd9f8017aef2cc15fe72e0cb255\" id:\"76de293b4580fd617ccc8a55fd1d76366a53cc1ff3e532a84e4d964e92855dc3\" pid:5286 exited_at:{seconds:1757052378 nanos:521943353}"
Sep 5 06:06:19.605443 containerd[1558]: time="2025-09-05T06:06:19.605364010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:19.606163 containerd[1558]: time="2025-09-05T06:06:19.606137343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 5 06:06:19.607775 containerd[1558]: time="2025-09-05T06:06:19.607710039Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:19.610044 containerd[1558]: time="2025-09-05T06:06:19.609947952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:19.610631 containerd[1558]: time="2025-09-05T06:06:19.610593079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.794638989s"
Sep 5 06:06:19.610672 containerd[1558]: time="2025-09-05T06:06:19.610627495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 5 06:06:19.617103 containerd[1558]: time="2025-09-05T06:06:19.617059451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 5 06:06:19.622308 containerd[1558]: time="2025-09-05T06:06:19.622243294Z" level=info msg="CreateContainer within sandbox \"2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 5 06:06:19.636206 containerd[1558]: time="2025-09-05T06:06:19.636051255Z" level=info msg="Container 808f827aca74e4d7e8cd4cb47e47873814f452f2b0ffb44933377eda16f5ba27: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:06:19.647297 containerd[1558]: time="2025-09-05T06:06:19.647235997Z" level=info msg="CreateContainer within sandbox \"2d893110c2eaabb64b39b30819b8b23d73d4e89bc621e853954b8bafb9c02806\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"808f827aca74e4d7e8cd4cb47e47873814f452f2b0ffb44933377eda16f5ba27\""
Sep 5 06:06:19.647866 containerd[1558]: time="2025-09-05T06:06:19.647833864Z" level=info msg="StartContainer for \"808f827aca74e4d7e8cd4cb47e47873814f452f2b0ffb44933377eda16f5ba27\""
Sep 5 06:06:19.649327 containerd[1558]: time="2025-09-05T06:06:19.649304484Z" level=info msg="connecting to shim 808f827aca74e4d7e8cd4cb47e47873814f452f2b0ffb44933377eda16f5ba27" address="unix:///run/containerd/s/f16b108b121432264e22bb412130461f0549749335ad1cead3adf14330f5647f" protocol=ttrpc version=3
Sep 5 06:06:19.678956 systemd[1]: Started cri-containerd-808f827aca74e4d7e8cd4cb47e47873814f452f2b0ffb44933377eda16f5ba27.scope - libcontainer container 808f827aca74e4d7e8cd4cb47e47873814f452f2b0ffb44933377eda16f5ba27.
Sep 5 06:06:19.727572 containerd[1558]: time="2025-09-05T06:06:19.727519024Z" level=info msg="StartContainer for \"808f827aca74e4d7e8cd4cb47e47873814f452f2b0ffb44933377eda16f5ba27\" returns successfully"
Sep 5 06:06:20.156543 kubelet[2714]: I0905 06:06:20.156483    2714 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 5 06:06:20.157637 kubelet[2714]: I0905 06:06:20.157614    2714 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 5 06:06:20.858788 kubelet[2714]: I0905 06:06:20.858410    2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pc5f5" podStartSLOduration=29.489743957 podStartE2EDuration="53.858387804s" podCreationTimestamp="2025-09-05 06:05:27 +0000 UTC" firstStartedPulling="2025-09-05 06:05:55.248265055 +0000 UTC m=+47.713557771" lastFinishedPulling="2025-09-05 06:06:19.616908902 +0000 UTC m=+72.082201618" observedRunningTime="2025-09-05 06:06:20.858178092 +0000 UTC m=+73.323470818" watchObservedRunningTime="2025-09-05 06:06:20.858387804 +0000 UTC m=+73.323680520"
Sep 5 06:06:21.881151 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:37238.service - OpenSSH per-connection server daemon (10.0.0.1:37238).
Sep 5 06:06:22.052054 sshd[5336]: Accepted publickey for core from 10.0.0.1 port 37238 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:22.053784 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:22.058701 systemd-logind[1540]: New session 15 of user core.
Sep 5 06:06:22.066875 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 5 06:06:22.319098 sshd[5339]: Connection closed by 10.0.0.1 port 37238
Sep 5 06:06:22.319507 sshd-session[5336]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:22.324542 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:37238.service: Deactivated successfully.
Sep 5 06:06:22.326707 systemd[1]: session-15.scope: Deactivated successfully.
Sep 5 06:06:22.327529 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit.
Sep 5 06:06:22.329150 systemd-logind[1540]: Removed session 15.
Sep 5 06:06:23.458538 containerd[1558]: time="2025-09-05T06:06:23.458483400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa7fa0aa4e2ad18ad3ea1804cb630c31fee2d263cc23a8a6f23d0ff395149bdf\" id:\"b9f7939871bb95534e6e74a0c2d6977a01dd4b93fe53d7ec59c620cf1f60e7ec\" pid:5364 exited_at:{seconds:1757052383 nanos:458056674}"
Sep 5 06:06:23.560065 containerd[1558]: time="2025-09-05T06:06:23.559994561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa7fa0aa4e2ad18ad3ea1804cb630c31fee2d263cc23a8a6f23d0ff395149bdf\" id:\"11a274f70b5058147d9cee689ba19e0fbc43e2abadfc1fb1e0ba4ab471386819\" pid:5390 exited_at:{seconds:1757052383 nanos:559666493}"
Sep 5 06:06:24.702405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount69044293.mount: Deactivated successfully.
Sep 5 06:06:24.729504 containerd[1558]: time="2025-09-05T06:06:24.729439194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:24.730321 containerd[1558]: time="2025-09-05T06:06:24.730275594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 5 06:06:24.731513 containerd[1558]: time="2025-09-05T06:06:24.731473665Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:24.734114 containerd[1558]: time="2025-09-05T06:06:24.734079089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:06:24.734629 containerd[1558]: time="2025-09-05T06:06:24.734584936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.117492493s"
Sep 5 06:06:24.734687 containerd[1558]: time="2025-09-05T06:06:24.734632628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 5 06:06:24.745544 containerd[1558]: time="2025-09-05T06:06:24.745483302Z" level=info msg="CreateContainer within sandbox \"a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 5 06:06:24.754859 containerd[1558]: time="2025-09-05T06:06:24.754800634Z" level=info msg="Container 55593bdd76bb126d080ca3346b4caf5fe6651c2aeb6adafadb25ba781e1d2513: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:06:24.764582 containerd[1558]: time="2025-09-05T06:06:24.764507350Z" level=info msg="CreateContainer within sandbox \"a37c392f13141499636f5343481bb1d31ae906578ba778c80ce1bced55b32982\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"55593bdd76bb126d080ca3346b4caf5fe6651c2aeb6adafadb25ba781e1d2513\""
Sep 5 06:06:24.765145 containerd[1558]: time="2025-09-05T06:06:24.765114250Z" level=info msg="StartContainer for \"55593bdd76bb126d080ca3346b4caf5fe6651c2aeb6adafadb25ba781e1d2513\""
Sep 5 06:06:24.766487 containerd[1558]: time="2025-09-05T06:06:24.766458482Z" level=info msg="connecting to shim 55593bdd76bb126d080ca3346b4caf5fe6651c2aeb6adafadb25ba781e1d2513" address="unix:///run/containerd/s/e1005064a33b8f70d7f9587a710709954faa5184d5fdeaa15660abedad915cb0" protocol=ttrpc version=3
Sep 5 06:06:24.802970 systemd[1]: Started cri-containerd-55593bdd76bb126d080ca3346b4caf5fe6651c2aeb6adafadb25ba781e1d2513.scope - libcontainer container 55593bdd76bb126d080ca3346b4caf5fe6651c2aeb6adafadb25ba781e1d2513.
Sep 5 06:06:25.007124 containerd[1558]: time="2025-09-05T06:06:25.007079524Z" level=info msg="StartContainer for \"55593bdd76bb126d080ca3346b4caf5fe6651c2aeb6adafadb25ba781e1d2513\" returns successfully"
Sep 5 06:06:25.039166 kubelet[2714]: E0905 06:06:25.039121    2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:06:26.039935 kubelet[2714]: E0905 06:06:26.039305    2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:06:27.335517 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:37240.service - OpenSSH per-connection server daemon (10.0.0.1:37240).
Sep 5 06:06:27.435549 sshd[5454]: Accepted publickey for core from 10.0.0.1 port 37240 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:27.437731 sshd-session[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:27.447805 systemd-logind[1540]: New session 16 of user core.
Sep 5 06:06:27.455017 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 06:06:27.707033 sshd[5457]: Connection closed by 10.0.0.1 port 37240
Sep 5 06:06:27.708025 sshd-session[5454]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:27.714359 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:37240.service: Deactivated successfully.
Sep 5 06:06:27.717352 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 06:06:27.718944 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit.
Sep 5 06:06:27.720960 systemd-logind[1540]: Removed session 16.
Sep 5 06:06:31.038816 kubelet[2714]: E0905 06:06:31.038729 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:06:32.723052 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:41254.service - OpenSSH per-connection server daemon (10.0.0.1:41254).
Sep 5 06:06:32.801533 sshd[5470]: Accepted publickey for core from 10.0.0.1 port 41254 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:32.803871 sshd-session[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:32.809665 systemd-logind[1540]: New session 17 of user core.
Sep 5 06:06:32.825107 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 06:06:32.969851 sshd[5473]: Connection closed by 10.0.0.1 port 41254
Sep 5 06:06:32.970201 sshd-session[5470]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:32.977258 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:41254.service: Deactivated successfully.
Sep 5 06:06:32.979811 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 06:06:32.980848 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit.
Sep 5 06:06:32.982526 systemd-logind[1540]: Removed session 17.
Sep 5 06:06:34.898319 containerd[1558]: time="2025-09-05T06:06:34.898259619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e36aff4fb2ebc0840375445319d1605f20ed9bd9f8017aef2cc15fe72e0cb255\" id:\"ad4e39782d131a5e48b46a21163de0680c6e9734b39746f47568e388dbda938f\" pid:5497 exited_at:{seconds:1757052394 nanos:897924730}"
Sep 5 06:06:37.992068 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:41268.service - OpenSSH per-connection server daemon (10.0.0.1:41268).
Sep 5 06:06:38.040107 kubelet[2714]: E0905 06:06:38.040062 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:06:38.067912 sshd[5516]: Accepted publickey for core from 10.0.0.1 port 41268 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:38.069658 sshd-session[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:38.074547 systemd-logind[1540]: New session 18 of user core.
Sep 5 06:06:38.080930 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 06:06:38.348351 sshd[5519]: Connection closed by 10.0.0.1 port 41268
Sep 5 06:06:38.348982 sshd-session[5516]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:38.353772 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:41268.service: Deactivated successfully.
Sep 5 06:06:38.355971 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 06:06:38.356856 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit.
Sep 5 06:06:38.358236 systemd-logind[1540]: Removed session 18.
Sep 5 06:06:39.568670 containerd[1558]: time="2025-09-05T06:06:39.568605686Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21fc48af0eeb9456bbf5f768d6f81da50e38b8e4edd054b97c88adbc5603540b\" id:\"d35a2d0dfb9c44f2fad4ca6565a7a7f6d87bfa3ee1f585dbbf424a98e756e055\" pid:5543 exited_at:{seconds:1757052399 nanos:568187050}"
Sep 5 06:06:39.588958 kubelet[2714]: I0905 06:06:39.587594 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f74b5dc-d97bw" podStartSLOduration=16.629455465 podStartE2EDuration="45.587573058s" podCreationTimestamp="2025-09-05 06:05:54 +0000 UTC" firstStartedPulling="2025-09-05 06:05:55.781642702 +0000 UTC m=+48.246935418" lastFinishedPulling="2025-09-05 06:06:24.739760295 +0000 UTC m=+77.205053011" observedRunningTime="2025-09-05 06:06:25.51771485 +0000 UTC m=+77.983007566" watchObservedRunningTime="2025-09-05 06:06:39.587573058 +0000 UTC m=+92.052865794"
Sep 5 06:06:42.039412 kubelet[2714]: E0905 06:06:42.039334 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:06:43.363324 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:45948.service - OpenSSH per-connection server daemon (10.0.0.1:45948).
Sep 5 06:06:43.423581 sshd[5558]: Accepted publickey for core from 10.0.0.1 port 45948 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:43.425661 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:43.430895 systemd-logind[1540]: New session 19 of user core.
Sep 5 06:06:43.440878 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 06:06:43.614687 sshd[5561]: Connection closed by 10.0.0.1 port 45948
Sep 5 06:06:43.614960 sshd-session[5558]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:43.620713 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:45948.service: Deactivated successfully.
Sep 5 06:06:43.623009 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 06:06:43.623852 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit.
Sep 5 06:06:43.625402 systemd-logind[1540]: Removed session 19.
Sep 5 06:06:48.516679 containerd[1558]: time="2025-09-05T06:06:48.516627471Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e36aff4fb2ebc0840375445319d1605f20ed9bd9f8017aef2cc15fe72e0cb255\" id:\"aa98acb1ecce484519e614b7c4772a5adf1e2a77324a3cfcf0f48cadae14d487\" pid:5586 exited_at:{seconds:1757052408 nanos:516412152}"
Sep 5 06:06:48.630469 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:45960.service - OpenSSH per-connection server daemon (10.0.0.1:45960).
Sep 5 06:06:48.684762 sshd[5597]: Accepted publickey for core from 10.0.0.1 port 45960 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:48.686617 sshd-session[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:48.691653 systemd-logind[1540]: New session 20 of user core.
Sep 5 06:06:48.709020 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 06:06:48.843648 sshd[5600]: Connection closed by 10.0.0.1 port 45960
Sep 5 06:06:48.843972 sshd-session[5597]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:48.852888 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:45960.service: Deactivated successfully.
Sep 5 06:06:48.855162 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 06:06:48.856035 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit.
Sep 5 06:06:48.859202 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:45974.service - OpenSSH per-connection server daemon (10.0.0.1:45974).
Sep 5 06:06:48.860475 systemd-logind[1540]: Removed session 20.
Sep 5 06:06:48.915700 sshd[5613]: Accepted publickey for core from 10.0.0.1 port 45974 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:48.917254 sshd-session[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:48.922123 systemd-logind[1540]: New session 21 of user core.
Sep 5 06:06:48.929888 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 5 06:06:49.823036 sshd[5616]: Connection closed by 10.0.0.1 port 45974
Sep 5 06:06:49.823523 sshd-session[5613]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:49.837103 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:45974.service: Deactivated successfully.
Sep 5 06:06:49.839421 systemd[1]: session-21.scope: Deactivated successfully.
Sep 5 06:06:49.840464 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit.
Sep 5 06:06:49.843559 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:45990.service - OpenSSH per-connection server daemon (10.0.0.1:45990).
Sep 5 06:06:49.844330 systemd-logind[1540]: Removed session 21.
Sep 5 06:06:49.911203 sshd[5627]: Accepted publickey for core from 10.0.0.1 port 45990 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:49.912834 sshd-session[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:49.917592 systemd-logind[1540]: New session 22 of user core.
Sep 5 06:06:49.924952 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 5 06:06:50.685211 sshd[5630]: Connection closed by 10.0.0.1 port 45990
Sep 5 06:06:50.685814 sshd-session[5627]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:50.699763 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:45990.service: Deactivated successfully.
Sep 5 06:06:50.702477 systemd[1]: session-22.scope: Deactivated successfully.
Sep 5 06:06:50.703486 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit.
Sep 5 06:06:50.707542 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:54490.service - OpenSSH per-connection server daemon (10.0.0.1:54490).
Sep 5 06:06:50.709005 systemd-logind[1540]: Removed session 22.
Sep 5 06:06:50.771120 sshd[5652]: Accepted publickey for core from 10.0.0.1 port 54490 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:50.773089 sshd-session[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:50.779111 systemd-logind[1540]: New session 23 of user core.
Sep 5 06:06:50.791979 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 5 06:06:51.160425 sshd[5655]: Connection closed by 10.0.0.1 port 54490
Sep 5 06:06:51.160923 sshd-session[5652]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:51.173647 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:54490.service: Deactivated successfully.
Sep 5 06:06:51.177998 systemd[1]: session-23.scope: Deactivated successfully.
Sep 5 06:06:51.181367 systemd-logind[1540]: Session 23 logged out. Waiting for processes to exit.
Sep 5 06:06:51.184757 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:54496.service - OpenSSH per-connection server daemon (10.0.0.1:54496).
Sep 5 06:06:51.186678 systemd-logind[1540]: Removed session 23.
Sep 5 06:06:51.244613 sshd[5666]: Accepted publickey for core from 10.0.0.1 port 54496 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:51.247342 sshd-session[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:51.253628 systemd-logind[1540]: New session 24 of user core.
Sep 5 06:06:51.260933 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 5 06:06:51.390287 sshd[5669]: Connection closed by 10.0.0.1 port 54496
Sep 5 06:06:51.390801 sshd-session[5666]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:51.396816 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:54496.service: Deactivated successfully.
Sep 5 06:06:51.399274 systemd[1]: session-24.scope: Deactivated successfully.
Sep 5 06:06:51.400355 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit.
Sep 5 06:06:51.402014 systemd-logind[1540]: Removed session 24.
Sep 5 06:06:53.546124 containerd[1558]: time="2025-09-05T06:06:53.546072007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa7fa0aa4e2ad18ad3ea1804cb630c31fee2d263cc23a8a6f23d0ff395149bdf\" id:\"3167b2966e13a831ed55303453ce044acaa863e224e3343ca00c66ad85e80b1f\" pid:5693 exited_at:{seconds:1757052413 nanos:545696004}"
Sep 5 06:06:56.406027 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:54510.service - OpenSSH per-connection server daemon (10.0.0.1:54510).
Sep 5 06:06:56.471635 sshd[5707]: Accepted publickey for core from 10.0.0.1 port 54510 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:06:56.474142 sshd-session[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:06:56.479708 systemd-logind[1540]: New session 25 of user core.
Sep 5 06:06:56.486918 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 5 06:06:56.610794 sshd[5710]: Connection closed by 10.0.0.1 port 54510
Sep 5 06:06:56.611259 sshd-session[5707]: pam_unix(sshd:session): session closed for user core
Sep 5 06:06:56.617420 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:54510.service: Deactivated successfully.
Sep 5 06:06:56.620260 systemd[1]: session-25.scope: Deactivated successfully.
Sep 5 06:06:56.621275 systemd-logind[1540]: Session 25 logged out. Waiting for processes to exit.
Sep 5 06:06:56.623051 systemd-logind[1540]: Removed session 25.
Sep 5 06:07:01.630182 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:57714.service - OpenSSH per-connection server daemon (10.0.0.1:57714).
Sep 5 06:07:01.728826 sshd[5726]: Accepted publickey for core from 10.0.0.1 port 57714 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:07:01.731239 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:07:01.735704 systemd-logind[1540]: New session 26 of user core.
Sep 5 06:07:01.741899 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 5 06:07:01.963549 sshd[5729]: Connection closed by 10.0.0.1 port 57714
Sep 5 06:07:01.964955 sshd-session[5726]: pam_unix(sshd:session): session closed for user core
Sep 5 06:07:01.970181 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:57714.service: Deactivated successfully.
Sep 5 06:07:01.973277 systemd[1]: session-26.scope: Deactivated successfully.
Sep 5 06:07:01.974308 systemd-logind[1540]: Session 26 logged out. Waiting for processes to exit.
Sep 5 06:07:01.976514 systemd-logind[1540]: Removed session 26.
Sep 5 06:07:06.978228 systemd[1]: Started sshd@26-10.0.0.16:22-10.0.0.1:57728.service - OpenSSH per-connection server daemon (10.0.0.1:57728).
Sep 5 06:07:07.038209 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 57728 ssh2: RSA SHA256:T5qcG59uAia43oig5RBtb+eF92ubXiITNZz/7bMi53g
Sep 5 06:07:07.041118 sshd-session[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:07:07.048860 systemd-logind[1540]: New session 27 of user core.
Sep 5 06:07:07.055084 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 5 06:07:07.221434 containerd[1558]: time="2025-09-05T06:07:07.221376252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21fc48af0eeb9456bbf5f768d6f81da50e38b8e4edd054b97c88adbc5603540b\" id:\"17bcf0aad743ed3c7a87f769c76d9f36374b3761c68d119f1a0bdd46ac619d52\" pid:5759 exited_at:{seconds:1757052427 nanos:221036510}"
Sep 5 06:07:07.317220 sshd[5755]: Connection closed by 10.0.0.1 port 57728
Sep 5 06:07:07.317854 sshd-session[5742]: pam_unix(sshd:session): session closed for user core
Sep 5 06:07:07.323145 systemd[1]: sshd@26-10.0.0.16:22-10.0.0.1:57728.service: Deactivated successfully.
Sep 5 06:07:07.326649 systemd[1]: session-27.scope: Deactivated successfully.
Sep 5 06:07:07.330178 systemd-logind[1540]: Session 27 logged out. Waiting for processes to exit.
Sep 5 06:07:07.332238 systemd-logind[1540]: Removed session 27.