Sep 5 06:24:02.780369 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 5 04:19:33 -00 2025
Sep 5 06:24:02.780392 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4b2174e9c368fa97600991ce20efc370fbbf3ddfce3ea407f50212a1021bd496
Sep 5 06:24:02.780401 kernel: BIOS-provided physical RAM map:
Sep 5 06:24:02.780408 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 5 06:24:02.780414 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 5 06:24:02.780421 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 5 06:24:02.780428 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 5 06:24:02.780435 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 5 06:24:02.780444 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 5 06:24:02.780450 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 5 06:24:02.780457 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 06:24:02.780463 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 5 06:24:02.780469 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 06:24:02.780476 kernel: NX (Execute Disable) protection: active
Sep 5 06:24:02.780486 kernel: APIC: Static calls initialized
Sep 5 06:24:02.780493 kernel: SMBIOS 2.8 present.
Sep 5 06:24:02.780500 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 5 06:24:02.780507 kernel: DMI: Memory slots populated: 1/1
Sep 5 06:24:02.780514 kernel: Hypervisor detected: KVM
Sep 5 06:24:02.780521 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 06:24:02.780541 kernel: kvm-clock: using sched offset of 3251472042 cycles
Sep 5 06:24:02.780549 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 06:24:02.780557 kernel: tsc: Detected 2794.748 MHz processor
Sep 5 06:24:02.780564 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 06:24:02.780574 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 06:24:02.780581 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 5 06:24:02.780589 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 5 06:24:02.780596 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 06:24:02.780604 kernel: Using GB pages for direct mapping
Sep 5 06:24:02.780611 kernel: ACPI: Early table checksum verification disabled
Sep 5 06:24:02.780618 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 5 06:24:02.780626 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:24:02.780635 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:24:02.780642 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:24:02.780649 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 5 06:24:02.780657 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:24:02.780664 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:24:02.780671 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:24:02.780679 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 06:24:02.780686 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 5 06:24:02.780698 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 5 06:24:02.780705 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 5 06:24:02.780713 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 5 06:24:02.780720 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 5 06:24:02.780728 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 5 06:24:02.780735 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 5 06:24:02.780744 kernel: No NUMA configuration found
Sep 5 06:24:02.780752 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 5 06:24:02.780759 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 5 06:24:02.780767 kernel: Zone ranges:
Sep 5 06:24:02.780775 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 06:24:02.780782 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 5 06:24:02.780789 kernel: Normal empty
Sep 5 06:24:02.780797 kernel: Device empty
Sep 5 06:24:02.780804 kernel: Movable zone start for each node
Sep 5 06:24:02.780811 kernel: Early memory node ranges
Sep 5 06:24:02.780821 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 5 06:24:02.780829 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 5 06:24:02.780836 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 5 06:24:02.780843 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 06:24:02.780851 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 5 06:24:02.780858 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 5 06:24:02.780866 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 06:24:02.780873 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 06:24:02.780881 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 06:24:02.780890 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 06:24:02.780898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 06:24:02.780905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 06:24:02.780913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 06:24:02.780920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 06:24:02.780928 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 06:24:02.780935 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 06:24:02.780943 kernel: TSC deadline timer available
Sep 5 06:24:02.780950 kernel: CPU topo: Max. logical packages: 1
Sep 5 06:24:02.780959 kernel: CPU topo: Max. logical dies: 1
Sep 5 06:24:02.780967 kernel: CPU topo: Max. dies per package: 1
Sep 5 06:24:02.780974 kernel: CPU topo: Max. threads per core: 1
Sep 5 06:24:02.780981 kernel: CPU topo: Num. cores per package: 4
Sep 5 06:24:02.780989 kernel: CPU topo: Num. threads per package: 4
Sep 5 06:24:02.780996 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 5 06:24:02.781004 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 06:24:02.781011 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 5 06:24:02.781019 kernel: kvm-guest: setup PV sched yield
Sep 5 06:24:02.781026 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 5 06:24:02.781035 kernel: Booting paravirtualized kernel on KVM
Sep 5 06:24:02.781043 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 06:24:02.781050 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 5 06:24:02.781058 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 5 06:24:02.781065 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 5 06:24:02.781073 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 5 06:24:02.781080 kernel: kvm-guest: PV spinlocks enabled
Sep 5 06:24:02.781087 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 06:24:02.781096 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4b2174e9c368fa97600991ce20efc370fbbf3ddfce3ea407f50212a1021bd496
Sep 5 06:24:02.781106 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 06:24:02.781113 kernel: random: crng init done
Sep 5 06:24:02.781121 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 06:24:02.781128 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 06:24:02.781136 kernel: Fallback order for Node 0: 0
Sep 5 06:24:02.781143 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 5 06:24:02.781150 kernel: Policy zone: DMA32
Sep 5 06:24:02.781158 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 06:24:02.781167 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 06:24:02.781175 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 5 06:24:02.781182 kernel: ftrace: allocated 157 pages with 5 groups
Sep 5 06:24:02.781189 kernel: Dynamic Preempt: voluntary
Sep 5 06:24:02.781197 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 06:24:02.781205 kernel: rcu: RCU event tracing is enabled.
Sep 5 06:24:02.781212 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 06:24:02.781220 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 06:24:02.781228 kernel: Rude variant of Tasks RCU enabled.
Sep 5 06:24:02.781237 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 06:24:02.781251 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 06:24:02.781258 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 06:24:02.781266 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 06:24:02.781274 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 06:24:02.781281 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 06:24:02.781289 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 5 06:24:02.781296 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 06:24:02.781311 kernel: Console: colour VGA+ 80x25
Sep 5 06:24:02.781319 kernel: printk: legacy console [ttyS0] enabled
Sep 5 06:24:02.781326 kernel: ACPI: Core revision 20240827
Sep 5 06:24:02.781335 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 5 06:24:02.781345 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 06:24:02.781352 kernel: x2apic enabled
Sep 5 06:24:02.781360 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 06:24:02.781368 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 5 06:24:02.781376 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 5 06:24:02.781385 kernel: kvm-guest: setup PV IPIs
Sep 5 06:24:02.781393 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 5 06:24:02.781401 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 06:24:02.781409 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 5 06:24:02.781426 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 06:24:02.781441 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 5 06:24:02.781457 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 5 06:24:02.781465 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 06:24:02.781473 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 06:24:02.781483 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 06:24:02.781491 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 5 06:24:02.781499 kernel: active return thunk: retbleed_return_thunk
Sep 5 06:24:02.781507 kernel: RETBleed: Mitigation: untrained return thunk
Sep 5 06:24:02.781519 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 06:24:02.781527 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 06:24:02.781545 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 5 06:24:02.781554 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 5 06:24:02.781564 kernel: active return thunk: srso_return_thunk
Sep 5 06:24:02.781572 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 5 06:24:02.781580 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 06:24:02.781587 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 06:24:02.781595 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 06:24:02.781603 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 06:24:02.781611 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 5 06:24:02.781619 kernel: Freeing SMP alternatives memory: 32K
Sep 5 06:24:02.781627 kernel: pid_max: default: 32768 minimum: 301
Sep 5 06:24:02.781636 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 5 06:24:02.781644 kernel: landlock: Up and running.
Sep 5 06:24:02.781651 kernel: SELinux: Initializing.
Sep 5 06:24:02.781659 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 06:24:02.781667 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 06:24:02.781675 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 5 06:24:02.781683 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 5 06:24:02.781691 kernel: ... version: 0
Sep 5 06:24:02.781699 kernel: ... bit width: 48
Sep 5 06:24:02.781709 kernel: ... generic registers: 6
Sep 5 06:24:02.781717 kernel: ... value mask: 0000ffffffffffff
Sep 5 06:24:02.781725 kernel: ... max period: 00007fffffffffff
Sep 5 06:24:02.781732 kernel: ... fixed-purpose events: 0
Sep 5 06:24:02.781740 kernel: ... event mask: 000000000000003f
Sep 5 06:24:02.781748 kernel: signal: max sigframe size: 1776
Sep 5 06:24:02.781756 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 06:24:02.781763 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 06:24:02.781771 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 5 06:24:02.781781 kernel: smp: Bringing up secondary CPUs ...
Sep 5 06:24:02.781789 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 06:24:02.781796 kernel: .... node #0, CPUs: #1 #2 #3
Sep 5 06:24:02.781804 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 06:24:02.781811 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 5 06:24:02.781819 kernel: Memory: 2428920K/2571752K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54068K init, 2900K bss, 136904K reserved, 0K cma-reserved)
Sep 5 06:24:02.781827 kernel: devtmpfs: initialized
Sep 5 06:24:02.781835 kernel: x86/mm: Memory block size: 128MB
Sep 5 06:24:02.781843 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 06:24:02.781852 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 06:24:02.781860 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 06:24:02.781867 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 06:24:02.781875 kernel: audit: initializing netlink subsys (disabled)
Sep 5 06:24:02.781883 kernel: audit: type=2000 audit(1757053440.080:1): state=initialized audit_enabled=0 res=1
Sep 5 06:24:02.781891 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 06:24:02.781898 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 06:24:02.781906 kernel: cpuidle: using governor menu
Sep 5 06:24:02.781913 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 06:24:02.781923 kernel: dca service started, version 1.12.1
Sep 5 06:24:02.781931 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 5 06:24:02.781938 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 5 06:24:02.781946 kernel: PCI: Using configuration type 1 for base access
Sep 5 06:24:02.781954 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 5 06:24:02.781962 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 06:24:02.781969 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 06:24:02.781977 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 06:24:02.781985 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 06:24:02.781994 kernel: ACPI: Added _OSI(Module Device)
Sep 5 06:24:02.782001 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 06:24:02.782009 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 06:24:02.782017 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 06:24:02.782025 kernel: ACPI: Interpreter enabled
Sep 5 06:24:02.782032 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 5 06:24:02.782040 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 06:24:02.782048 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 06:24:02.782055 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 06:24:02.782065 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 06:24:02.782073 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 06:24:02.782259 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 06:24:02.782415 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 5 06:24:02.782544 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 5 06:24:02.782555 kernel: PCI host bridge to bus 0000:00
Sep 5 06:24:02.782678 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 06:24:02.782790 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 06:24:02.782896 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 06:24:02.782999 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 5 06:24:02.783103 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 5 06:24:02.783206 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 5 06:24:02.783318 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 06:24:02.783456 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 5 06:24:02.783608 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 5 06:24:02.783727 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 5 06:24:02.783843 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 5 06:24:02.783957 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 5 06:24:02.784072 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 06:24:02.784197 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 5 06:24:02.784327 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 5 06:24:02.784442 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 5 06:24:02.784575 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 5 06:24:02.784703 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 5 06:24:02.784821 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 5 06:24:02.784937 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 5 06:24:02.785053 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 5 06:24:02.785183 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 5 06:24:02.785308 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 5 06:24:02.785423 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 5 06:24:02.785554 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 5 06:24:02.785672 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 5 06:24:02.785797 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 5 06:24:02.785913 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 06:24:02.786041 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 5 06:24:02.786161 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 5 06:24:02.786291 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 5 06:24:02.786417 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 5 06:24:02.786557 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 5 06:24:02.786570 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 06:24:02.786582 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 06:24:02.786589 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 06:24:02.786597 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 06:24:02.786605 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 06:24:02.786613 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 06:24:02.786621 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 06:24:02.786628 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 06:24:02.786636 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 06:24:02.786644 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 06:24:02.786654 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 06:24:02.786662 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 06:24:02.786669 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 06:24:02.786677 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 06:24:02.786685 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 06:24:02.786693 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 06:24:02.786701 kernel: iommu: Default domain type: Translated
Sep 5 06:24:02.786708 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 06:24:02.786716 kernel: PCI: Using ACPI for IRQ routing
Sep 5 06:24:02.786726 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 06:24:02.786734 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 5 06:24:02.786742 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 5 06:24:02.786859 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 06:24:02.786977 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 06:24:02.787090 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 06:24:02.787100 kernel: vgaarb: loaded
Sep 5 06:24:02.787108 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 5 06:24:02.787116 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 5 06:24:02.787127 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 06:24:02.787135 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 06:24:02.787143 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 06:24:02.787151 kernel: pnp: PnP ACPI init
Sep 5 06:24:02.787284 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 5 06:24:02.787296 kernel: pnp: PnP ACPI: found 6 devices
Sep 5 06:24:02.787304 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 06:24:02.787312 kernel: NET: Registered PF_INET protocol family
Sep 5 06:24:02.787322 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 06:24:02.787330 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 06:24:02.787338 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 06:24:02.787345 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 06:24:02.787353 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 06:24:02.787361 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 06:24:02.787369 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 06:24:02.787376 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 06:24:02.787386 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 06:24:02.787394 kernel: NET: Registered PF_XDP protocol family
Sep 5 06:24:02.787502 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 06:24:02.787634 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 06:24:02.787741 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 06:24:02.787849 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 5 06:24:02.787954 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 5 06:24:02.788059 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 5 06:24:02.788069 kernel: PCI: CLS 0 bytes, default 64
Sep 5 06:24:02.788081 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 06:24:02.788089 kernel: Initialise system trusted keyrings
Sep 5 06:24:02.788097 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 06:24:02.788105 kernel: Key type asymmetric registered
Sep 5 06:24:02.788113 kernel: Asymmetric key parser 'x509' registered
Sep 5 06:24:02.788121 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 06:24:02.788129 kernel: io scheduler mq-deadline registered
Sep 5 06:24:02.788137 kernel: io scheduler kyber registered
Sep 5 06:24:02.788145 kernel: io scheduler bfq registered
Sep 5 06:24:02.788155 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 06:24:02.788163 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 5 06:24:02.788171 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 5 06:24:02.788179 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 5 06:24:02.788187 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 06:24:02.788195 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 06:24:02.788203 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 5 06:24:02.788211 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 5 06:24:02.788219 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 5 06:24:02.788229 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 5 06:24:02.788375 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 5 06:24:02.788487 kernel: rtc_cmos 00:04: registered as rtc0
Sep 5 06:24:02.788614 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T06:24:02 UTC (1757053442)
Sep 5 06:24:02.788724 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 5 06:24:02.788734 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 5 06:24:02.788742 kernel: NET: Registered PF_INET6 protocol family
Sep 5 06:24:02.788749 kernel: Segment Routing with IPv6
Sep 5 06:24:02.788761 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 06:24:02.788768 kernel: NET: Registered PF_PACKET protocol family
Sep 5 06:24:02.788776 kernel: Key type dns_resolver registered
Sep 5 06:24:02.788784 kernel: IPI shorthand broadcast: enabled
Sep 5 06:24:02.788792 kernel: sched_clock: Marking stable (2666002958, 107933367)->(2791367848, -17431523)
Sep 5 06:24:02.788799 kernel: registered taskstats version 1
Sep 5 06:24:02.788807 kernel: Loading compiled-in X.509 certificates
Sep 5 06:24:02.788816 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 0a288d3740f799f7923bd7314e999f997bd1026c'
Sep 5 06:24:02.788823 kernel: Demotion targets for Node 0: null
Sep 5 06:24:02.788833 kernel: Key type .fscrypt registered
Sep 5 06:24:02.788840 kernel: Key type fscrypt-provisioning registered
Sep 5 06:24:02.788848 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 06:24:02.788855 kernel: ima: Allocated hash algorithm: sha1
Sep 5 06:24:02.788863 kernel: ima: No architecture policies found
Sep 5 06:24:02.788871 kernel: clk: Disabling unused clocks
Sep 5 06:24:02.788878 kernel: Warning: unable to open an initial console.
Sep 5 06:24:02.788886 kernel: Freeing unused kernel image (initmem) memory: 54068K
Sep 5 06:24:02.788896 kernel: Write protecting the kernel read-only data: 24576k
Sep 5 06:24:02.788903 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 5 06:24:02.788911 kernel: Run /init as init process
Sep 5 06:24:02.788919 kernel: with arguments:
Sep 5 06:24:02.788927 kernel: /init
Sep 5 06:24:02.788934 kernel: with environment:
Sep 5 06:24:02.788942 kernel: HOME=/
Sep 5 06:24:02.788949 kernel: TERM=linux
Sep 5 06:24:02.788957 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 06:24:02.788965 systemd[1]: Successfully made /usr/ read-only.
Sep 5 06:24:02.788986 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 06:24:02.788997 systemd[1]: Detected virtualization kvm.
Sep 5 06:24:02.789005 systemd[1]: Detected architecture x86-64.
Sep 5 06:24:02.789014 systemd[1]: Running in initrd.
Sep 5 06:24:02.789022 systemd[1]: No hostname configured, using default hostname.
Sep 5 06:24:02.789032 systemd[1]: Hostname set to .
Sep 5 06:24:02.789041 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 06:24:02.789049 systemd[1]: Queued start job for default target initrd.target.
Sep 5 06:24:02.789057 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 06:24:02.789066 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 06:24:02.789075 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 06:24:02.789084 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 06:24:02.789092 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 06:24:02.789104 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 06:24:02.789194 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 06:24:02.789204 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 06:24:02.789213 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 06:24:02.789222 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 06:24:02.789230 systemd[1]: Reached target paths.target - Path Units.
Sep 5 06:24:02.789250 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 06:24:02.789260 systemd[1]: Reached target swap.target - Swaps.
Sep 5 06:24:02.789269 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 06:24:02.789278 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 06:24:02.789286 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 06:24:02.789295 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 06:24:02.789304 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 5 06:24:02.789312 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 06:24:02.789321 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 06:24:02.789332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 06:24:02.789341 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 06:24:02.789349 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 06:24:02.789358 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 06:24:02.789369 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 06:24:02.789380 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 5 06:24:02.789388 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 06:24:02.789397 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 06:24:02.789406 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 06:24:02.789414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 06:24:02.789423 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 06:24:02.789435 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 06:24:02.789443 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 06:24:02.789452 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 06:24:02.789481 systemd-journald[220]: Collecting audit messages is disabled.
Sep 5 06:24:02.789505 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 06:24:02.789516 systemd-journald[220]: Journal started
Sep 5 06:24:02.789548 systemd-journald[220]: Runtime Journal (/run/log/journal/128d2d7c5d24481a8bedeea900419bdb) is 6M, max 48.6M, 42.5M free.
Sep 5 06:24:02.780358 systemd-modules-load[222]: Inserted module 'overlay'
Sep 5 06:24:02.792560 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 06:24:02.793866 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 06:24:02.805560 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 06:24:02.807004 systemd-modules-load[222]: Inserted module 'br_netfilter'
Sep 5 06:24:02.829206 kernel: Bridge firewalling registered
Sep 5 06:24:02.829088 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 06:24:02.829526 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:24:02.833644 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 06:24:02.843770 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 06:24:02.846474 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 06:24:02.853732 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 06:24:02.855050 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 06:24:02.857416 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 06:24:02.860966 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 06:24:02.861979 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 5 06:24:02.876705 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 06:24:02.878599 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 06:24:02.891918 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4b2174e9c368fa97600991ce20efc370fbbf3ddfce3ea407f50212a1021bd496
Sep 5 06:24:02.925900 systemd-resolved[262]: Positive Trust Anchors:
Sep 5 06:24:02.925914 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 06:24:02.925943 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 06:24:02.928355 systemd-resolved[262]: Defaulting to hostname 'linux'.
Sep 5 06:24:02.934122 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 06:24:02.934774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 06:24:02.999569 kernel: SCSI subsystem initialized
Sep 5 06:24:03.008559 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 06:24:03.018559 kernel: iscsi: registered transport (tcp)
Sep 5 06:24:03.039568 kernel: iscsi: registered transport (qla4xxx)
Sep 5 06:24:03.039588 kernel: QLogic iSCSI HBA Driver
Sep 5 06:24:03.059367 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 06:24:03.075778 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 06:24:03.077039 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 06:24:03.131173 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 06:24:03.133616 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 06:24:03.194556 kernel: raid6: avx2x4 gen() 30186 MB/s
Sep 5 06:24:03.211554 kernel: raid6: avx2x2 gen() 31369 MB/s
Sep 5 06:24:03.228576 kernel: raid6: avx2x1 gen() 26047 MB/s
Sep 5 06:24:03.228592 kernel: raid6: using algorithm avx2x2 gen() 31369 MB/s
Sep 5 06:24:03.246591 kernel: raid6: .... xor() 20009 MB/s, rmw enabled
Sep 5 06:24:03.246614 kernel: raid6: using avx2x2 recovery algorithm
Sep 5 06:24:03.266556 kernel: xor: automatically using best checksumming function   avx
Sep 5 06:24:03.426560 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 06:24:03.434565 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 06:24:03.438336 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 06:24:03.471091 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 5 06:24:03.476351 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 06:24:03.477525 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 06:24:03.496898 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Sep 5 06:24:03.525952 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 06:24:03.529351 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 06:24:03.606864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 06:24:03.608747 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 06:24:03.643557 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 5 06:24:03.650199 kernel: cryptd: max_cpu_qlen set to 1000
Sep 5 06:24:03.650248 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 06:24:03.656553 kernel: AES CTR mode by8 optimization enabled
Sep 5 06:24:03.660280 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 5 06:24:03.660302 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 06:24:03.660319 kernel: GPT:9289727 != 19775487
Sep 5 06:24:03.660674 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 06:24:03.662831 kernel: GPT:9289727 != 19775487
Sep 5 06:24:03.662865 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 06:24:03.662877 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:24:03.683909 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 06:24:03.684365 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:24:03.687724 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 06:24:03.692263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 06:24:03.696847 kernel: libata version 3.00 loaded.
Sep 5 06:24:03.720198 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 06:24:03.763412 kernel: ahci 0000:00:1f.2: version 3.0
Sep 5 06:24:03.763617 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 5 06:24:03.763631 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 5 06:24:03.764842 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 5 06:24:03.764991 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 5 06:24:03.765123 kernel: scsi host0: ahci
Sep 5 06:24:03.765280 kernel: scsi host1: ahci
Sep 5 06:24:03.765421 kernel: scsi host2: ahci
Sep 5 06:24:03.765575 kernel: scsi host3: ahci
Sep 5 06:24:03.765716 kernel: scsi host4: ahci
Sep 5 06:24:03.765861 kernel: scsi host5: ahci
Sep 5 06:24:03.766006 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 5 06:24:03.766018 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 5 06:24:03.766029 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 5 06:24:03.766040 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 5 06:24:03.766050 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 5 06:24:03.766061 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 5 06:24:03.763696 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:24:03.784942 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 06:24:03.786165 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 06:24:03.796979 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 06:24:03.806812 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 06:24:03.808005 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 06:24:03.836949 disk-uuid[631]: Primary Header is updated.
Sep 5 06:24:03.836949 disk-uuid[631]: Secondary Entries is updated.
Sep 5 06:24:03.836949 disk-uuid[631]: Secondary Header is updated.
Sep 5 06:24:03.840098 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:24:04.036562 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 5 06:24:04.036616 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 5 06:24:04.036628 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 5 06:24:04.037557 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 5 06:24:04.038554 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 5 06:24:04.039559 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 5 06:24:04.039573 kernel: ata3.00: LPM support broken, forcing max_power
Sep 5 06:24:04.040696 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 5 06:24:04.040708 kernel: ata3.00: applying bridge limits
Sep 5 06:24:04.041823 kernel: ata3.00: LPM support broken, forcing max_power
Sep 5 06:24:04.041838 kernel: ata3.00: configured for UDMA/100
Sep 5 06:24:04.043554 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 5 06:24:04.090033 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 5 06:24:04.090247 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 5 06:24:04.102557 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 5 06:24:04.454989 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 06:24:04.456576 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 06:24:04.458382 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 06:24:04.459543 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 06:24:04.462414 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 06:24:04.490125 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 06:24:04.847875 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:24:04.847936 disk-uuid[632]: The operation has completed successfully.
Sep 5 06:24:04.872337 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 06:24:04.872454 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 06:24:04.910851 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 06:24:04.934459 sh[661]: Success
Sep 5 06:24:04.952308 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 06:24:04.952355 kernel: device-mapper: uevent: version 1.0.3
Sep 5 06:24:04.952368 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 5 06:24:04.960553 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 5 06:24:04.987726 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 06:24:04.991406 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 06:24:05.012487 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 06:24:05.018561 kernel: BTRFS: device fsid 98069635-e988-4e04-b156-f40a4a69cf42 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (673)
Sep 5 06:24:05.018595 kernel: BTRFS info (device dm-0): first mount of filesystem 98069635-e988-4e04-b156-f40a4a69cf42
Sep 5 06:24:05.020341 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 5 06:24:05.025010 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 06:24:05.025029 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 5 06:24:05.026125 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 06:24:05.026772 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 06:24:05.028144 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 06:24:05.028864 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 06:24:05.033277 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 06:24:05.056572 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (705)
Sep 5 06:24:05.056607 kernel: BTRFS info (device vda6): first mount of filesystem b74bbc0c-6da1-4206-9f48-c70f629ccdff
Sep 5 06:24:05.058547 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 06:24:05.061589 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 06:24:05.061612 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 06:24:05.065557 kernel: BTRFS info (device vda6): last unmount of filesystem b74bbc0c-6da1-4206-9f48-c70f629ccdff
Sep 5 06:24:05.066321 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 06:24:05.067985 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 06:24:05.150014 ignition[747]: Ignition 2.22.0
Sep 5 06:24:05.150026 ignition[747]: Stage: fetch-offline
Sep 5 06:24:05.150057 ignition[747]: no configs at "/usr/lib/ignition/base.d"
Sep 5 06:24:05.150066 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:24:05.150142 ignition[747]: parsed url from cmdline: ""
Sep 5 06:24:05.150146 ignition[747]: no config URL provided
Sep 5 06:24:05.150151 ignition[747]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 06:24:05.150159 ignition[747]: no config at "/usr/lib/ignition/user.ign"
Sep 5 06:24:05.150186 ignition[747]: op(1): [started] loading QEMU firmware config module
Sep 5 06:24:05.150192 ignition[747]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 06:24:05.158256 ignition[747]: op(1): [finished] loading QEMU firmware config module
Sep 5 06:24:05.170567 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 06:24:05.173703 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 06:24:05.200881 ignition[747]: parsing config with SHA512: dfabe7d317f7f04c6cb5fdaad84e1748c107357a7029d2621cc4dc154cd918c293b71afdc59b702a36657530a9bf475e0ec0e773c2e60e6e6ab82e52f353c1d6
Sep 5 06:24:05.208013 unknown[747]: fetched base config from "system"
Sep 5 06:24:05.208028 unknown[747]: fetched user config from "qemu"
Sep 5 06:24:05.208382 ignition[747]: fetch-offline: fetch-offline passed
Sep 5 06:24:05.208434 ignition[747]: Ignition finished successfully
Sep 5 06:24:05.211780 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 06:24:05.218727 systemd-networkd[851]: lo: Link UP
Sep 5 06:24:05.218737 systemd-networkd[851]: lo: Gained carrier
Sep 5 06:24:05.220221 systemd-networkd[851]: Enumeration completed
Sep 5 06:24:05.220572 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 06:24:05.220576 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 06:24:05.221274 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 06:24:05.221985 systemd[1]: Reached target network.target - Network.
Sep 5 06:24:05.222052 systemd-networkd[851]: eth0: Link UP
Sep 5 06:24:05.222255 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 06:24:05.222258 systemd-networkd[851]: eth0: Gained carrier
Sep 5 06:24:05.222266 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 06:24:05.223029 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 06:24:05.245576 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.4/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 06:24:05.257569 ignition[855]: Ignition 2.22.0
Sep 5 06:24:05.257580 ignition[855]: Stage: kargs
Sep 5 06:24:05.257703 ignition[855]: no configs at "/usr/lib/ignition/base.d"
Sep 5 06:24:05.257713 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:24:05.260431 ignition[855]: kargs: kargs passed
Sep 5 06:24:05.260521 ignition[855]: Ignition finished successfully
Sep 5 06:24:05.266407 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 06:24:05.267939 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 06:24:05.312480 ignition[864]: Ignition 2.22.0
Sep 5 06:24:05.312492 ignition[864]: Stage: disks
Sep 5 06:24:05.312640 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Sep 5 06:24:05.312651 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:24:05.313427 ignition[864]: disks: disks passed
Sep 5 06:24:05.313472 ignition[864]: Ignition finished successfully
Sep 5 06:24:05.319571 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 06:24:05.320780 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 06:24:05.321256 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 06:24:05.321738 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 06:24:05.322053 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 06:24:05.322374 systemd[1]: Reached target basic.target - Basic System.
Sep 5 06:24:05.323807 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 06:24:05.350898 systemd-fsck[873]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 5 06:24:05.358310 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 06:24:05.359609 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 06:24:05.458567 kernel: EXT4-fs (vda9): mounted filesystem 5e58259f-916a-43e8-ae75-b44bea97e14e r/w with ordered data mode. Quota mode: none.
Sep 5 06:24:05.458725 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 06:24:05.459453 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 06:24:05.461804 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 06:24:05.463672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 06:24:05.465183 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 06:24:05.465222 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 06:24:05.465243 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 06:24:05.474623 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 06:24:05.476272 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 06:24:05.480556 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (882)
Sep 5 06:24:05.480584 kernel: BTRFS info (device vda6): first mount of filesystem b74bbc0c-6da1-4206-9f48-c70f629ccdff
Sep 5 06:24:05.482490 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 06:24:05.485551 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 06:24:05.485580 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 06:24:05.487888 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 06:24:05.513458 initrd-setup-root[906]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 06:24:05.517415 initrd-setup-root[913]: cut: /sysroot/etc/group: No such file or directory
Sep 5 06:24:05.522116 initrd-setup-root[920]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 06:24:05.526632 initrd-setup-root[927]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 06:24:05.611743 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 06:24:05.613899 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 06:24:05.614967 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 06:24:05.634597 kernel: BTRFS info (device vda6): last unmount of filesystem b74bbc0c-6da1-4206-9f48-c70f629ccdff
Sep 5 06:24:05.646690 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 06:24:05.663341 ignition[996]: INFO : Ignition 2.22.0
Sep 5 06:24:05.663341 ignition[996]: INFO : Stage: mount
Sep 5 06:24:05.664898 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 06:24:05.664898 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:24:05.667373 ignition[996]: INFO : mount: mount passed
Sep 5 06:24:05.668109 ignition[996]: INFO : Ignition finished successfully
Sep 5 06:24:05.671427 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 06:24:05.672730 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 06:24:06.018581 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 06:24:06.020089 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 06:24:06.049294 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009)
Sep 5 06:24:06.049317 kernel: BTRFS info (device vda6): first mount of filesystem b74bbc0c-6da1-4206-9f48-c70f629ccdff
Sep 5 06:24:06.049329 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 06:24:06.052955 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 06:24:06.052971 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 06:24:06.054480 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 06:24:06.088355 ignition[1026]: INFO : Ignition 2.22.0
Sep 5 06:24:06.088355 ignition[1026]: INFO : Stage: files
Sep 5 06:24:06.089941 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 06:24:06.089941 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:24:06.089941 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 06:24:06.093450 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 06:24:06.093450 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 06:24:06.097781 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 06:24:06.099261 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 06:24:06.099261 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 06:24:06.098480 unknown[1026]: wrote ssh authorized keys file for user: core
Sep 5 06:24:06.103021 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 5 06:24:06.103021 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 5 06:24:06.139609 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 06:24:06.654438 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 5 06:24:06.656419 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 06:24:06.656419 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 06:24:06.656419 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 06:24:06.656419 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 06:24:06.656419 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 06:24:06.656419 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 06:24:06.656419 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 06:24:06.656419 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 06:24:06.670136 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 06:24:06.670136 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 06:24:06.670136 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 5 06:24:06.670136 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 5 06:24:06.670136 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 5 06:24:06.670136 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 5 06:24:06.941655 systemd-networkd[851]: eth0: Gained IPv6LL
Sep 5 06:24:07.094274 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 06:24:07.514626 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 5 06:24:07.514626 ignition[1026]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 06:24:07.518272 ignition[1026]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 06:24:07.523924 ignition[1026]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 06:24:07.523924 ignition[1026]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 06:24:07.523924 ignition[1026]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 5 06:24:07.528112 ignition[1026]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 06:24:07.529971 ignition[1026]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 06:24:07.529971 ignition[1026]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 5 06:24:07.529971 ignition[1026]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 06:24:07.548917 ignition[1026]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 06:24:07.554950 ignition[1026]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 06:24:07.556573 ignition[1026]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 06:24:07.557951 ignition[1026]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 06:24:07.557951 ignition[1026]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 06:24:07.557951 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 06:24:07.557951 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 06:24:07.557951 ignition[1026]: INFO : files: files passed
Sep 5 06:24:07.557951 ignition[1026]: INFO : Ignition finished successfully
Sep 5 06:24:07.568593 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 06:24:07.571010 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 06:24:07.573167 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 06:24:07.588942 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 06:24:07.589073 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 06:24:07.592184 initrd-setup-root-after-ignition[1054]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 06:24:07.595909 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 06:24:07.595909 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 06:24:07.598979 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 06:24:07.601841 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 06:24:07.602452 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 06:24:07.606263 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 06:24:07.643646 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 06:24:07.643774 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 06:24:07.644318 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 06:24:07.648716 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 06:24:07.648998 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 06:24:07.650751 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 06:24:07.680657 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 06:24:07.682158 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 06:24:07.703431 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 06:24:07.703847 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 06:24:07.707086 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 06:24:07.707478 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 06:24:07.707600 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 06:24:07.708283 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 06:24:07.708616 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 06:24:07.709075 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 06:24:07.709401 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 06:24:07.709884 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 06:24:07.710204 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 06:24:07.710523 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 06:24:07.711006 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 06:24:07.711337 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 06:24:07.711809 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 06:24:07.712126 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 06:24:07.712417 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 06:24:07.712524 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 06:24:07.733687 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 06:24:07.735619 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 06:24:07.735975 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 06:24:07.736076 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 06:24:07.736319 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 06:24:07.736422 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 06:24:07.741392 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 06:24:07.741520 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 06:24:07.744948 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 06:24:07.745184 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 06:24:07.745319 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 06:24:07.748829 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 06:24:07.749146 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 06:24:07.749463 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 06:24:07.749558 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 06:24:07.749998 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 06:24:07.750075 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 06:24:07.755958 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 06:24:07.756067 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 06:24:07.757877 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 06:24:07.757975 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 06:24:07.760298 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 06:24:07.761275 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 06:24:07.761382 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 06:24:07.762457 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 06:24:07.765081 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 06:24:07.765239 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 06:24:07.766013 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 06:24:07.766156 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 06:24:07.770090 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 06:24:07.784705 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 06:24:07.805559 ignition[1081]: INFO : Ignition 2.22.0
Sep 5 06:24:07.805559 ignition[1081]: INFO : Stage: umount
Sep 5 06:24:07.805559 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 06:24:07.805559 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:24:07.809645 ignition[1081]: INFO : umount: umount passed
Sep 5 06:24:07.809645 ignition[1081]: INFO : Ignition finished successfully
Sep 5 06:24:07.805663 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 06:24:07.808917 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 06:24:07.809044 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 06:24:07.810625 systemd[1]: Stopped target network.target - Network.
Sep 5 06:24:07.812078 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 06:24:07.812157 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 06:24:07.812405 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 06:24:07.812453 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 06:24:07.812864 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 06:24:07.812919 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 06:24:07.813186 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 06:24:07.813229 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 06:24:07.813679 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 06:24:07.814063 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 06:24:07.825847 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 06:24:07.825975 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 06:24:07.830884 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 5 06:24:07.831555 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 06:24:07.831750 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 06:24:07.835974 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 5 06:24:07.836245 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 06:24:07.836357 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 06:24:07.840248 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 5 06:24:07.840780 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 5 06:24:07.841290 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 06:24:07.841328 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 06:24:07.845763 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 06:24:07.846172 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 06:24:07.846221 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 06:24:07.846543 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 06:24:07.846584 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 06:24:07.852012 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 06:24:07.852056 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 06:24:07.852548 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 06:24:07.853820 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 5 06:24:07.875365 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 06:24:07.880745 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 06:24:07.881341 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 06:24:07.881389 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 06:24:07.883372 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 06:24:07.883406 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 06:24:07.885483 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 06:24:07.885545 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 06:24:07.886303 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 06:24:07.886349 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 06:24:07.887076 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 06:24:07.887126 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 06:24:07.888610 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 06:24:07.899006 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 5 06:24:07.900078 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 06:24:07.902790 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 06:24:07.902858 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 06:24:07.906127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 06:24:07.906179 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:24:07.909820 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 06:24:07.909934 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 06:24:07.910469 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 06:24:07.910582 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 06:24:08.011470 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 06:24:08.011620 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 06:24:08.012448 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 06:24:08.014479 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 06:24:08.014556 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 06:24:08.015754 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 06:24:08.046115 systemd[1]: Switching root.
Sep 5 06:24:08.092114 systemd-journald[220]: Journal stopped
Sep 5 06:24:09.351583 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 5 06:24:09.351649 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 06:24:09.351673 kernel: SELinux: policy capability open_perms=1
Sep 5 06:24:09.351684 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 06:24:09.351695 kernel: SELinux: policy capability always_check_network=0
Sep 5 06:24:09.351713 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 06:24:09.351725 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 06:24:09.351736 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 06:24:09.351748 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 06:24:09.351759 kernel: SELinux: policy capability userspace_initial_context=0
Sep 5 06:24:09.351771 kernel: audit: type=1403 audit(1757053448.594:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 06:24:09.351792 systemd[1]: Successfully loaded SELinux policy in 66.428ms.
Sep 5 06:24:09.351818 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.083ms.
Sep 5 06:24:09.351831 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 06:24:09.351844 systemd[1]: Detected virtualization kvm.
Sep 5 06:24:09.351857 systemd[1]: Detected architecture x86-64.
Sep 5 06:24:09.351869 systemd[1]: Detected first boot.
Sep 5 06:24:09.351881 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 06:24:09.351894 zram_generator::config[1128]: No configuration found.
Sep 5 06:24:09.351916 kernel: Guest personality initialized and is inactive
Sep 5 06:24:09.351928 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 5 06:24:09.351939 kernel: Initialized host personality
Sep 5 06:24:09.351950 kernel: NET: Registered PF_VSOCK protocol family
Sep 5 06:24:09.351962 systemd[1]: Populated /etc with preset unit settings.
Sep 5 06:24:09.351975 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 5 06:24:09.351988 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 06:24:09.352000 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 06:24:09.352013 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 06:24:09.352028 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 06:24:09.352040 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 06:24:09.352052 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 06:24:09.352064 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 06:24:09.352084 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 06:24:09.352098 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 06:24:09.352111 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 06:24:09.352123 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 06:24:09.352135 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 06:24:09.352149 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 06:24:09.352162 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 06:24:09.352174 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 06:24:09.352186 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 06:24:09.352199 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 06:24:09.352212 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 5 06:24:09.352224 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 06:24:09.352238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 06:24:09.352253 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 06:24:09.352266 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 06:24:09.352278 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 06:24:09.352292 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 06:24:09.352305 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 06:24:09.352318 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 06:24:09.352330 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 06:24:09.352343 systemd[1]: Reached target swap.target - Swaps.
Sep 5 06:24:09.352355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 06:24:09.352370 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 06:24:09.352382 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 5 06:24:09.352395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 06:24:09.352407 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 06:24:09.352418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 06:24:09.352431 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 06:24:09.352461 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 06:24:09.352474 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 06:24:09.352487 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 06:24:09.352501 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 06:24:09.352513 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 06:24:09.352525 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 06:24:09.352560 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 06:24:09.352573 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 06:24:09.352586 systemd[1]: Reached target machines.target - Containers.
Sep 5 06:24:09.352599 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 06:24:09.352611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 06:24:09.352626 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 06:24:09.352638 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 06:24:09.352650 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 06:24:09.352662 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 06:24:09.352674 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 06:24:09.352686 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 06:24:09.352698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 06:24:09.352711 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 06:24:09.352726 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 06:24:09.352738 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 06:24:09.352750 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 06:24:09.352761 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 06:24:09.352775 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 06:24:09.352787 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 06:24:09.352798 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 06:24:09.352811 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 06:24:09.352823 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 06:24:09.352837 kernel: loop: module loaded
Sep 5 06:24:09.352849 kernel: fuse: init (API version 7.41)
Sep 5 06:24:09.352861 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 5 06:24:09.352873 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 06:24:09.352886 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 06:24:09.352900 systemd[1]: Stopped verity-setup.service.
Sep 5 06:24:09.352913 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 06:24:09.352925 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 06:24:09.352937 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 06:24:09.352949 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 06:24:09.352961 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 06:24:09.352973 kernel: ACPI: bus type drm_connector registered
Sep 5 06:24:09.352984 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 06:24:09.352996 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 06:24:09.353017 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 06:24:09.353031 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 06:24:09.353043 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 06:24:09.353056 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 06:24:09.353075 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 06:24:09.353090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 06:24:09.353102 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 06:24:09.353114 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 06:24:09.353145 systemd-journald[1199]: Collecting audit messages is disabled.
Sep 5 06:24:09.353169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 06:24:09.353181 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 06:24:09.353193 systemd-journald[1199]: Journal started
Sep 5 06:24:09.353218 systemd-journald[1199]: Runtime Journal (/run/log/journal/128d2d7c5d24481a8bedeea900419bdb) is 6M, max 48.6M, 42.5M free.
Sep 5 06:24:09.108303 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 06:24:09.127440 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 5 06:24:09.127898 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 06:24:09.355654 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 06:24:09.357256 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 06:24:09.357476 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 06:24:09.358805 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 06:24:09.359012 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 06:24:09.360552 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 06:24:09.362009 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 06:24:09.363555 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 06:24:09.365079 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 5 06:24:09.378802 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 06:24:09.381223 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 06:24:09.383307 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 06:24:09.384402 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 06:24:09.384481 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 06:24:09.386348 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 5 06:24:09.400644 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 06:24:09.401978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 06:24:09.403608 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 06:24:09.408037 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 06:24:09.409695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 06:24:09.411656 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 06:24:09.412113 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 06:24:09.413707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 06:24:09.417634 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 06:24:09.424210 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 06:24:09.428238 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 06:24:09.438637 systemd-journald[1199]: Time spent on flushing to /var/log/journal/128d2d7c5d24481a8bedeea900419bdb is 13.090ms for 979 entries.
Sep 5 06:24:09.438637 systemd-journald[1199]: System Journal (/var/log/journal/128d2d7c5d24481a8bedeea900419bdb) is 8M, max 195.6M, 187.6M free.
Sep 5 06:24:09.464844 systemd-journald[1199]: Received client request to flush runtime journal.
Sep 5 06:24:09.464889 kernel: loop0: detected capacity change from 0 to 224512
Sep 5 06:24:09.432284 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 06:24:09.433670 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 06:24:09.447984 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 06:24:09.452304 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 06:24:09.456792 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 5 06:24:09.460049 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 06:24:09.467369 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 06:24:09.480547 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 06:24:09.485014 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 06:24:09.491658 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 06:24:09.493177 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 5 06:24:09.505573 kernel: loop1: detected capacity change from 0 to 128016
Sep 5 06:24:09.527220 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Sep 5 06:24:09.527237 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Sep 5 06:24:09.531802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 06:24:09.539562 kernel: loop2: detected capacity change from 0 to 111000
Sep 5 06:24:09.562562 kernel: loop3: detected capacity change from 0 to 224512
Sep 5 06:24:09.570563 kernel: loop4: detected capacity change from 0 to 128016
Sep 5 06:24:09.582561 kernel: loop5: detected capacity change from 0 to 111000
Sep 5 06:24:09.591055 (sd-merge)[1269]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 5 06:24:09.591968 (sd-merge)[1269]: Merged extensions into '/usr'.
Sep 5 06:24:09.598302 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 06:24:09.598319 systemd[1]: Reloading...
Sep 5 06:24:09.669660 zram_generator::config[1302]: No configuration found.
Sep 5 06:24:09.725602 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 06:24:09.839127 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 06:24:09.839660 systemd[1]: Reloading finished in 240 ms.
Sep 5 06:24:09.869150 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 06:24:09.870673 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 06:24:09.890853 systemd[1]: Starting ensure-sysext.service...
Sep 5 06:24:09.893165 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 06:24:09.925855 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 5 06:24:09.925894 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 5 06:24:09.926233 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 06:24:09.926495 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 06:24:09.927389 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 06:24:09.927669 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Sep 5 06:24:09.927741 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Sep 5 06:24:09.931863 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 06:24:09.931875 systemd-tmpfiles[1334]: Skipping /boot
Sep 5 06:24:09.933736 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)...
Sep 5 06:24:09.933752 systemd[1]: Reloading...
Sep 5 06:24:09.941462 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 06:24:09.941473 systemd-tmpfiles[1334]: Skipping /boot
Sep 5 06:24:09.980556 zram_generator::config[1361]: No configuration found.
Sep 5 06:24:10.149577 systemd[1]: Reloading finished in 215 ms.
Sep 5 06:24:10.172911 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 06:24:10.192293 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 06:24:10.200698 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 06:24:10.203118 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 5 06:24:10.222072 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 5 06:24:10.225812 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 06:24:10.228414 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 06:24:10.231653 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 5 06:24:10.236285 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 06:24:10.236672 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 06:24:10.242785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 06:24:10.246693 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 06:24:10.248976 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 06:24:10.250357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 06:24:10.250454 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 06:24:10.255883 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 06:24:10.256956 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 06:24:10.261026 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 06:24:10.262891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 06:24:10.263119 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 06:24:10.264863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 06:24:10.265139 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 06:24:10.266913 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 06:24:10.270836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 06:24:10.278999 systemd-udevd[1407]: Using default interface naming scheme 'v255'. 
Sep 5 06:24:10.282157 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 06:24:10.282368 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 06:24:10.284737 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 06:24:10.287338 augenrules[1430]: No rules Sep 5 06:24:10.288785 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 06:24:10.294949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 06:24:10.296673 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 06:24:10.296810 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 06:24:10.298235 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 06:24:10.299276 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 06:24:10.300708 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 06:24:10.302585 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 06:24:10.302854 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 06:24:10.305643 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 06:24:10.307470 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 06:24:10.309424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 5 06:24:10.309702 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 06:24:10.313122 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 06:24:10.313333 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 06:24:10.315110 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 06:24:10.315316 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 06:24:10.319197 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 06:24:10.348721 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 06:24:10.352014 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 06:24:10.359979 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 06:24:10.361021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 06:24:10.362526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 06:24:10.365748 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 06:24:10.370157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 06:24:10.377703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 06:24:10.378989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 06:24:10.379021 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 06:24:10.381106 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 5 06:24:10.383595 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 06:24:10.383621 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 06:24:10.384487 systemd[1]: Finished ensure-sysext.service. Sep 5 06:24:10.385664 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 06:24:10.385866 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 06:24:10.387220 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 06:24:10.387420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 06:24:10.393357 augenrules[1482]: /sbin/augenrules: No change Sep 5 06:24:10.394561 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 5 06:24:10.396973 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 06:24:10.398008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 06:24:10.401717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 06:24:10.401931 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 06:24:10.407050 augenrules[1516]: No rules Sep 5 06:24:10.409106 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 06:24:10.409385 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 06:24:10.411642 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 06:24:10.411729 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 5 06:24:10.431285 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 06:24:10.432400 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 06:24:10.451553 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 5 06:24:10.457895 kernel: ACPI: button: Power Button [PWRF] Sep 5 06:24:10.466130 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 06:24:10.471066 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 06:24:10.489427 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 06:24:10.507748 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 5 06:24:10.508049 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 5 06:24:10.526568 systemd-networkd[1499]: lo: Link UP Sep 5 06:24:10.526576 systemd-networkd[1499]: lo: Gained carrier Sep 5 06:24:10.528181 systemd-networkd[1499]: Enumeration completed Sep 5 06:24:10.528278 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 06:24:10.529224 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 06:24:10.529292 systemd-networkd[1499]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 06:24:10.530001 systemd-networkd[1499]: eth0: Link UP Sep 5 06:24:10.530206 systemd-networkd[1499]: eth0: Gained carrier Sep 5 06:24:10.530477 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 06:24:10.531801 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Sep 5 06:24:10.534252 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 06:24:10.543646 systemd-networkd[1499]: eth0: DHCPv4 address 10.0.0.4/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 06:24:10.567151 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 5 06:24:10.568216 systemd-resolved[1403]: Positive Trust Anchors: Sep 5 06:24:10.568231 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 06:24:10.568262 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 06:24:10.578603 systemd-resolved[1403]: Defaulting to hostname 'linux'. Sep 5 06:24:10.580743 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 06:24:10.581981 systemd[1]: Reached target network.target - Network. Sep 5 06:24:10.582886 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 06:24:10.593106 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 06:24:10.639083 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 06:24:10.640052 systemd-timesyncd[1525]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Sep 5 06:24:10.640337 kernel: kvm_amd: TSC scaling supported Sep 5 06:24:10.640359 kernel: kvm_amd: Nested Virtualization enabled Sep 5 06:24:10.640372 kernel: kvm_amd: Nested Paging enabled Sep 5 06:24:10.640388 kernel: kvm_amd: LBR virtualization supported Sep 5 06:24:10.640401 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 5 06:24:10.640101 systemd-timesyncd[1525]: Initial clock synchronization to Fri 2025-09-05 06:24:10.900338 UTC. Sep 5 06:24:10.641731 kernel: kvm_amd: Virtual GIF supported Sep 5 06:24:10.642715 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 06:24:10.689569 kernel: EDAC MC: Ver: 3.0.0 Sep 5 06:24:10.722695 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 06:24:10.724093 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 06:24:10.725237 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 06:24:10.726499 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 06:24:10.727725 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 5 06:24:10.728977 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 06:24:10.730290 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 06:24:10.731497 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 06:24:10.732713 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 06:24:10.732743 systemd[1]: Reached target paths.target - Path Units. Sep 5 06:24:10.733640 systemd[1]: Reached target timers.target - Timer Units. Sep 5 06:24:10.735272 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Sep 5 06:24:10.738101 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 06:24:10.741236 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 5 06:24:10.742610 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 5 06:24:10.743824 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 5 06:24:10.747275 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 06:24:10.748574 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 5 06:24:10.750270 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 06:24:10.751985 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 06:24:10.752926 systemd[1]: Reached target basic.target - Basic System. Sep 5 06:24:10.753881 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 06:24:10.753909 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 06:24:10.754856 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 06:24:10.756840 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 06:24:10.758767 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 06:24:10.761162 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 06:24:10.773924 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 06:24:10.774989 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 06:24:10.776492 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Sep 5 06:24:10.778659 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 06:24:10.780597 jq[1562]: false Sep 5 06:24:10.781699 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 06:24:10.783833 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 06:24:10.788110 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 06:24:10.792560 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Refreshing passwd entry cache Sep 5 06:24:10.790660 oslogin_cache_refresh[1564]: Refreshing passwd entry cache Sep 5 06:24:10.793826 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 06:24:10.795570 extend-filesystems[1563]: Found /dev/vda6 Sep 5 06:24:10.796645 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 06:24:10.797175 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 06:24:10.798437 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 06:24:10.800247 extend-filesystems[1563]: Found /dev/vda9 Sep 5 06:24:10.802044 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Failure getting users, quitting Sep 5 06:24:10.802044 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 5 06:24:10.802044 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Refreshing group entry cache Sep 5 06:24:10.801716 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 06:24:10.801680 oslogin_cache_refresh[1564]: Failure getting users, quitting Sep 5 06:24:10.801695 oslogin_cache_refresh[1564]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Sep 5 06:24:10.801738 oslogin_cache_refresh[1564]: Refreshing group entry cache Sep 5 06:24:10.804317 extend-filesystems[1563]: Checking size of /dev/vda9 Sep 5 06:24:10.808639 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 06:24:10.812867 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Failure getting groups, quitting Sep 5 06:24:10.812867 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 06:24:10.811651 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 06:24:10.809665 oslogin_cache_refresh[1564]: Failure getting groups, quitting Sep 5 06:24:10.811893 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 06:24:10.809675 oslogin_cache_refresh[1564]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 06:24:10.812236 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 06:24:10.813226 jq[1583]: true Sep 5 06:24:10.812460 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 06:24:10.814996 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 06:24:10.815634 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 06:24:10.817237 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 5 06:24:10.817560 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Sep 5 06:24:10.821327 update_engine[1578]: I20250905 06:24:10.821257 1578 main.cc:92] Flatcar Update Engine starting Sep 5 06:24:10.824372 extend-filesystems[1563]: Resized partition /dev/vda9 Sep 5 06:24:10.826570 extend-filesystems[1593]: resize2fs 1.47.2 (1-Jan-2025) Sep 5 06:24:10.833290 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 06:24:10.837901 jq[1589]: true Sep 5 06:24:10.835877 (ntainerd)[1590]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 06:24:10.850989 tar[1588]: linux-amd64/LICENSE Sep 5 06:24:10.851267 tar[1588]: linux-amd64/helm Sep 5 06:24:10.867565 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 06:24:10.888688 extend-filesystems[1593]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 06:24:10.888688 extend-filesystems[1593]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 06:24:10.888688 extend-filesystems[1593]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 06:24:10.893231 extend-filesystems[1563]: Resized filesystem in /dev/vda9 Sep 5 06:24:10.890828 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 06:24:10.891110 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 06:24:10.898455 dbus-daemon[1560]: [system] SELinux support is enabled Sep 5 06:24:10.901814 systemd-logind[1573]: Watching system buttons on /dev/input/event2 (Power Button) Sep 5 06:24:10.901846 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 5 06:24:10.902578 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 06:24:10.904665 systemd-logind[1573]: New seat seat0. Sep 5 06:24:10.914006 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 5 06:24:10.915449 update_engine[1578]: I20250905 06:24:10.915161 1578 update_check_scheduler.cc:74] Next update check in 7m26s Sep 5 06:24:10.916834 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 06:24:10.916860 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 06:24:10.918095 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 06:24:10.918107 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 06:24:10.921637 systemd[1]: Started update-engine.service - Update Engine. Sep 5 06:24:10.925421 dbus-daemon[1560]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 06:24:10.928198 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 06:24:10.939230 bash[1623]: Updated "/home/core/.ssh/authorized_keys" Sep 5 06:24:10.939452 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 06:24:10.941722 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 06:24:10.944036 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 06:24:10.965158 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 06:24:10.970077 locksmithd[1625]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 06:24:10.970717 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 06:24:10.989350 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 06:24:10.989670 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Sep 5 06:24:10.992755 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 06:24:11.016756 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 06:24:11.021811 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 06:24:11.024019 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 06:24:11.025277 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 06:24:11.034807 containerd[1590]: time="2025-09-05T06:24:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 5 06:24:11.035640 containerd[1590]: time="2025-09-05T06:24:11.035595992Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 5 06:24:11.044554 containerd[1590]: time="2025-09-05T06:24:11.044315396Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.838µs" Sep 5 06:24:11.044554 containerd[1590]: time="2025-09-05T06:24:11.044337400Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 5 06:24:11.044554 containerd[1590]: time="2025-09-05T06:24:11.044358173Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 5 06:24:11.044554 containerd[1590]: time="2025-09-05T06:24:11.044514721Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 5 06:24:11.044554 containerd[1590]: time="2025-09-05T06:24:11.044529960Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 5 06:24:11.044709 containerd[1590]: time="2025-09-05T06:24:11.044695601Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 
5 06:24:11.044817 containerd[1590]: time="2025-09-05T06:24:11.044800654Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 06:24:11.044862 containerd[1590]: time="2025-09-05T06:24:11.044851955Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 06:24:11.047118 containerd[1590]: time="2025-09-05T06:24:11.047067818Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 06:24:11.047118 containerd[1590]: time="2025-09-05T06:24:11.047109198Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 06:24:11.047185 containerd[1590]: time="2025-09-05T06:24:11.047125418Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 06:24:11.047185 containerd[1590]: time="2025-09-05T06:24:11.047134823Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 5 06:24:11.047298 containerd[1590]: time="2025-09-05T06:24:11.047275213Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 5 06:24:11.047553 containerd[1590]: time="2025-09-05T06:24:11.047525993Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 06:24:11.047602 containerd[1590]: time="2025-09-05T06:24:11.047580500Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 5 06:24:11.047602 containerd[1590]: time="2025-09-05T06:24:11.047594869Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 5 06:24:11.047651 containerd[1590]: time="2025-09-05T06:24:11.047631925Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 5 06:24:11.048027 containerd[1590]: time="2025-09-05T06:24:11.047915085Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 5 06:24:11.048027 containerd[1590]: time="2025-09-05T06:24:11.047996644Z" level=info msg="metadata content store policy set" policy=shared Sep 5 06:24:11.053425 containerd[1590]: time="2025-09-05T06:24:11.053407143Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 5 06:24:11.053520 containerd[1590]: time="2025-09-05T06:24:11.053505130Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 5 06:24:11.053601 containerd[1590]: time="2025-09-05T06:24:11.053587279Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 5 06:24:11.053650 containerd[1590]: time="2025-09-05T06:24:11.053639562Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 5 06:24:11.053697 containerd[1590]: time="2025-09-05T06:24:11.053686600Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 5 06:24:11.053754 containerd[1590]: time="2025-09-05T06:24:11.053743963Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 5 06:24:11.053809 containerd[1590]: time="2025-09-05T06:24:11.053798335Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 5 06:24:11.053857 containerd[1590]: 
time="2025-09-05T06:24:11.053846821Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 5 06:24:11.053902 containerd[1590]: time="2025-09-05T06:24:11.053891729Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.053999253Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054016550Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054029036Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054132092Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054150372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054164895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054175871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054187179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054204071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054215420Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054226013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054237486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054247334Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054256830Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 5 06:24:11.054624 containerd[1590]: time="2025-09-05T06:24:11.054318259Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 5 06:24:11.055037 containerd[1590]: time="2025-09-05T06:24:11.054331645Z" level=info msg="Start snapshots syncer" Sep 5 06:24:11.055037 containerd[1590]: time="2025-09-05T06:24:11.054363114Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 5 06:24:11.055037 containerd[1590]: time="2025-09-05T06:24:11.054587837Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054629391Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054696820Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054799668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054818062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054829244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054846458Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054857631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054870231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054881082Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054901989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054912655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054922938Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054950859Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 5 06:24:11.055180 containerd[1590]: time="2025-09-05T06:24:11.054961483Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 5 06:24:11.055438 containerd[1590]: time="2025-09-05T06:24:11.054969593Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 5 06:24:11.055438 containerd[1590]: time="2025-09-05T06:24:11.054978801Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 5 06:24:11.055438 containerd[1590]: time="2025-09-05T06:24:11.054986538Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 5 06:24:11.055438 containerd[1590]: time="2025-09-05T06:24:11.054996263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 5 06:24:11.055438 containerd[1590]: time="2025-09-05T06:24:11.055008759Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 5 06:24:11.055438 containerd[1590]: time="2025-09-05T06:24:11.055027617Z" level=info msg="runtime interface created"
Sep 5 06:24:11.055438 containerd[1590]: time="2025-09-05T06:24:11.055033059Z" level=info msg="created NRI interface"
Sep 5 06:24:11.055438 containerd[1590]: time="2025-09-05T06:24:11.055041615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 5 06:24:11.055438 containerd[1590]: time="2025-09-05T06:24:11.055052259Z" level=info msg="Connect containerd service"
Sep 5 06:24:11.055438 containerd[1590]: time="2025-09-05T06:24:11.055072525Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 5 06:24:11.055813 containerd[1590]:
time="2025-09-05T06:24:11.055773360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 06:24:11.135695 containerd[1590]: time="2025-09-05T06:24:11.135635843Z" level=info msg="Start subscribing containerd event"
Sep 5 06:24:11.135695 containerd[1590]: time="2025-09-05T06:24:11.135675692Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 5 06:24:11.135829 containerd[1590]: time="2025-09-05T06:24:11.135700458Z" level=info msg="Start recovering state"
Sep 5 06:24:11.135829 containerd[1590]: time="2025-09-05T06:24:11.135739955Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 5 06:24:11.135883 containerd[1590]: time="2025-09-05T06:24:11.135834217Z" level=info msg="Start event monitor"
Sep 5 06:24:11.135883 containerd[1590]: time="2025-09-05T06:24:11.135852021Z" level=info msg="Start cni network conf syncer for default"
Sep 5 06:24:11.135883 containerd[1590]: time="2025-09-05T06:24:11.135861601Z" level=info msg="Start streaming server"
Sep 5 06:24:11.135883 containerd[1590]: time="2025-09-05T06:24:11.135872452Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 5 06:24:11.135883 containerd[1590]: time="2025-09-05T06:24:11.135879849Z" level=info msg="runtime interface starting up..."
Sep 5 06:24:11.135883 containerd[1590]: time="2025-09-05T06:24:11.135886501Z" level=info msg="starting plugins..."
Sep 5 06:24:11.135996 containerd[1590]: time="2025-09-05T06:24:11.135900704Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 5 06:24:11.136091 containerd[1590]: time="2025-09-05T06:24:11.136073712Z" level=info msg="containerd successfully booted in 0.101770s"
Sep 5 06:24:11.136371 systemd[1]: Started containerd.service - containerd container runtime.
Sep 5 06:24:11.177167 tar[1588]: linux-amd64/README.md
Sep 5 06:24:11.205942 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 5 06:24:12.256317 systemd-networkd[1499]: eth0: Gained IPv6LL
Sep 5 06:24:12.259457 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 5 06:24:12.261153 systemd[1]: Reached target network-online.target - Network is Online.
Sep 5 06:24:12.263512 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 5 06:24:12.265795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:24:12.267873 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 5 06:24:12.305729 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 5 06:24:12.307584 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 5 06:24:12.307831 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 5 06:24:12.310044 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 5 06:24:12.992792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:24:12.994380 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 5 06:24:12.995628 systemd[1]: Startup finished in 2.717s (kernel) + 5.968s (initrd) + 4.465s (userspace) = 13.152s.
Sep 5 06:24:13.027874 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 06:24:13.422209 kubelet[1695]: E0905 06:24:13.422070 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 06:24:13.426093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 06:24:13.426297 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 06:24:13.426683 systemd[1]: kubelet.service: Consumed 941ms CPU time, 263M memory peak.
Sep 5 06:24:15.818589 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 5 06:24:15.819695 systemd[1]: Started sshd@0-10.0.0.4:22-10.0.0.1:55810.service - OpenSSH per-connection server daemon (10.0.0.1:55810).
Sep 5 06:24:15.873923 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 55810 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:24:15.875485 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:24:15.881799 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 5 06:24:15.882909 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 5 06:24:15.888837 systemd-logind[1573]: New session 1 of user core.
Sep 5 06:24:15.899769 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 5 06:24:15.902647 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 5 06:24:15.927482 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 5 06:24:15.929804 systemd-logind[1573]: New session c1 of user core.
Sep 5 06:24:16.085537 systemd[1713]: Queued start job for default target default.target.
Sep 5 06:24:16.107867 systemd[1713]: Created slice app.slice - User Application Slice.
Sep 5 06:24:16.107892 systemd[1713]: Reached target paths.target - Paths.
Sep 5 06:24:16.107933 systemd[1713]: Reached target timers.target - Timers.
Sep 5 06:24:16.109399 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 5 06:24:16.119856 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 5 06:24:16.119977 systemd[1713]: Reached target sockets.target - Sockets.
Sep 5 06:24:16.120017 systemd[1713]: Reached target basic.target - Basic System.
Sep 5 06:24:16.120058 systemd[1713]: Reached target default.target - Main User Target.
Sep 5 06:24:16.120089 systemd[1713]: Startup finished in 182ms.
Sep 5 06:24:16.120347 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 5 06:24:16.121950 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 5 06:24:16.186451 systemd[1]: Started sshd@1-10.0.0.4:22-10.0.0.1:55826.service - OpenSSH per-connection server daemon (10.0.0.1:55826).
Sep 5 06:24:16.240471 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 55826 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:24:16.241781 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:24:16.245816 systemd-logind[1573]: New session 2 of user core.
Sep 5 06:24:16.252671 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 5 06:24:16.306069 sshd[1727]: Connection closed by 10.0.0.1 port 55826
Sep 5 06:24:16.306376 sshd-session[1724]: pam_unix(sshd:session): session closed for user core
Sep 5 06:24:16.316121 systemd[1]: sshd@1-10.0.0.4:22-10.0.0.1:55826.service: Deactivated successfully.
Sep 5 06:24:16.317861 systemd[1]: session-2.scope: Deactivated successfully.
Sep 5 06:24:16.318571 systemd-logind[1573]: Session 2 logged out.
Waiting for processes to exit.
Sep 5 06:24:16.321155 systemd[1]: Started sshd@2-10.0.0.4:22-10.0.0.1:55834.service - OpenSSH per-connection server daemon (10.0.0.1:55834).
Sep 5 06:24:16.321787 systemd-logind[1573]: Removed session 2.
Sep 5 06:24:16.381745 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 55834 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:24:16.382924 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:24:16.386890 systemd-logind[1573]: New session 3 of user core.
Sep 5 06:24:16.404688 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 5 06:24:16.453069 sshd[1736]: Connection closed by 10.0.0.1 port 55834
Sep 5 06:24:16.453413 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Sep 5 06:24:16.463054 systemd[1]: sshd@2-10.0.0.4:22-10.0.0.1:55834.service: Deactivated successfully.
Sep 5 06:24:16.464840 systemd[1]: session-3.scope: Deactivated successfully.
Sep 5 06:24:16.465536 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit.
Sep 5 06:24:16.468062 systemd[1]: Started sshd@3-10.0.0.4:22-10.0.0.1:55838.service - OpenSSH per-connection server daemon (10.0.0.1:55838).
Sep 5 06:24:16.468612 systemd-logind[1573]: Removed session 3.
Sep 5 06:24:16.523218 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 55838 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:24:16.524406 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:24:16.528134 systemd-logind[1573]: New session 4 of user core.
Sep 5 06:24:16.539666 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 5 06:24:16.592387 sshd[1745]: Connection closed by 10.0.0.1 port 55838
Sep 5 06:24:16.592761 sshd-session[1742]: pam_unix(sshd:session): session closed for user core
Sep 5 06:24:16.604880 systemd[1]: sshd@3-10.0.0.4:22-10.0.0.1:55838.service: Deactivated successfully.
Sep 5 06:24:16.606575 systemd[1]: session-4.scope: Deactivated successfully.
Sep 5 06:24:16.607238 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit.
Sep 5 06:24:16.609734 systemd[1]: Started sshd@4-10.0.0.4:22-10.0.0.1:55844.service - OpenSSH per-connection server daemon (10.0.0.1:55844).
Sep 5 06:24:16.610259 systemd-logind[1573]: Removed session 4.
Sep 5 06:24:16.664058 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 55844 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:24:16.665374 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:24:16.669193 systemd-logind[1573]: New session 5 of user core.
Sep 5 06:24:16.678667 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 5 06:24:16.735542 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 5 06:24:16.735869 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 06:24:16.751286 sudo[1755]: pam_unix(sudo:session): session closed for user root
Sep 5 06:24:16.752960 sshd[1754]: Connection closed by 10.0.0.1 port 55844
Sep 5 06:24:16.753306 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Sep 5 06:24:16.762104 systemd[1]: sshd@4-10.0.0.4:22-10.0.0.1:55844.service: Deactivated successfully.
Sep 5 06:24:16.763897 systemd[1]: session-5.scope: Deactivated successfully.
Sep 5 06:24:16.764621 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit.
Sep 5 06:24:16.767276 systemd[1]: Started sshd@5-10.0.0.4:22-10.0.0.1:55850.service - OpenSSH per-connection server daemon (10.0.0.1:55850).
Sep 5 06:24:16.767795 systemd-logind[1573]: Removed session 5.
Sep 5 06:24:16.824712 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 55850 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:24:16.825922 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:24:16.829990 systemd-logind[1573]: New session 6 of user core.
Sep 5 06:24:16.836669 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 5 06:24:16.889490 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 5 06:24:16.889814 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 06:24:17.108238 sudo[1766]: pam_unix(sudo:session): session closed for user root
Sep 5 06:24:17.114110 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 5 06:24:17.114407 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 06:24:17.123542 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 06:24:17.163728 augenrules[1788]: No rules
Sep 5 06:24:17.165398 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 06:24:17.165699 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 06:24:17.166945 sudo[1765]: pam_unix(sudo:session): session closed for user root
Sep 5 06:24:17.168342 sshd[1764]: Connection closed by 10.0.0.1 port 55850
Sep 5 06:24:17.168682 sshd-session[1761]: pam_unix(sshd:session): session closed for user core
Sep 5 06:24:17.176353 systemd[1]: sshd@5-10.0.0.4:22-10.0.0.1:55850.service: Deactivated successfully.
Sep 5 06:24:17.178086 systemd[1]: session-6.scope: Deactivated successfully.
Sep 5 06:24:17.178759 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit.
Sep 5 06:24:17.181297 systemd[1]: Started sshd@6-10.0.0.4:22-10.0.0.1:55856.service - OpenSSH per-connection server daemon (10.0.0.1:55856).
Sep 5 06:24:17.181879 systemd-logind[1573]: Removed session 6.
Sep 5 06:24:17.232032 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 55856 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:24:17.233247 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:24:17.237131 systemd-logind[1573]: New session 7 of user core.
Sep 5 06:24:17.247663 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 5 06:24:17.299870 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 5 06:24:17.300176 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 06:24:17.584456 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 5 06:24:17.605868 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 5 06:24:17.824414 dockerd[1821]: time="2025-09-05T06:24:17.824343390Z" level=info msg="Starting up"
Sep 5 06:24:17.825487 dockerd[1821]: time="2025-09-05T06:24:17.825449450Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 5 06:24:17.837304 dockerd[1821]: time="2025-09-05T06:24:17.837210254Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 5 06:24:18.256999 dockerd[1821]: time="2025-09-05T06:24:18.256917932Z" level=info msg="Loading containers: start."
Sep 5 06:24:18.266572 kernel: Initializing XFRM netlink socket
Sep 5 06:24:18.510008 systemd-networkd[1499]: docker0: Link UP
Sep 5 06:24:18.514971 dockerd[1821]: time="2025-09-05T06:24:18.514926799Z" level=info msg="Loading containers: done."
Sep 5 06:24:18.527933 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck833045239-merged.mount: Deactivated successfully.
Sep 5 06:24:18.529157 dockerd[1821]: time="2025-09-05T06:24:18.529112040Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 5 06:24:18.529219 dockerd[1821]: time="2025-09-05T06:24:18.529194007Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 5 06:24:18.529315 dockerd[1821]: time="2025-09-05T06:24:18.529288251Z" level=info msg="Initializing buildkit"
Sep 5 06:24:18.557597 dockerd[1821]: time="2025-09-05T06:24:18.557566328Z" level=info msg="Completed buildkit initialization"
Sep 5 06:24:18.564710 dockerd[1821]: time="2025-09-05T06:24:18.564662364Z" level=info msg="Daemon has completed initialization"
Sep 5 06:24:18.564824 dockerd[1821]: time="2025-09-05T06:24:18.564743823Z" level=info msg="API listen on /run/docker.sock"
Sep 5 06:24:18.564925 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 5 06:24:19.292375 containerd[1590]: time="2025-09-05T06:24:19.292337862Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 5 06:24:19.852048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261208751.mount: Deactivated successfully.
Sep 5 06:24:20.732036 containerd[1590]: time="2025-09-05T06:24:20.731977124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:20.732602 containerd[1590]: time="2025-09-05T06:24:20.732581351Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687"
Sep 5 06:24:20.733757 containerd[1590]: time="2025-09-05T06:24:20.733715317Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:20.736096 containerd[1590]: time="2025-09-05T06:24:20.736047460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:20.736993 containerd[1590]: time="2025-09-05T06:24:20.736948576Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 1.444569462s"
Sep 5 06:24:20.736993 containerd[1590]: time="2025-09-05T06:24:20.736981284Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\""
Sep 5 06:24:20.737489 containerd[1590]: time="2025-09-05T06:24:20.737459972Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 5 06:24:21.938803 containerd[1590]: time="2025-09-05T06:24:21.938748077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\"
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:21.939431 containerd[1590]: time="2025-09-05T06:24:21.939383240Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128"
Sep 5 06:24:21.940442 containerd[1590]: time="2025-09-05T06:24:21.940419525Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:21.942813 containerd[1590]: time="2025-09-05T06:24:21.942777823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:21.943585 containerd[1590]: time="2025-09-05T06:24:21.943552916Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.206068175s"
Sep 5 06:24:21.943629 containerd[1590]: time="2025-09-05T06:24:21.943584139Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\""
Sep 5 06:24:21.944263 containerd[1590]: time="2025-09-05T06:24:21.944063469Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 5 06:24:23.537273 containerd[1590]: time="2025-09-05T06:24:23.537201673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:23.538012 containerd[1590]: time="2025-09-05T06:24:23.537947692Z" level=info
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036"
Sep 5 06:24:23.539160 containerd[1590]: time="2025-09-05T06:24:23.539106974Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:23.541778 containerd[1590]: time="2025-09-05T06:24:23.541732174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:23.542684 containerd[1590]: time="2025-09-05T06:24:23.542650355Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.598551425s"
Sep 5 06:24:23.542724 containerd[1590]: time="2025-09-05T06:24:23.542683180Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\""
Sep 5 06:24:23.543231 containerd[1590]: time="2025-09-05T06:24:23.543194805Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 5 06:24:23.676786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 5 06:24:23.678260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:24:23.878126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:24:23.881871 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 06:24:24.097358 kubelet[2112]: E0905 06:24:24.097294 2112 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 06:24:24.103757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 06:24:24.103964 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 06:24:24.104387 systemd[1]: kubelet.service: Consumed 366ms CPU time, 111.1M memory peak.
Sep 5 06:24:24.834935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245754052.mount: Deactivated successfully.
Sep 5 06:24:25.633552 containerd[1590]: time="2025-09-05T06:24:25.633487610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:25.634286 containerd[1590]: time="2025-09-05T06:24:25.634254008Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170"
Sep 5 06:24:25.635468 containerd[1590]: time="2025-09-05T06:24:25.635437236Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:25.637337 containerd[1590]: time="2025-09-05T06:24:25.637307486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:25.637845 containerd[1590]: time="2025-09-05T06:24:25.637802492Z" level=info msg="Pulled
image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 2.094579595s"
Sep 5 06:24:25.637879 containerd[1590]: time="2025-09-05T06:24:25.637843151Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\""
Sep 5 06:24:25.638571 containerd[1590]: time="2025-09-05T06:24:25.638332941Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 5 06:24:26.240088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291163618.mount: Deactivated successfully.
Sep 5 06:24:26.909935 containerd[1590]: time="2025-09-05T06:24:26.909865517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:26.910679 containerd[1590]: time="2025-09-05T06:24:26.910636578Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 5 06:24:26.911706 containerd[1590]: time="2025-09-05T06:24:26.911666536Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:26.914253 containerd[1590]: time="2025-09-05T06:24:26.914203447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:26.915023 containerd[1590]: time="2025-09-05T06:24:26.914978060Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.276612751s"
Sep 5 06:24:26.915023 containerd[1590]: time="2025-09-05T06:24:26.915009999Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 5 06:24:26.915470 containerd[1590]: time="2025-09-05T06:24:26.915447319Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 5 06:24:27.465824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814563215.mount: Deactivated successfully.
Sep 5 06:24:27.471418 containerd[1590]: time="2025-09-05T06:24:27.471373362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 06:24:27.472008 containerd[1590]: time="2025-09-05T06:24:27.471978832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 5 06:24:27.473159 containerd[1590]: time="2025-09-05T06:24:27.473125916Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 06:24:27.474963 containerd[1590]: time="2025-09-05T06:24:27.474926725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 06:24:27.475481 containerd[1590]: time="2025-09-05T06:24:27.475455255Z" level=info msg="Pulled image
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 559.981054ms" Sep 5 06:24:27.475508 containerd[1590]: time="2025-09-05T06:24:27.475480660Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 06:24:27.475920 containerd[1590]: time="2025-09-05T06:24:27.475889821Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 5 06:24:28.030505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount830774445.mount: Deactivated successfully. Sep 5 06:24:30.361137 containerd[1590]: time="2025-09-05T06:24:30.361076583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:24:30.361823 containerd[1590]: time="2025-09-05T06:24:30.361794037Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 5 06:24:30.362893 containerd[1590]: time="2025-09-05T06:24:30.362847694Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:24:30.365086 containerd[1590]: time="2025-09-05T06:24:30.365052862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:24:30.366057 containerd[1590]: time="2025-09-05T06:24:30.366030010Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.890112039s" Sep 5 06:24:30.366111 containerd[1590]: time="2025-09-05T06:24:30.366059119Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 5 06:24:32.123052 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:24:32.123222 systemd[1]: kubelet.service: Consumed 366ms CPU time, 111.1M memory peak. Sep 5 06:24:32.125280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:24:32.147921 systemd[1]: Reload requested from client PID 2268 ('systemctl') (unit session-7.scope)... Sep 5 06:24:32.147936 systemd[1]: Reloading... Sep 5 06:24:32.239571 zram_generator::config[2313]: No configuration found. Sep 5 06:24:32.632657 systemd[1]: Reloading finished in 484 ms. Sep 5 06:24:32.701441 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 06:24:32.701577 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 06:24:32.701876 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:24:32.701922 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.2M memory peak. Sep 5 06:24:32.703421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:24:32.892056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:24:32.895812 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 06:24:32.933504 kubelet[2358]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 06:24:32.933504 kubelet[2358]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 06:24:32.933504 kubelet[2358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 06:24:32.933833 kubelet[2358]: I0905 06:24:32.933576 2358 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 06:24:33.369521 kubelet[2358]: I0905 06:24:33.369415 2358 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 06:24:33.369521 kubelet[2358]: I0905 06:24:33.369445 2358 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 06:24:33.369774 kubelet[2358]: I0905 06:24:33.369736 2358 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 06:24:33.396989 kubelet[2358]: E0905 06:24:33.396939 2358 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 06:24:33.397145 kubelet[2358]: I0905 06:24:33.397106 2358 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 06:24:33.404245 kubelet[2358]: I0905 06:24:33.404214 2358 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 06:24:33.410827 kubelet[2358]: I0905 06:24:33.409554 2358 
server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 06:24:33.410827 kubelet[2358]: I0905 06:24:33.410509 2358 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 06:24:33.410827 kubelet[2358]: I0905 06:24:33.410560 2358 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 06:24:33.410988 kubelet[2358]: I0905 
06:24:33.410843 2358 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 06:24:33.410988 kubelet[2358]: I0905 06:24:33.410853 2358 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 06:24:33.410988 kubelet[2358]: I0905 06:24:33.410974 2358 state_mem.go:36] "Initialized new in-memory state store" Sep 5 06:24:33.413698 kubelet[2358]: I0905 06:24:33.413668 2358 kubelet.go:446] "Attempting to sync node with API server" Sep 5 06:24:33.413742 kubelet[2358]: I0905 06:24:33.413703 2358 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 06:24:33.413742 kubelet[2358]: I0905 06:24:33.413728 2358 kubelet.go:352] "Adding apiserver pod source" Sep 5 06:24:33.413742 kubelet[2358]: I0905 06:24:33.413739 2358 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 06:24:33.417552 kubelet[2358]: I0905 06:24:33.417512 2358 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 5 06:24:33.417775 kubelet[2358]: W0905 06:24:33.417733 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.4:6443: connect: connection refused Sep 5 06:24:33.417827 kubelet[2358]: E0905 06:24:33.417798 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 06:24:33.417943 kubelet[2358]: W0905 06:24:33.417890 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": 
dial tcp 10.0.0.4:6443: connect: connection refused Sep 5 06:24:33.417943 kubelet[2358]: I0905 06:24:33.417928 2358 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 06:24:33.418021 kubelet[2358]: E0905 06:24:33.418004 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 06:24:33.418400 kubelet[2358]: W0905 06:24:33.418371 2358 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 06:24:33.420024 kubelet[2358]: I0905 06:24:33.420006 2358 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 06:24:33.420061 kubelet[2358]: I0905 06:24:33.420041 2358 server.go:1287] "Started kubelet" Sep 5 06:24:33.420914 kubelet[2358]: I0905 06:24:33.420887 2358 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 06:24:33.421475 kubelet[2358]: I0905 06:24:33.421400 2358 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 06:24:33.421680 kubelet[2358]: I0905 06:24:33.421659 2358 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 06:24:33.421705 kubelet[2358]: I0905 06:24:33.421681 2358 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 06:24:33.421705 kubelet[2358]: I0905 06:24:33.421443 2358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 06:24:33.421759 kubelet[2358]: I0905 06:24:33.421725 2358 server.go:479] "Adding debug handlers to kubelet server" Sep 5 06:24:33.423639 kubelet[2358]: E0905 
06:24:33.423604 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:33.423685 kubelet[2358]: I0905 06:24:33.423653 2358 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 06:24:33.423806 kubelet[2358]: I0905 06:24:33.423785 2358 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 06:24:33.423845 kubelet[2358]: I0905 06:24:33.423834 2358 reconciler.go:26] "Reconciler: start to sync state" Sep 5 06:24:33.424525 kubelet[2358]: W0905 06:24:33.424470 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.4:6443: connect: connection refused Sep 5 06:24:33.424700 kubelet[2358]: E0905 06:24:33.424643 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 06:24:33.424914 kubelet[2358]: E0905 06:24:33.424887 2358 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 06:24:33.425353 kubelet[2358]: I0905 06:24:33.425332 2358 factory.go:221] Registration of the containerd container factory successfully Sep 5 06:24:33.425353 kubelet[2358]: I0905 06:24:33.425346 2358 factory.go:221] Registration of the systemd container factory successfully Sep 5 06:24:33.425415 kubelet[2358]: E0905 06:24:33.425332 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="200ms" Sep 5 06:24:33.425439 kubelet[2358]: I0905 06:24:33.425416 2358 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 06:24:33.428686 kubelet[2358]: E0905 06:24:33.427402 2358 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.4:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.4:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18624ed326910562 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 06:24:33.420019042 +0000 UTC m=+0.520367466,LastTimestamp:2025-09-05 06:24:33.420019042 +0000 UTC m=+0.520367466,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 06:24:33.437670 kubelet[2358]: I0905 06:24:33.437621 2358 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 5 06:24:33.440611 kubelet[2358]: I0905 06:24:33.440577 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 06:24:33.440611 kubelet[2358]: I0905 06:24:33.440607 2358 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 06:24:33.440704 kubelet[2358]: I0905 06:24:33.440658 2358 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 06:24:33.440704 kubelet[2358]: I0905 06:24:33.440666 2358 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 06:24:33.440744 kubelet[2358]: E0905 06:24:33.440720 2358 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 06:24:33.442753 kubelet[2358]: I0905 06:24:33.442697 2358 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 06:24:33.442753 kubelet[2358]: I0905 06:24:33.442719 2358 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 06:24:33.442753 kubelet[2358]: I0905 06:24:33.442733 2358 state_mem.go:36] "Initialized new in-memory state store" Sep 5 06:24:33.444447 kubelet[2358]: W0905 06:24:33.444398 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.4:6443: connect: connection refused Sep 5 06:24:33.444499 kubelet[2358]: E0905 06:24:33.444448 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 06:24:33.523782 kubelet[2358]: E0905 06:24:33.523763 2358 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"localhost\" not found" Sep 5 06:24:33.541694 kubelet[2358]: E0905 06:24:33.541655 2358 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 06:24:33.623968 kubelet[2358]: E0905 06:24:33.623895 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:33.626617 kubelet[2358]: E0905 06:24:33.626578 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="400ms" Sep 5 06:24:33.725020 kubelet[2358]: E0905 06:24:33.724979 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:33.742375 kubelet[2358]: E0905 06:24:33.742330 2358 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 06:24:33.825803 kubelet[2358]: E0905 06:24:33.825766 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:33.926920 kubelet[2358]: E0905 06:24:33.926852 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:33.983762 kubelet[2358]: I0905 06:24:33.983718 2358 policy_none.go:49] "None policy: Start" Sep 5 06:24:33.983762 kubelet[2358]: I0905 06:24:33.983740 2358 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 06:24:33.983762 kubelet[2358]: I0905 06:24:33.983760 2358 state_mem.go:35] "Initializing new in-memory state store" Sep 5 06:24:33.990060 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 06:24:34.000478 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 5 06:24:34.003338 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 06:24:34.023344 kubelet[2358]: I0905 06:24:34.023326 2358 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 06:24:34.023758 kubelet[2358]: I0905 06:24:34.023553 2358 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 06:24:34.023758 kubelet[2358]: I0905 06:24:34.023571 2358 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 06:24:34.023834 kubelet[2358]: I0905 06:24:34.023777 2358 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 06:24:34.024722 kubelet[2358]: E0905 06:24:34.024677 2358 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 06:24:34.024820 kubelet[2358]: E0905 06:24:34.024730 2358 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 06:24:34.027807 kubelet[2358]: E0905 06:24:34.027770 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.4:6443: connect: connection refused" interval="800ms" Sep 5 06:24:34.124907 kubelet[2358]: I0905 06:24:34.124886 2358 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 06:24:34.125211 kubelet[2358]: E0905 06:24:34.125171 2358 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.4:6443/api/v1/nodes\": dial tcp 10.0.0.4:6443: connect: connection refused" node="localhost" Sep 5 06:24:34.149868 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container 
kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 5 06:24:34.157326 kubelet[2358]: E0905 06:24:34.157297 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:24:34.160627 systemd[1]: Created slice kubepods-burstable-podd28bd2c6efefbed67a7768821a423ea2.slice - libcontainer container kubepods-burstable-podd28bd2c6efefbed67a7768821a423ea2.slice. Sep 5 06:24:34.162190 kubelet[2358]: E0905 06:24:34.162171 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:24:34.163778 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 5 06:24:34.165325 kubelet[2358]: E0905 06:24:34.165297 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:24:34.229794 kubelet[2358]: I0905 06:24:34.229724 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d28bd2c6efefbed67a7768821a423ea2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d28bd2c6efefbed67a7768821a423ea2\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:24:34.229794 kubelet[2358]: I0905 06:24:34.229752 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:34.229794 kubelet[2358]: I0905 06:24:34.229772 2358 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:34.229794 kubelet[2358]: I0905 06:24:34.229789 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:34.229921 kubelet[2358]: I0905 06:24:34.229812 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:34.229921 kubelet[2358]: I0905 06:24:34.229837 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:34.229921 kubelet[2358]: I0905 06:24:34.229888 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 5 06:24:34.230005 kubelet[2358]: I0905 06:24:34.229950 2358 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d28bd2c6efefbed67a7768821a423ea2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d28bd2c6efefbed67a7768821a423ea2\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:24:34.230005 kubelet[2358]: I0905 06:24:34.229973 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d28bd2c6efefbed67a7768821a423ea2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d28bd2c6efefbed67a7768821a423ea2\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:24:34.326984 kubelet[2358]: I0905 06:24:34.326937 2358 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 06:24:34.327278 kubelet[2358]: E0905 06:24:34.327241 2358 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.4:6443/api/v1/nodes\": dial tcp 10.0.0.4:6443: connect: connection refused" node="localhost" Sep 5 06:24:34.372926 kubelet[2358]: W0905 06:24:34.372867 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.4:6443: connect: connection refused Sep 5 06:24:34.372985 kubelet[2358]: E0905 06:24:34.372937 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 06:24:34.458510 kubelet[2358]: E0905 06:24:34.458474 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:34.459043 containerd[1590]: time="2025-09-05T06:24:34.458995483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 5 06:24:34.463215 kubelet[2358]: E0905 06:24:34.463196 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:34.463588 containerd[1590]: time="2025-09-05T06:24:34.463557221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d28bd2c6efefbed67a7768821a423ea2,Namespace:kube-system,Attempt:0,}" Sep 5 06:24:34.465877 kubelet[2358]: E0905 06:24:34.465847 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:34.466203 containerd[1590]: time="2025-09-05T06:24:34.466171115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 5 06:24:34.489170 containerd[1590]: time="2025-09-05T06:24:34.489036122Z" level=info msg="connecting to shim 40532000effa3b0ec7731199ab9a82a3eb5023b7a5a09469558133667fa65b19" address="unix:///run/containerd/s/d453517ee70ed88831327b5c089a34245777ff8ac5bd3b5982bafc4437101649" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:24:34.497312 containerd[1590]: time="2025-09-05T06:24:34.497285010Z" level=info msg="connecting to shim 623c9dbba23d10722792caa05a552ebbc6472879d72d1c5f504ab3ae9bc05a1c" address="unix:///run/containerd/s/8cd68f39e30a55d6a1c7717468e20eec2add204d35080554e1a9363603ceb59b" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:24:34.510940 containerd[1590]: time="2025-09-05T06:24:34.510286521Z" level=info 
msg="connecting to shim e82eca16efe59afbbdecd4632fc975e1a0cfeab3ef5dc65611eae66ecb4253ef" address="unix:///run/containerd/s/76e47e0296704082a4d328d6be9f88e2d988360de549f4461ccfa1fd6296d613" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:24:34.523663 systemd[1]: Started cri-containerd-40532000effa3b0ec7731199ab9a82a3eb5023b7a5a09469558133667fa65b19.scope - libcontainer container 40532000effa3b0ec7731199ab9a82a3eb5023b7a5a09469558133667fa65b19. Sep 5 06:24:34.527227 systemd[1]: Started cri-containerd-623c9dbba23d10722792caa05a552ebbc6472879d72d1c5f504ab3ae9bc05a1c.scope - libcontainer container 623c9dbba23d10722792caa05a552ebbc6472879d72d1c5f504ab3ae9bc05a1c. Sep 5 06:24:34.531126 systemd[1]: Started cri-containerd-e82eca16efe59afbbdecd4632fc975e1a0cfeab3ef5dc65611eae66ecb4253ef.scope - libcontainer container e82eca16efe59afbbdecd4632fc975e1a0cfeab3ef5dc65611eae66ecb4253ef. Sep 5 06:24:34.575523 containerd[1590]: time="2025-09-05T06:24:34.575474436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"40532000effa3b0ec7731199ab9a82a3eb5023b7a5a09469558133667fa65b19\"" Sep 5 06:24:34.577406 kubelet[2358]: E0905 06:24:34.577369 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:34.578048 containerd[1590]: time="2025-09-05T06:24:34.578014761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d28bd2c6efefbed67a7768821a423ea2,Namespace:kube-system,Attempt:0,} returns sandbox id \"623c9dbba23d10722792caa05a552ebbc6472879d72d1c5f504ab3ae9bc05a1c\"" Sep 5 06:24:34.579067 kubelet[2358]: E0905 06:24:34.579049 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 5 06:24:34.580883 containerd[1590]: time="2025-09-05T06:24:34.580822560Z" level=info msg="CreateContainer within sandbox \"40532000effa3b0ec7731199ab9a82a3eb5023b7a5a09469558133667fa65b19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 06:24:34.580955 containerd[1590]: time="2025-09-05T06:24:34.580926552Z" level=info msg="CreateContainer within sandbox \"623c9dbba23d10722792caa05a552ebbc6472879d72d1c5f504ab3ae9bc05a1c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 06:24:34.581099 containerd[1590]: time="2025-09-05T06:24:34.581074643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"e82eca16efe59afbbdecd4632fc975e1a0cfeab3ef5dc65611eae66ecb4253ef\"" Sep 5 06:24:34.582311 kubelet[2358]: E0905 06:24:34.582288 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:34.584173 containerd[1590]: time="2025-09-05T06:24:34.583586572Z" level=info msg="CreateContainer within sandbox \"e82eca16efe59afbbdecd4632fc975e1a0cfeab3ef5dc65611eae66ecb4253ef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 06:24:34.590871 containerd[1590]: time="2025-09-05T06:24:34.590840237Z" level=info msg="Container f96daf1c8ff557e20c0b329dd08aecd344f0f9386545dd5a4741a8ea80dca39f: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:24:34.597735 containerd[1590]: time="2025-09-05T06:24:34.597701215Z" level=info msg="Container 0f95446ef5867d3cf7482b41b5baa36009cc2dda67d04f327ffd7ee73c10b4d1: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:24:34.602213 containerd[1590]: time="2025-09-05T06:24:34.602178627Z" level=info msg="CreateContainer within sandbox \"40532000effa3b0ec7731199ab9a82a3eb5023b7a5a09469558133667fa65b19\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f96daf1c8ff557e20c0b329dd08aecd344f0f9386545dd5a4741a8ea80dca39f\"" Sep 5 06:24:34.602917 containerd[1590]: time="2025-09-05T06:24:34.602876436Z" level=info msg="StartContainer for \"f96daf1c8ff557e20c0b329dd08aecd344f0f9386545dd5a4741a8ea80dca39f\"" Sep 5 06:24:34.603606 containerd[1590]: time="2025-09-05T06:24:34.603572147Z" level=info msg="Container a0cdf0636e913f5e6d11735ab137797143308eb2d74083f4a8409c59ea517b05: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:24:34.603883 containerd[1590]: time="2025-09-05T06:24:34.603858956Z" level=info msg="connecting to shim f96daf1c8ff557e20c0b329dd08aecd344f0f9386545dd5a4741a8ea80dca39f" address="unix:///run/containerd/s/d453517ee70ed88831327b5c089a34245777ff8ac5bd3b5982bafc4437101649" protocol=ttrpc version=3 Sep 5 06:24:34.606107 containerd[1590]: time="2025-09-05T06:24:34.606070039Z" level=info msg="CreateContainer within sandbox \"623c9dbba23d10722792caa05a552ebbc6472879d72d1c5f504ab3ae9bc05a1c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0f95446ef5867d3cf7482b41b5baa36009cc2dda67d04f327ffd7ee73c10b4d1\"" Sep 5 06:24:34.606419 containerd[1590]: time="2025-09-05T06:24:34.606391014Z" level=info msg="StartContainer for \"0f95446ef5867d3cf7482b41b5baa36009cc2dda67d04f327ffd7ee73c10b4d1\"" Sep 5 06:24:34.607347 containerd[1590]: time="2025-09-05T06:24:34.607324719Z" level=info msg="connecting to shim 0f95446ef5867d3cf7482b41b5baa36009cc2dda67d04f327ffd7ee73c10b4d1" address="unix:///run/containerd/s/8cd68f39e30a55d6a1c7717468e20eec2add204d35080554e1a9363603ceb59b" protocol=ttrpc version=3 Sep 5 06:24:34.610553 containerd[1590]: time="2025-09-05T06:24:34.610396762Z" level=info msg="CreateContainer within sandbox \"e82eca16efe59afbbdecd4632fc975e1a0cfeab3ef5dc65611eae66ecb4253ef\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"a0cdf0636e913f5e6d11735ab137797143308eb2d74083f4a8409c59ea517b05\"" Sep 5 06:24:34.611010 containerd[1590]: time="2025-09-05T06:24:34.610982913Z" level=info msg="StartContainer for \"a0cdf0636e913f5e6d11735ab137797143308eb2d74083f4a8409c59ea517b05\"" Sep 5 06:24:34.611966 containerd[1590]: time="2025-09-05T06:24:34.611933938Z" level=info msg="connecting to shim a0cdf0636e913f5e6d11735ab137797143308eb2d74083f4a8409c59ea517b05" address="unix:///run/containerd/s/76e47e0296704082a4d328d6be9f88e2d988360de549f4461ccfa1fd6296d613" protocol=ttrpc version=3 Sep 5 06:24:34.623680 systemd[1]: Started cri-containerd-f96daf1c8ff557e20c0b329dd08aecd344f0f9386545dd5a4741a8ea80dca39f.scope - libcontainer container f96daf1c8ff557e20c0b329dd08aecd344f0f9386545dd5a4741a8ea80dca39f. Sep 5 06:24:34.633676 systemd[1]: Started cri-containerd-0f95446ef5867d3cf7482b41b5baa36009cc2dda67d04f327ffd7ee73c10b4d1.scope - libcontainer container 0f95446ef5867d3cf7482b41b5baa36009cc2dda67d04f327ffd7ee73c10b4d1. Sep 5 06:24:34.634830 systemd[1]: Started cri-containerd-a0cdf0636e913f5e6d11735ab137797143308eb2d74083f4a8409c59ea517b05.scope - libcontainer container a0cdf0636e913f5e6d11735ab137797143308eb2d74083f4a8409c59ea517b05. 
Sep 5 06:24:34.637954 kubelet[2358]: W0905 06:24:34.637515 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.4:6443: connect: connection refused Sep 5 06:24:34.638016 kubelet[2358]: E0905 06:24:34.637963 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.4:6443: connect: connection refused" logger="UnhandledError" Sep 5 06:24:34.687310 containerd[1590]: time="2025-09-05T06:24:34.686799880Z" level=info msg="StartContainer for \"f96daf1c8ff557e20c0b329dd08aecd344f0f9386545dd5a4741a8ea80dca39f\" returns successfully" Sep 5 06:24:34.690502 containerd[1590]: time="2025-09-05T06:24:34.690433912Z" level=info msg="StartContainer for \"a0cdf0636e913f5e6d11735ab137797143308eb2d74083f4a8409c59ea517b05\" returns successfully" Sep 5 06:24:34.692547 containerd[1590]: time="2025-09-05T06:24:34.692500999Z" level=info msg="StartContainer for \"0f95446ef5867d3cf7482b41b5baa36009cc2dda67d04f327ffd7ee73c10b4d1\" returns successfully" Sep 5 06:24:34.729336 kubelet[2358]: I0905 06:24:34.729311 2358 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 06:24:34.729980 kubelet[2358]: E0905 06:24:34.729958 2358 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.4:6443/api/v1/nodes\": dial tcp 10.0.0.4:6443: connect: connection refused" node="localhost" Sep 5 06:24:35.448100 kubelet[2358]: E0905 06:24:35.448066 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:24:35.448485 kubelet[2358]: E0905 06:24:35.448183 2358 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:35.449711 kubelet[2358]: E0905 06:24:35.449524 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:24:35.449972 kubelet[2358]: E0905 06:24:35.449941 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:35.455761 kubelet[2358]: E0905 06:24:35.455734 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:24:35.455868 kubelet[2358]: E0905 06:24:35.455836 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:35.531338 kubelet[2358]: I0905 06:24:35.531297 2358 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 06:24:35.647928 kubelet[2358]: E0905 06:24:35.647889 2358 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 06:24:35.742226 kubelet[2358]: I0905 06:24:35.741869 2358 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 06:24:35.742226 kubelet[2358]: E0905 06:24:35.741907 2358 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 06:24:35.753149 kubelet[2358]: E0905 06:24:35.753120 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:35.853986 kubelet[2358]: E0905 06:24:35.853923 2358 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:35.955031 kubelet[2358]: E0905 06:24:35.954993 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:36.055963 kubelet[2358]: E0905 06:24:36.055880 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:36.156591 kubelet[2358]: E0905 06:24:36.156561 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:36.257307 kubelet[2358]: E0905 06:24:36.257271 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:36.358007 kubelet[2358]: E0905 06:24:36.357914 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:36.455328 kubelet[2358]: E0905 06:24:36.455307 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:24:36.455634 kubelet[2358]: E0905 06:24:36.455419 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:24:36.455634 kubelet[2358]: E0905 06:24:36.455429 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:36.455634 kubelet[2358]: E0905 06:24:36.455567 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:36.458453 kubelet[2358]: E0905 06:24:36.458434 2358 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"localhost\" not found" Sep 5 06:24:36.559550 kubelet[2358]: E0905 06:24:36.559492 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:36.660212 kubelet[2358]: E0905 06:24:36.660127 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:36.760816 kubelet[2358]: E0905 06:24:36.760781 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:36.861407 kubelet[2358]: E0905 06:24:36.861373 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:24:36.925380 kubelet[2358]: I0905 06:24:36.925137 2358 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 06:24:36.931419 kubelet[2358]: I0905 06:24:36.931397 2358 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 06:24:36.935019 kubelet[2358]: I0905 06:24:36.934972 2358 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:37.419081 kubelet[2358]: I0905 06:24:37.418982 2358 apiserver.go:52] "Watching apiserver" Sep 5 06:24:37.420659 kubelet[2358]: E0905 06:24:37.420624 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:37.424142 kubelet[2358]: I0905 06:24:37.424112 2358 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 06:24:37.455663 kubelet[2358]: E0905 06:24:37.455521 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:37.456053 kubelet[2358]: 
E0905 06:24:37.455782 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:37.834459 systemd[1]: Reload requested from client PID 2632 ('systemctl') (unit session-7.scope)... Sep 5 06:24:37.834473 systemd[1]: Reloading... Sep 5 06:24:37.914583 zram_generator::config[2678]: No configuration found. Sep 5 06:24:38.299233 systemd[1]: Reloading finished in 464 ms. Sep 5 06:24:38.332844 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:24:38.361059 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 06:24:38.361325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:24:38.361369 systemd[1]: kubelet.service: Consumed 928ms CPU time, 132.3M memory peak. Sep 5 06:24:38.364034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:24:38.568331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:24:38.580908 (kubelet)[2720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 06:24:38.619509 kubelet[2720]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 06:24:38.619509 kubelet[2720]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 06:24:38.619509 kubelet[2720]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 06:24:38.619851 kubelet[2720]: I0905 06:24:38.619612 2720 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 06:24:38.626481 kubelet[2720]: I0905 06:24:38.626443 2720 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 06:24:38.626481 kubelet[2720]: I0905 06:24:38.626472 2720 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 06:24:38.626791 kubelet[2720]: I0905 06:24:38.626756 2720 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 06:24:38.627907 kubelet[2720]: I0905 06:24:38.627882 2720 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 06:24:38.630019 kubelet[2720]: I0905 06:24:38.629992 2720 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 06:24:38.633857 kubelet[2720]: I0905 06:24:38.633824 2720 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 06:24:38.638970 kubelet[2720]: I0905 06:24:38.638951 2720 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 06:24:38.639230 kubelet[2720]: I0905 06:24:38.639183 2720 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 06:24:38.639379 kubelet[2720]: I0905 06:24:38.639219 2720 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 06:24:38.639458 kubelet[2720]: I0905 06:24:38.639380 2720 topology_manager.go:138] "Creating topology manager with none policy" Sep 
5 06:24:38.639458 kubelet[2720]: I0905 06:24:38.639390 2720 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 06:24:38.639458 kubelet[2720]: I0905 06:24:38.639432 2720 state_mem.go:36] "Initialized new in-memory state store" Sep 5 06:24:38.639608 kubelet[2720]: I0905 06:24:38.639591 2720 kubelet.go:446] "Attempting to sync node with API server" Sep 5 06:24:38.639649 kubelet[2720]: I0905 06:24:38.639613 2720 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 06:24:38.640329 kubelet[2720]: I0905 06:24:38.640304 2720 kubelet.go:352] "Adding apiserver pod source" Sep 5 06:24:38.640329 kubelet[2720]: I0905 06:24:38.640320 2720 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 06:24:38.644349 kubelet[2720]: I0905 06:24:38.644279 2720 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 5 06:24:38.645400 kubelet[2720]: I0905 06:24:38.645380 2720 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 06:24:38.646635 kubelet[2720]: I0905 06:24:38.646615 2720 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 06:24:38.646696 kubelet[2720]: I0905 06:24:38.646643 2720 server.go:1287] "Started kubelet" Sep 5 06:24:38.647111 kubelet[2720]: I0905 06:24:38.647044 2720 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 06:24:38.647368 kubelet[2720]: I0905 06:24:38.647349 2720 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 06:24:38.647438 kubelet[2720]: I0905 06:24:38.647402 2720 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 06:24:38.648277 kubelet[2720]: I0905 06:24:38.648229 2720 server.go:479] "Adding debug handlers to kubelet server" Sep 5 06:24:38.648909 kubelet[2720]: I0905 06:24:38.648820 2720 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 06:24:38.649796 kubelet[2720]: I0905 06:24:38.648928 2720 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 06:24:38.650122 kubelet[2720]: I0905 06:24:38.650104 2720 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 06:24:38.650365 kubelet[2720]: I0905 06:24:38.650350 2720 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 06:24:38.650501 kubelet[2720]: I0905 06:24:38.650483 2720 reconciler.go:26] "Reconciler: start to sync state" Sep 5 06:24:38.651317 kubelet[2720]: I0905 06:24:38.651276 2720 factory.go:221] Registration of the systemd container factory successfully Sep 5 06:24:38.651639 kubelet[2720]: I0905 06:24:38.651587 2720 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 06:24:38.653428 kubelet[2720]: E0905 06:24:38.653003 2720 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 06:24:38.654625 kubelet[2720]: I0905 06:24:38.654596 2720 factory.go:221] Registration of the containerd container factory successfully Sep 5 06:24:38.661442 kubelet[2720]: I0905 06:24:38.661387 2720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 06:24:38.664291 kubelet[2720]: I0905 06:24:38.664257 2720 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 06:24:38.664340 kubelet[2720]: I0905 06:24:38.664303 2720 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 06:24:38.664340 kubelet[2720]: I0905 06:24:38.664323 2720 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 06:24:38.664340 kubelet[2720]: I0905 06:24:38.664330 2720 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 06:24:38.665019 kubelet[2720]: E0905 06:24:38.664706 2720 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 06:24:38.687056 kubelet[2720]: I0905 06:24:38.687029 2720 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 06:24:38.687056 kubelet[2720]: I0905 06:24:38.687042 2720 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 06:24:38.687056 kubelet[2720]: I0905 06:24:38.687059 2720 state_mem.go:36] "Initialized new in-memory state store" Sep 5 06:24:38.687198 kubelet[2720]: I0905 06:24:38.687180 2720 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 06:24:38.687232 kubelet[2720]: I0905 06:24:38.687193 2720 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 06:24:38.687232 kubelet[2720]: I0905 06:24:38.687212 2720 policy_none.go:49] "None policy: Start" Sep 5 06:24:38.687232 kubelet[2720]: I0905 06:24:38.687220 2720 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 06:24:38.687232 kubelet[2720]: I0905 06:24:38.687230 2720 state_mem.go:35] "Initializing new in-memory state store" Sep 5 06:24:38.687390 kubelet[2720]: I0905 06:24:38.687373 2720 state_mem.go:75] "Updated machine memory state" Sep 5 06:24:38.691205 kubelet[2720]: I0905 06:24:38.691178 2720 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 06:24:38.691387 kubelet[2720]: I0905 06:24:38.691362 
2720 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 06:24:38.691387 kubelet[2720]: I0905 06:24:38.691380 2720 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 06:24:38.691646 kubelet[2720]: I0905 06:24:38.691619 2720 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 06:24:38.693510 kubelet[2720]: E0905 06:24:38.693479 2720 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 06:24:38.765365 kubelet[2720]: I0905 06:24:38.765248 2720 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 06:24:38.765476 kubelet[2720]: I0905 06:24:38.765406 2720 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:38.765476 kubelet[2720]: I0905 06:24:38.765422 2720 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 06:24:38.769974 kubelet[2720]: E0905 06:24:38.769943 2720 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 06:24:38.770383 kubelet[2720]: E0905 06:24:38.770355 2720 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:38.770383 kubelet[2720]: E0905 06:24:38.770384 2720 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 06:24:38.798314 kubelet[2720]: I0905 06:24:38.798292 2720 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 06:24:38.804601 kubelet[2720]: I0905 06:24:38.804561 2720 kubelet_node_status.go:124] "Node was previously registered" 
node="localhost" Sep 5 06:24:38.804749 kubelet[2720]: I0905 06:24:38.804623 2720 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 06:24:38.851726 kubelet[2720]: I0905 06:24:38.851644 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:38.851726 kubelet[2720]: I0905 06:24:38.851672 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:38.851726 kubelet[2720]: I0905 06:24:38.851693 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d28bd2c6efefbed67a7768821a423ea2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d28bd2c6efefbed67a7768821a423ea2\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:24:38.851726 kubelet[2720]: I0905 06:24:38.851711 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:38.851726 kubelet[2720]: I0905 06:24:38.851726 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:38.851888 kubelet[2720]: I0905 06:24:38.851742 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:24:38.851888 kubelet[2720]: I0905 06:24:38.851758 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 5 06:24:38.851888 kubelet[2720]: I0905 06:24:38.851772 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d28bd2c6efefbed67a7768821a423ea2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d28bd2c6efefbed67a7768821a423ea2\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:24:38.851888 kubelet[2720]: I0905 06:24:38.851787 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d28bd2c6efefbed67a7768821a423ea2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d28bd2c6efefbed67a7768821a423ea2\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:24:39.070895 kubelet[2720]: E0905 06:24:39.070792 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:39.070895 kubelet[2720]: E0905 06:24:39.070806 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:39.070895 kubelet[2720]: E0905 06:24:39.070830 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:39.641281 kubelet[2720]: I0905 06:24:39.641245 2720 apiserver.go:52] "Watching apiserver" Sep 5 06:24:39.650459 kubelet[2720]: I0905 06:24:39.650429 2720 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 06:24:39.674989 kubelet[2720]: I0905 06:24:39.674946 2720 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 06:24:39.675134 kubelet[2720]: E0905 06:24:39.675000 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:39.675272 kubelet[2720]: I0905 06:24:39.675249 2720 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 06:24:39.680171 kubelet[2720]: E0905 06:24:39.680131 2720 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 06:24:39.680171 kubelet[2720]: E0905 06:24:39.680167 2720 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 06:24:39.680329 kubelet[2720]: E0905 06:24:39.680258 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:39.680329 
kubelet[2720]: E0905 06:24:39.680287 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:39.690899 kubelet[2720]: I0905 06:24:39.690835 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.6907872470000003 podStartE2EDuration="3.690787247s" podCreationTimestamp="2025-09-05 06:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:24:39.690759574 +0000 UTC m=+1.105752889" watchObservedRunningTime="2025-09-05 06:24:39.690787247 +0000 UTC m=+1.105780562" Sep 5 06:24:40.093821 kubelet[2720]: I0905 06:24:40.093611 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.09359423 podStartE2EDuration="4.09359423s" podCreationTimestamp="2025-09-05 06:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:24:40.087194221 +0000 UTC m=+1.502187526" watchObservedRunningTime="2025-09-05 06:24:40.09359423 +0000 UTC m=+1.508587545" Sep 5 06:24:40.094246 kubelet[2720]: I0905 06:24:40.094206 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.094198872 podStartE2EDuration="4.094198872s" podCreationTimestamp="2025-09-05 06:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:24:40.093591553 +0000 UTC m=+1.508584858" watchObservedRunningTime="2025-09-05 06:24:40.094198872 +0000 UTC m=+1.509192207" Sep 5 06:24:40.675892 kubelet[2720]: E0905 06:24:40.675859 2720 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:40.676288 kubelet[2720]: E0905 06:24:40.675987 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:42.783512 kubelet[2720]: I0905 06:24:42.783476 2720 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 06:24:42.784005 containerd[1590]: time="2025-09-05T06:24:42.783883125Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 06:24:42.784230 kubelet[2720]: I0905 06:24:42.784138 2720 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 06:24:43.140613 kubelet[2720]: E0905 06:24:43.140501 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:43.774987 systemd[1]: Created slice kubepods-besteffort-pod638ba083_9678_4948_8372_581bcddc2ab0.slice - libcontainer container kubepods-besteffort-pod638ba083_9678_4948_8372_581bcddc2ab0.slice. 
Sep 5 06:24:43.782886 kubelet[2720]: I0905 06:24:43.782848 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/638ba083-9678-4948-8372-581bcddc2ab0-xtables-lock\") pod \"kube-proxy-s5bjz\" (UID: \"638ba083-9678-4948-8372-581bcddc2ab0\") " pod="kube-system/kube-proxy-s5bjz"
Sep 5 06:24:43.782960 kubelet[2720]: I0905 06:24:43.782886 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqtql\" (UniqueName: \"kubernetes.io/projected/638ba083-9678-4948-8372-581bcddc2ab0-kube-api-access-bqtql\") pod \"kube-proxy-s5bjz\" (UID: \"638ba083-9678-4948-8372-581bcddc2ab0\") " pod="kube-system/kube-proxy-s5bjz"
Sep 5 06:24:43.782960 kubelet[2720]: I0905 06:24:43.782908 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/638ba083-9678-4948-8372-581bcddc2ab0-kube-proxy\") pod \"kube-proxy-s5bjz\" (UID: \"638ba083-9678-4948-8372-581bcddc2ab0\") " pod="kube-system/kube-proxy-s5bjz"
Sep 5 06:24:43.782960 kubelet[2720]: I0905 06:24:43.782921 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638ba083-9678-4948-8372-581bcddc2ab0-lib-modules\") pod \"kube-proxy-s5bjz\" (UID: \"638ba083-9678-4948-8372-581bcddc2ab0\") " pod="kube-system/kube-proxy-s5bjz"
Sep 5 06:24:44.093605 systemd[1]: Created slice kubepods-besteffort-podbbb5775e_ab79_41af_b591_77f134818ae3.slice - libcontainer container kubepods-besteffort-podbbb5775e_ab79_41af_b591_77f134818ae3.slice.
Sep 5 06:24:44.096137 kubelet[2720]: E0905 06:24:44.095813 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:24:44.096494 containerd[1590]: time="2025-09-05T06:24:44.096458406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s5bjz,Uid:638ba083-9678-4948-8372-581bcddc2ab0,Namespace:kube-system,Attempt:0,}"
Sep 5 06:24:44.120193 containerd[1590]: time="2025-09-05T06:24:44.120164478Z" level=info msg="connecting to shim cdf0f7e4d165e39ba0573bf2f81b0e19e41b0b85cda08a507f2511a24a8781b4" address="unix:///run/containerd/s/e9204989c677f8c0e6189af3fab3cee778627690848d8d558389842e57f3ba19" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:24:44.142686 systemd[1]: Started cri-containerd-cdf0f7e4d165e39ba0573bf2f81b0e19e41b0b85cda08a507f2511a24a8781b4.scope - libcontainer container cdf0f7e4d165e39ba0573bf2f81b0e19e41b0b85cda08a507f2511a24a8781b4.
Sep 5 06:24:44.165997 containerd[1590]: time="2025-09-05T06:24:44.165968014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s5bjz,Uid:638ba083-9678-4948-8372-581bcddc2ab0,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdf0f7e4d165e39ba0573bf2f81b0e19e41b0b85cda08a507f2511a24a8781b4\""
Sep 5 06:24:44.166567 kubelet[2720]: E0905 06:24:44.166523 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:24:44.169752 containerd[1590]: time="2025-09-05T06:24:44.169708962Z" level=info msg="CreateContainer within sandbox \"cdf0f7e4d165e39ba0573bf2f81b0e19e41b0b85cda08a507f2511a24a8781b4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 5 06:24:44.179846 containerd[1590]: time="2025-09-05T06:24:44.179815821Z" level=info msg="Container 88bb63a46add9dd2125f4b160ec39a65b4d32f5cef09f3dc3a09e8348e3afbb1: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:24:44.183260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount824982296.mount: Deactivated successfully.
Sep 5 06:24:44.185849 kubelet[2720]: I0905 06:24:44.185789 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bbb5775e-ab79-41af-b591-77f134818ae3-var-lib-calico\") pod \"tigera-operator-755d956888-zxjrb\" (UID: \"bbb5775e-ab79-41af-b591-77f134818ae3\") " pod="tigera-operator/tigera-operator-755d956888-zxjrb"
Sep 5 06:24:44.185849 kubelet[2720]: I0905 06:24:44.185838 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8xth\" (UniqueName: \"kubernetes.io/projected/bbb5775e-ab79-41af-b591-77f134818ae3-kube-api-access-q8xth\") pod \"tigera-operator-755d956888-zxjrb\" (UID: \"bbb5775e-ab79-41af-b591-77f134818ae3\") " pod="tigera-operator/tigera-operator-755d956888-zxjrb"
Sep 5 06:24:44.190732 containerd[1590]: time="2025-09-05T06:24:44.190694099Z" level=info msg="CreateContainer within sandbox \"cdf0f7e4d165e39ba0573bf2f81b0e19e41b0b85cda08a507f2511a24a8781b4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"88bb63a46add9dd2125f4b160ec39a65b4d32f5cef09f3dc3a09e8348e3afbb1\""
Sep 5 06:24:44.191221 containerd[1590]: time="2025-09-05T06:24:44.191178488Z" level=info msg="StartContainer for \"88bb63a46add9dd2125f4b160ec39a65b4d32f5cef09f3dc3a09e8348e3afbb1\""
Sep 5 06:24:44.192486 containerd[1590]: time="2025-09-05T06:24:44.192458134Z" level=info msg="connecting to shim 88bb63a46add9dd2125f4b160ec39a65b4d32f5cef09f3dc3a09e8348e3afbb1" address="unix:///run/containerd/s/e9204989c677f8c0e6189af3fab3cee778627690848d8d558389842e57f3ba19" protocol=ttrpc version=3
Sep 5 06:24:44.212689 systemd[1]: Started cri-containerd-88bb63a46add9dd2125f4b160ec39a65b4d32f5cef09f3dc3a09e8348e3afbb1.scope - libcontainer container 88bb63a46add9dd2125f4b160ec39a65b4d32f5cef09f3dc3a09e8348e3afbb1.
Sep 5 06:24:44.250649 containerd[1590]: time="2025-09-05T06:24:44.250598358Z" level=info msg="StartContainer for \"88bb63a46add9dd2125f4b160ec39a65b4d32f5cef09f3dc3a09e8348e3afbb1\" returns successfully"
Sep 5 06:24:44.396693 containerd[1590]: time="2025-09-05T06:24:44.396593659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-zxjrb,Uid:bbb5775e-ab79-41af-b591-77f134818ae3,Namespace:tigera-operator,Attempt:0,}"
Sep 5 06:24:44.416439 containerd[1590]: time="2025-09-05T06:24:44.416404808Z" level=info msg="connecting to shim 223a65221f8e69ec412a90a3aaa280028837bd98672a17e59c5a8c76b3fcc2b7" address="unix:///run/containerd/s/a67bdbd020d8288234153b154bb7f9418297a3d1a3c03070d752b687f526fc36" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:24:44.443654 systemd[1]: Started cri-containerd-223a65221f8e69ec412a90a3aaa280028837bd98672a17e59c5a8c76b3fcc2b7.scope - libcontainer container 223a65221f8e69ec412a90a3aaa280028837bd98672a17e59c5a8c76b3fcc2b7.
Sep 5 06:24:44.483270 containerd[1590]: time="2025-09-05T06:24:44.483226922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-zxjrb,Uid:bbb5775e-ab79-41af-b591-77f134818ae3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"223a65221f8e69ec412a90a3aaa280028837bd98672a17e59c5a8c76b3fcc2b7\""
Sep 5 06:24:44.485105 containerd[1590]: time="2025-09-05T06:24:44.485069518Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 5 06:24:44.683980 kubelet[2720]: E0905 06:24:44.683872 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:24:44.691020 kubelet[2720]: I0905 06:24:44.690983 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s5bjz" podStartSLOduration=1.6909725180000001 podStartE2EDuration="1.690972518s" podCreationTimestamp="2025-09-05 06:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:24:44.690785502 +0000 UTC m=+6.105778817" watchObservedRunningTime="2025-09-05 06:24:44.690972518 +0000 UTC m=+6.105965833"
Sep 5 06:24:45.004655 kubelet[2720]: E0905 06:24:45.004524 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:24:45.687760 kubelet[2720]: E0905 06:24:45.687719 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:24:46.381298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284894404.mount: Deactivated successfully.
Sep 5 06:24:46.699438 containerd[1590]: time="2025-09-05T06:24:46.699319849Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:46.700096 containerd[1590]: time="2025-09-05T06:24:46.700044350Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 5 06:24:46.701163 containerd[1590]: time="2025-09-05T06:24:46.701129317Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:46.703163 containerd[1590]: time="2025-09-05T06:24:46.703123674Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:24:46.703757 containerd[1590]: time="2025-09-05T06:24:46.703721134Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.218613022s"
Sep 5 06:24:46.703796 containerd[1590]: time="2025-09-05T06:24:46.703753480Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 5 06:24:46.705474 containerd[1590]: time="2025-09-05T06:24:46.705450020Z" level=info msg="CreateContainer within sandbox \"223a65221f8e69ec412a90a3aaa280028837bd98672a17e59c5a8c76b3fcc2b7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 5 06:24:46.713127 containerd[1590]: time="2025-09-05T06:24:46.713086487Z" level=info msg="Container 548eabc77a1ca24161c07887bd07902c9348c98b5df996902a6daedf5391f43f: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:24:46.718448 containerd[1590]: time="2025-09-05T06:24:46.718412227Z" level=info msg="CreateContainer within sandbox \"223a65221f8e69ec412a90a3aaa280028837bd98672a17e59c5a8c76b3fcc2b7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"548eabc77a1ca24161c07887bd07902c9348c98b5df996902a6daedf5391f43f\""
Sep 5 06:24:46.718771 containerd[1590]: time="2025-09-05T06:24:46.718743755Z" level=info msg="StartContainer for \"548eabc77a1ca24161c07887bd07902c9348c98b5df996902a6daedf5391f43f\""
Sep 5 06:24:46.719435 containerd[1590]: time="2025-09-05T06:24:46.719413265Z" level=info msg="connecting to shim 548eabc77a1ca24161c07887bd07902c9348c98b5df996902a6daedf5391f43f" address="unix:///run/containerd/s/a67bdbd020d8288234153b154bb7f9418297a3d1a3c03070d752b687f526fc36" protocol=ttrpc version=3
Sep 5 06:24:46.774660 systemd[1]: Started cri-containerd-548eabc77a1ca24161c07887bd07902c9348c98b5df996902a6daedf5391f43f.scope - libcontainer container 548eabc77a1ca24161c07887bd07902c9348c98b5df996902a6daedf5391f43f.
Sep 5 06:24:46.803648 containerd[1590]: time="2025-09-05T06:24:46.803602579Z" level=info msg="StartContainer for \"548eabc77a1ca24161c07887bd07902c9348c98b5df996902a6daedf5391f43f\" returns successfully"
Sep 5 06:24:47.697781 kubelet[2720]: I0905 06:24:47.697718 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-zxjrb" podStartSLOduration=1.477855134 podStartE2EDuration="3.69770203s" podCreationTimestamp="2025-09-05 06:24:44 +0000 UTC" firstStartedPulling="2025-09-05 06:24:44.484517193 +0000 UTC m=+5.899510509" lastFinishedPulling="2025-09-05 06:24:46.70436409 +0000 UTC m=+8.119357405" observedRunningTime="2025-09-05 06:24:47.697634221 +0000 UTC m=+9.112627536" watchObservedRunningTime="2025-09-05 06:24:47.69770203 +0000 UTC m=+9.112695335"
Sep 5 06:24:49.290561 kubelet[2720]: E0905 06:24:49.290108 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:24:49.695363 kubelet[2720]: E0905 06:24:49.695249 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:24:52.330257 sudo[1801]: pam_unix(sudo:session): session closed for user root
Sep 5 06:24:52.331587 sshd[1800]: Connection closed by 10.0.0.1 port 55856
Sep 5 06:24:52.332369 sshd-session[1797]: pam_unix(sshd:session): session closed for user core
Sep 5 06:24:52.337270 systemd[1]: sshd@6-10.0.0.4:22-10.0.0.1:55856.service: Deactivated successfully.
Sep 5 06:24:52.344250 systemd[1]: session-7.scope: Deactivated successfully.
Sep 5 06:24:52.344780 systemd[1]: session-7.scope: Consumed 3.604s CPU time, 227.5M memory peak.
Sep 5 06:24:52.347467 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit.
Sep 5 06:24:52.350816 systemd-logind[1573]: Removed session 7.
Sep 5 06:24:53.145603 kubelet[2720]: E0905 06:24:53.145509 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:24:54.788753 systemd[1]: Created slice kubepods-besteffort-podb13fcdbe_a309_454f_b646_c0aec2d59ecb.slice - libcontainer container kubepods-besteffort-podb13fcdbe_a309_454f_b646_c0aec2d59ecb.slice.
Sep 5 06:24:54.855409 kubelet[2720]: I0905 06:24:54.855356 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b13fcdbe-a309-454f-b646-c0aec2d59ecb-typha-certs\") pod \"calico-typha-646c87784c-2r4gd\" (UID: \"b13fcdbe-a309-454f-b646-c0aec2d59ecb\") " pod="calico-system/calico-typha-646c87784c-2r4gd"
Sep 5 06:24:54.855409 kubelet[2720]: I0905 06:24:54.855392 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sptf2\" (UniqueName: \"kubernetes.io/projected/b13fcdbe-a309-454f-b646-c0aec2d59ecb-kube-api-access-sptf2\") pod \"calico-typha-646c87784c-2r4gd\" (UID: \"b13fcdbe-a309-454f-b646-c0aec2d59ecb\") " pod="calico-system/calico-typha-646c87784c-2r4gd"
Sep 5 06:24:54.855409 kubelet[2720]: I0905 06:24:54.855414 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b13fcdbe-a309-454f-b646-c0aec2d59ecb-tigera-ca-bundle\") pod \"calico-typha-646c87784c-2r4gd\" (UID: \"b13fcdbe-a309-454f-b646-c0aec2d59ecb\") " pod="calico-system/calico-typha-646c87784c-2r4gd"
Sep 5 06:24:55.093506 kubelet[2720]: E0905 06:24:55.093401 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:24:55.093861 containerd[1590]: time="2025-09-05T06:24:55.093799101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-646c87784c-2r4gd,Uid:b13fcdbe-a309-454f-b646-c0aec2d59ecb,Namespace:calico-system,Attempt:0,}"
Sep 5 06:24:55.264644 systemd[1]: Created slice kubepods-besteffort-pod0ba16447_194f_4d00_9ac7_9b15b21a01ad.slice - libcontainer container kubepods-besteffort-pod0ba16447_194f_4d00_9ac7_9b15b21a01ad.slice.
Sep 5 06:24:55.280177 containerd[1590]: time="2025-09-05T06:24:55.280131852Z" level=info msg="connecting to shim ae8d8af5e1f495e0922e09484ba0054ac998f0e5e600695c6b052f41a78c8981" address="unix:///run/containerd/s/b231f3ead0d416348b607c61dedba066944e851a0d56ca0955b85b56dfb4afe7" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:24:55.307665 systemd[1]: Started cri-containerd-ae8d8af5e1f495e0922e09484ba0054ac998f0e5e600695c6b052f41a78c8981.scope - libcontainer container ae8d8af5e1f495e0922e09484ba0054ac998f0e5e600695c6b052f41a78c8981.
Sep 5 06:24:55.348080 containerd[1590]: time="2025-09-05T06:24:55.347990366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-646c87784c-2r4gd,Uid:b13fcdbe-a309-454f-b646-c0aec2d59ecb,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae8d8af5e1f495e0922e09484ba0054ac998f0e5e600695c6b052f41a78c8981\""
Sep 5 06:24:55.350111 kubelet[2720]: E0905 06:24:55.350079 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:24:55.350832 containerd[1590]: time="2025-09-05T06:24:55.350802498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 5 06:24:55.358541 kubelet[2720]: I0905 06:24:55.358494 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ba16447-194f-4d00-9ac7-9b15b21a01ad-xtables-lock\") pod \"calico-node-6jnmc\" (UID: \"0ba16447-194f-4d00-9ac7-9b15b21a01ad\") " pod="calico-system/calico-node-6jnmc"
Sep 5 06:24:55.358606 kubelet[2720]: I0905 06:24:55.358543 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0ba16447-194f-4d00-9ac7-9b15b21a01ad-cni-bin-dir\") pod \"calico-node-6jnmc\" (UID: \"0ba16447-194f-4d00-9ac7-9b15b21a01ad\") " pod="calico-system/calico-node-6jnmc"
Sep 5 06:24:55.358606 kubelet[2720]: I0905 06:24:55.358562 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0ba16447-194f-4d00-9ac7-9b15b21a01ad-var-lib-calico\") pod \"calico-node-6jnmc\" (UID: \"0ba16447-194f-4d00-9ac7-9b15b21a01ad\") " pod="calico-system/calico-node-6jnmc"
Sep 5 06:24:55.358662 kubelet[2720]: I0905 06:24:55.358637 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0ba16447-194f-4d00-9ac7-9b15b21a01ad-cni-log-dir\") pod \"calico-node-6jnmc\" (UID: \"0ba16447-194f-4d00-9ac7-9b15b21a01ad\") " pod="calico-system/calico-node-6jnmc"
Sep 5 06:24:55.358692 kubelet[2720]: I0905 06:24:55.358672 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0ba16447-194f-4d00-9ac7-9b15b21a01ad-policysync\") pod \"calico-node-6jnmc\" (UID: \"0ba16447-194f-4d00-9ac7-9b15b21a01ad\") " pod="calico-system/calico-node-6jnmc"
Sep 5 06:24:55.358714 kubelet[2720]: I0905 06:24:55.358696 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtqbb\" (UniqueName: \"kubernetes.io/projected/0ba16447-194f-4d00-9ac7-9b15b21a01ad-kube-api-access-mtqbb\") pod \"calico-node-6jnmc\" (UID: \"0ba16447-194f-4d00-9ac7-9b15b21a01ad\") " pod="calico-system/calico-node-6jnmc"
Sep 5 06:24:55.358742 kubelet[2720]: I0905 06:24:55.358715 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ba16447-194f-4d00-9ac7-9b15b21a01ad-tigera-ca-bundle\") pod \"calico-node-6jnmc\" (UID: \"0ba16447-194f-4d00-9ac7-9b15b21a01ad\") " pod="calico-system/calico-node-6jnmc"
Sep 5 06:24:55.358742 kubelet[2720]: I0905 06:24:55.358735 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0ba16447-194f-4d00-9ac7-9b15b21a01ad-var-run-calico\") pod \"calico-node-6jnmc\" (UID: \"0ba16447-194f-4d00-9ac7-9b15b21a01ad\") " pod="calico-system/calico-node-6jnmc"
Sep 5 06:24:55.358790 kubelet[2720]: I0905 06:24:55.358752 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0ba16447-194f-4d00-9ac7-9b15b21a01ad-flexvol-driver-host\") pod \"calico-node-6jnmc\" (UID: \"0ba16447-194f-4d00-9ac7-9b15b21a01ad\") " pod="calico-system/calico-node-6jnmc"
Sep 5 06:24:55.358790 kubelet[2720]: I0905 06:24:55.358765 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ba16447-194f-4d00-9ac7-9b15b21a01ad-lib-modules\") pod \"calico-node-6jnmc\" (UID: \"0ba16447-194f-4d00-9ac7-9b15b21a01ad\") " pod="calico-system/calico-node-6jnmc"
Sep 5 06:24:55.358790 kubelet[2720]: I0905 06:24:55.358778 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0ba16447-194f-4d00-9ac7-9b15b21a01ad-cni-net-dir\") pod \"calico-node-6jnmc\" (UID: \"0ba16447-194f-4d00-9ac7-9b15b21a01ad\") " pod="calico-system/calico-node-6jnmc"
Sep 5 06:24:55.358862 kubelet[2720]: I0905 06:24:55.358791 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0ba16447-194f-4d00-9ac7-9b15b21a01ad-node-certs\") pod \"calico-node-6jnmc\" (UID: \"0ba16447-194f-4d00-9ac7-9b15b21a01ad\") " pod="calico-system/calico-node-6jnmc"
Sep 5 06:24:55.460868 kubelet[2720]: E0905 06:24:55.460809 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.460868 kubelet[2720]: W0905 06:24:55.460826 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.460868 kubelet[2720]: E0905 06:24:55.460858 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.461193 kubelet[2720]: E0905 06:24:55.461172 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.461193 kubelet[2720]: W0905 06:24:55.461191 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.461255 kubelet[2720]: E0905 06:24:55.461209 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.461457 kubelet[2720]: E0905 06:24:55.461442 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.461457 kubelet[2720]: W0905 06:24:55.461452 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.461584 kubelet[2720]: E0905 06:24:55.461460 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.466142 kubelet[2720]: E0905 06:24:55.466110 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.466142 kubelet[2720]: W0905 06:24:55.466139 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.466265 kubelet[2720]: E0905 06:24:55.466165 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.466400 kubelet[2720]: E0905 06:24:55.466369 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.466400 kubelet[2720]: W0905 06:24:55.466389 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.466559 kubelet[2720]: E0905 06:24:55.466411 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.520507 kubelet[2720]: E0905 06:24:55.520261 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm52w" podUID="d7097e42-b7ac-47fd-82a8-b7a571e5a93b"
Sep 5 06:24:55.550348 kubelet[2720]: E0905 06:24:55.550314 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.550348 kubelet[2720]: W0905 06:24:55.550337 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.550348 kubelet[2720]: E0905 06:24:55.550359 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.550972 kubelet[2720]: E0905 06:24:55.550598 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.550972 kubelet[2720]: W0905 06:24:55.550609 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.550972 kubelet[2720]: E0905 06:24:55.550618 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.550972 kubelet[2720]: E0905 06:24:55.550784 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.550972 kubelet[2720]: W0905 06:24:55.550791 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.550972 kubelet[2720]: E0905 06:24:55.550799 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.551255 kubelet[2720]: E0905 06:24:55.551229 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.551255 kubelet[2720]: W0905 06:24:55.551246 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.551314 kubelet[2720]: E0905 06:24:55.551257 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.551615 kubelet[2720]: E0905 06:24:55.551472 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.551615 kubelet[2720]: W0905 06:24:55.551492 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.551615 kubelet[2720]: E0905 06:24:55.551514 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.551752 kubelet[2720]: E0905 06:24:55.551741 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.551809 kubelet[2720]: W0905 06:24:55.551798 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.551917 kubelet[2720]: E0905 06:24:55.551848 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.552496 kubelet[2720]: E0905 06:24:55.552078 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.552496 kubelet[2720]: W0905 06:24:55.552104 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.552496 kubelet[2720]: E0905 06:24:55.552126 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.552496 kubelet[2720]: E0905 06:24:55.552331 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.552496 kubelet[2720]: W0905 06:24:55.552338 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.552496 kubelet[2720]: E0905 06:24:55.552346 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.554892 kubelet[2720]: E0905 06:24:55.554819 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.554892 kubelet[2720]: W0905 06:24:55.554839 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.554892 kubelet[2720]: E0905 06:24:55.554851 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.555305 kubelet[2720]: E0905 06:24:55.555294 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.555390 kubelet[2720]: W0905 06:24:55.555379 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.555548 kubelet[2720]: E0905 06:24:55.555457 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.555766 kubelet[2720]: E0905 06:24:55.555733 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.555844 kubelet[2720]: W0905 06:24:55.555814 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.555920 kubelet[2720]: E0905 06:24:55.555884 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.556258 kubelet[2720]: E0905 06:24:55.556228 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.556304 kubelet[2720]: W0905 06:24:55.556246 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.556304 kubelet[2720]: E0905 06:24:55.556281 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.556869 kubelet[2720]: E0905 06:24:55.556837 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.556869 kubelet[2720]: W0905 06:24:55.556865 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.557358 kubelet[2720]: E0905 06:24:55.556892 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.557358 kubelet[2720]: E0905 06:24:55.557042 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.557358 kubelet[2720]: W0905 06:24:55.557048 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.557358 kubelet[2720]: E0905 06:24:55.557056 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 06:24:55.557358 kubelet[2720]: E0905 06:24:55.557236 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 06:24:55.557358 kubelet[2720]: W0905 06:24:55.557252 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 06:24:55.557358 kubelet[2720]: E0905 06:24:55.557263 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 5 06:24:55.557514 kubelet[2720]: E0905 06:24:55.557435 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.557514 kubelet[2720]: W0905 06:24:55.557443 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.557514 kubelet[2720]: E0905 06:24:55.557450 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:55.557709 kubelet[2720]: E0905 06:24:55.557686 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.557709 kubelet[2720]: W0905 06:24:55.557701 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.557765 kubelet[2720]: E0905 06:24:55.557711 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.557923 kubelet[2720]: E0905 06:24:55.557909 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.557923 kubelet[2720]: W0905 06:24:55.557919 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.558011 kubelet[2720]: E0905 06:24:55.557928 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:55.558095 kubelet[2720]: E0905 06:24:55.558080 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.558095 kubelet[2720]: W0905 06:24:55.558090 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.558213 kubelet[2720]: E0905 06:24:55.558097 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.558297 kubelet[2720]: E0905 06:24:55.558267 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.558297 kubelet[2720]: W0905 06:24:55.558277 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.558297 kubelet[2720]: E0905 06:24:55.558285 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:55.559496 kubelet[2720]: E0905 06:24:55.559480 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.559496 kubelet[2720]: W0905 06:24:55.559491 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.559574 kubelet[2720]: E0905 06:24:55.559522 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.559574 kubelet[2720]: I0905 06:24:55.559571 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7097e42-b7ac-47fd-82a8-b7a571e5a93b-kubelet-dir\") pod \"csi-node-driver-wm52w\" (UID: \"d7097e42-b7ac-47fd-82a8-b7a571e5a93b\") " pod="calico-system/csi-node-driver-wm52w" Sep 5 06:24:55.559821 kubelet[2720]: E0905 06:24:55.559805 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.559821 kubelet[2720]: W0905 06:24:55.559816 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.559883 kubelet[2720]: E0905 06:24:55.559831 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.559883 kubelet[2720]: I0905 06:24:55.559847 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d7097e42-b7ac-47fd-82a8-b7a571e5a93b-varrun\") pod \"csi-node-driver-wm52w\" (UID: \"d7097e42-b7ac-47fd-82a8-b7a571e5a93b\") " pod="calico-system/csi-node-driver-wm52w" Sep 5 06:24:55.560085 kubelet[2720]: E0905 06:24:55.560067 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.560085 kubelet[2720]: W0905 06:24:55.560081 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.560157 kubelet[2720]: E0905 06:24:55.560097 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:55.560383 kubelet[2720]: E0905 06:24:55.560355 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.560383 kubelet[2720]: W0905 06:24:55.560371 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.560558 kubelet[2720]: E0905 06:24:55.560393 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.560834 kubelet[2720]: E0905 06:24:55.560606 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.560834 kubelet[2720]: W0905 06:24:55.560618 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.560834 kubelet[2720]: E0905 06:24:55.560631 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:55.560834 kubelet[2720]: I0905 06:24:55.560652 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d7097e42-b7ac-47fd-82a8-b7a571e5a93b-socket-dir\") pod \"csi-node-driver-wm52w\" (UID: \"d7097e42-b7ac-47fd-82a8-b7a571e5a93b\") " pod="calico-system/csi-node-driver-wm52w" Sep 5 06:24:55.560933 kubelet[2720]: E0905 06:24:55.560849 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.560933 kubelet[2720]: W0905 06:24:55.560862 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.560933 kubelet[2720]: E0905 06:24:55.560878 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.561087 kubelet[2720]: E0905 06:24:55.561070 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.561087 kubelet[2720]: W0905 06:24:55.561080 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.561138 kubelet[2720]: E0905 06:24:55.561093 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:55.561305 kubelet[2720]: E0905 06:24:55.561288 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.561305 kubelet[2720]: W0905 06:24:55.561300 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.561365 kubelet[2720]: E0905 06:24:55.561315 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.561365 kubelet[2720]: I0905 06:24:55.561332 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d7097e42-b7ac-47fd-82a8-b7a571e5a93b-registration-dir\") pod \"csi-node-driver-wm52w\" (UID: \"d7097e42-b7ac-47fd-82a8-b7a571e5a93b\") " pod="calico-system/csi-node-driver-wm52w" Sep 5 06:24:55.561510 kubelet[2720]: E0905 06:24:55.561494 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.561510 kubelet[2720]: W0905 06:24:55.561505 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.561609 kubelet[2720]: E0905 06:24:55.561518 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.561609 kubelet[2720]: I0905 06:24:55.561598 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb6rr\" (UniqueName: \"kubernetes.io/projected/d7097e42-b7ac-47fd-82a8-b7a571e5a93b-kube-api-access-pb6rr\") pod \"csi-node-driver-wm52w\" (UID: \"d7097e42-b7ac-47fd-82a8-b7a571e5a93b\") " pod="calico-system/csi-node-driver-wm52w" Sep 5 06:24:55.561792 kubelet[2720]: E0905 06:24:55.561771 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.561792 kubelet[2720]: W0905 06:24:55.561786 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.561839 kubelet[2720]: E0905 06:24:55.561804 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:55.561966 kubelet[2720]: E0905 06:24:55.561952 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.561966 kubelet[2720]: W0905 06:24:55.561961 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.562054 kubelet[2720]: E0905 06:24:55.561972 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.562205 kubelet[2720]: E0905 06:24:55.562175 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.562205 kubelet[2720]: W0905 06:24:55.562191 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.562205 kubelet[2720]: E0905 06:24:55.562210 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:55.562496 kubelet[2720]: E0905 06:24:55.562470 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.562496 kubelet[2720]: W0905 06:24:55.562481 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.562496 kubelet[2720]: E0905 06:24:55.562497 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.562763 kubelet[2720]: E0905 06:24:55.562740 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.562763 kubelet[2720]: W0905 06:24:55.562755 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.562763 kubelet[2720]: E0905 06:24:55.562766 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:55.562938 kubelet[2720]: E0905 06:24:55.562923 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.562938 kubelet[2720]: W0905 06:24:55.562933 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.563041 kubelet[2720]: E0905 06:24:55.562940 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.568618 containerd[1590]: time="2025-09-05T06:24:55.568585111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6jnmc,Uid:0ba16447-194f-4d00-9ac7-9b15b21a01ad,Namespace:calico-system,Attempt:0,}"
Sep 5 06:24:55.591478 containerd[1590]: time="2025-09-05T06:24:55.591444430Z" level=info msg="connecting to shim 9513bde9e1e96b5eeb02ea7c41adfd61ce63d7b977b5314044aed230466023fa" address="unix:///run/containerd/s/6caac75a188e4a370ddb756aa4034888ec0462db31304e4e2e6a9f8dda905125" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:24:55.620661 systemd[1]: Started cri-containerd-9513bde9e1e96b5eeb02ea7c41adfd61ce63d7b977b5314044aed230466023fa.scope - libcontainer container 9513bde9e1e96b5eeb02ea7c41adfd61ce63d7b977b5314044aed230466023fa.
Sep 5 06:24:55.647589 containerd[1590]: time="2025-09-05T06:24:55.647548680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6jnmc,Uid:0ba16447-194f-4d00-9ac7-9b15b21a01ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"9513bde9e1e96b5eeb02ea7c41adfd61ce63d7b977b5314044aed230466023fa\""
Sep 5 06:24:55.671675 kubelet[2720]: E0905 06:24:55.671659 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.671675 kubelet[2720]: W0905 06:24:55.671669 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.671726 kubelet[2720]: E0905 06:24:55.671684 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.671911 kubelet[2720]: E0905 06:24:55.671896 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.671911 kubelet[2720]: W0905 06:24:55.671905 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.671978 kubelet[2720]: E0905 06:24:55.671919 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:55.672153 kubelet[2720]: E0905 06:24:55.672135 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.672183 kubelet[2720]: W0905 06:24:55.672167 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.672220 kubelet[2720]: E0905 06:24:55.672182 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:55.680224 kubelet[2720]: E0905 06:24:55.680149 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:55.680224 kubelet[2720]: W0905 06:24:55.680160 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:55.680224 kubelet[2720]: E0905 06:24:55.680168 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:55.843849 update_engine[1578]: I20250905 06:24:55.843782 1578 update_attempter.cc:509] Updating boot flags... Sep 5 06:24:56.748063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2875299320.mount: Deactivated successfully. Sep 5 06:24:57.664887 kubelet[2720]: E0905 06:24:57.664837 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm52w" podUID="d7097e42-b7ac-47fd-82a8-b7a571e5a93b" Sep 5 06:24:58.493168 containerd[1590]: time="2025-09-05T06:24:58.493113946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:24:58.493831 containerd[1590]: time="2025-09-05T06:24:58.493799964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 5 06:24:58.494919 containerd[1590]: time="2025-09-05T06:24:58.494872973Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 5 06:24:58.496856 containerd[1590]: time="2025-09-05T06:24:58.496818178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:24:58.497413 containerd[1590]: time="2025-09-05T06:24:58.497357794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.14652061s" Sep 5 06:24:58.497413 containerd[1590]: time="2025-09-05T06:24:58.497394372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 5 06:24:58.498978 containerd[1590]: time="2025-09-05T06:24:58.498496332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 06:24:58.508025 containerd[1590]: time="2025-09-05T06:24:58.507991165Z" level=info msg="CreateContainer within sandbox \"ae8d8af5e1f495e0922e09484ba0054ac998f0e5e600695c6b052f41a78c8981\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 06:24:58.515566 containerd[1590]: time="2025-09-05T06:24:58.515522623Z" level=info msg="Container 828c3e348a3faba56eca4b652bb656126ffc923e5ac20eea539c546cae217746: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:24:58.523065 containerd[1590]: time="2025-09-05T06:24:58.523026663Z" level=info msg="CreateContainer within sandbox \"ae8d8af5e1f495e0922e09484ba0054ac998f0e5e600695c6b052f41a78c8981\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"828c3e348a3faba56eca4b652bb656126ffc923e5ac20eea539c546cae217746\"" Sep 5 06:24:58.523408 containerd[1590]: 
time="2025-09-05T06:24:58.523373175Z" level=info msg="StartContainer for \"828c3e348a3faba56eca4b652bb656126ffc923e5ac20eea539c546cae217746\"" Sep 5 06:24:58.524302 containerd[1590]: time="2025-09-05T06:24:58.524275386Z" level=info msg="connecting to shim 828c3e348a3faba56eca4b652bb656126ffc923e5ac20eea539c546cae217746" address="unix:///run/containerd/s/b231f3ead0d416348b607c61dedba066944e851a0d56ca0955b85b56dfb4afe7" protocol=ttrpc version=3 Sep 5 06:24:58.548679 systemd[1]: Started cri-containerd-828c3e348a3faba56eca4b652bb656126ffc923e5ac20eea539c546cae217746.scope - libcontainer container 828c3e348a3faba56eca4b652bb656126ffc923e5ac20eea539c546cae217746. Sep 5 06:24:58.593916 containerd[1590]: time="2025-09-05T06:24:58.593864769Z" level=info msg="StartContainer for \"828c3e348a3faba56eca4b652bb656126ffc923e5ac20eea539c546cae217746\" returns successfully" Sep 5 06:24:58.714380 kubelet[2720]: E0905 06:24:58.714336 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:58.722763 kubelet[2720]: I0905 06:24:58.722700 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-646c87784c-2r4gd" podStartSLOduration=1.575028306 podStartE2EDuration="4.722686176s" podCreationTimestamp="2025-09-05 06:24:54 +0000 UTC" firstStartedPulling="2025-09-05 06:24:55.350491459 +0000 UTC m=+16.765484774" lastFinishedPulling="2025-09-05 06:24:58.498149318 +0000 UTC m=+19.913142644" observedRunningTime="2025-09-05 06:24:58.722057771 +0000 UTC m=+20.137051086" watchObservedRunningTime="2025-09-05 06:24:58.722686176 +0000 UTC m=+20.137679491" Sep 5 06:24:58.778694 kubelet[2720]: E0905 06:24:58.778430 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:58.778694 kubelet[2720]: W0905 06:24:58.778552 2720 driver-call.go:149] FlexVolume: 
driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:58.778694 kubelet[2720]: E0905 06:24:58.778574 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.664805 kubelet[2720]: E0905 06:24:59.664752 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm52w" podUID="d7097e42-b7ac-47fd-82a8-b7a571e5a93b" Sep 5 06:24:59.715661 kubelet[2720]: I0905 06:24:59.715611 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:24:59.716078 kubelet[2720]: E0905 06:24:59.715963 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:24:59.791510 kubelet[2720]: E0905 06:24:59.791473 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.791510 kubelet[2720]: W0905 06:24:59.791499 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.791510 kubelet[2720]: E0905 06:24:59.791520 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.791780 kubelet[2720]: E0905 06:24:59.791763 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.791780 kubelet[2720]: W0905 06:24:59.791774 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.791840 kubelet[2720]: E0905 06:24:59.791783 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.792033 kubelet[2720]: E0905 06:24:59.791993 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.792033 kubelet[2720]: W0905 06:24:59.792008 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.792033 kubelet[2720]: E0905 06:24:59.792016 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.792291 kubelet[2720]: E0905 06:24:59.792267 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.792291 kubelet[2720]: W0905 06:24:59.792281 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.792291 kubelet[2720]: E0905 06:24:59.792289 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.792492 kubelet[2720]: E0905 06:24:59.792469 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.792522 kubelet[2720]: W0905 06:24:59.792511 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.792562 kubelet[2720]: E0905 06:24:59.792521 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.792724 kubelet[2720]: E0905 06:24:59.792702 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.792724 kubelet[2720]: W0905 06:24:59.792717 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.792724 kubelet[2720]: E0905 06:24:59.792724 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.792894 kubelet[2720]: E0905 06:24:59.792872 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.792923 kubelet[2720]: W0905 06:24:59.792886 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.792923 kubelet[2720]: E0905 06:24:59.792916 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.793116 kubelet[2720]: E0905 06:24:59.793091 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.793116 kubelet[2720]: W0905 06:24:59.793104 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.793116 kubelet[2720]: E0905 06:24:59.793114 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.793362 kubelet[2720]: E0905 06:24:59.793331 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.793399 kubelet[2720]: W0905 06:24:59.793359 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.793399 kubelet[2720]: E0905 06:24:59.793389 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.793651 kubelet[2720]: E0905 06:24:59.793620 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.793651 kubelet[2720]: W0905 06:24:59.793644 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.793700 kubelet[2720]: E0905 06:24:59.793653 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.793826 kubelet[2720]: E0905 06:24:59.793811 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.793826 kubelet[2720]: W0905 06:24:59.793821 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.793876 kubelet[2720]: E0905 06:24:59.793829 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.794007 kubelet[2720]: E0905 06:24:59.793990 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.794007 kubelet[2720]: W0905 06:24:59.794002 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.794060 kubelet[2720]: E0905 06:24:59.794009 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.794229 kubelet[2720]: E0905 06:24:59.794206 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.794229 kubelet[2720]: W0905 06:24:59.794216 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.794293 kubelet[2720]: E0905 06:24:59.794233 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.794454 kubelet[2720]: E0905 06:24:59.794432 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.794482 kubelet[2720]: W0905 06:24:59.794466 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.794482 kubelet[2720]: E0905 06:24:59.794478 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.794753 kubelet[2720]: E0905 06:24:59.794733 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.794753 kubelet[2720]: W0905 06:24:59.794747 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.794814 kubelet[2720]: E0905 06:24:59.794756 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.797147 kubelet[2720]: E0905 06:24:59.797116 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.797147 kubelet[2720]: W0905 06:24:59.797142 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.797215 kubelet[2720]: E0905 06:24:59.797152 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.797474 kubelet[2720]: E0905 06:24:59.797456 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.797474 kubelet[2720]: W0905 06:24:59.797471 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.797564 kubelet[2720]: E0905 06:24:59.797486 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.797879 kubelet[2720]: E0905 06:24:59.797843 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.798030 kubelet[2720]: W0905 06:24:59.797868 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.798030 kubelet[2720]: E0905 06:24:59.798012 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.798394 kubelet[2720]: E0905 06:24:59.798378 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.798394 kubelet[2720]: W0905 06:24:59.798390 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.798465 kubelet[2720]: E0905 06:24:59.798408 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.798744 kubelet[2720]: E0905 06:24:59.798728 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.798744 kubelet[2720]: W0905 06:24:59.798739 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.798797 kubelet[2720]: E0905 06:24:59.798755 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.799748 kubelet[2720]: E0905 06:24:59.799036 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.799748 kubelet[2720]: W0905 06:24:59.799044 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.799748 kubelet[2720]: E0905 06:24:59.799059 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.799748 kubelet[2720]: E0905 06:24:59.799512 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.799748 kubelet[2720]: W0905 06:24:59.799561 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.799748 kubelet[2720]: E0905 06:24:59.799715 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.799748 kubelet[2720]: W0905 06:24:59.799721 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.800006 kubelet[2720]: E0905 06:24:59.799992 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.800123 kubelet[2720]: E0905 06:24:59.800009 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.800123 kubelet[2720]: W0905 06:24:59.800110 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.800123 kubelet[2720]: E0905 06:24:59.800120 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.800291 kubelet[2720]: E0905 06:24:59.800039 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.800345 kubelet[2720]: E0905 06:24:59.800328 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.800345 kubelet[2720]: W0905 06:24:59.800341 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.800406 kubelet[2720]: E0905 06:24:59.800350 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.800569 kubelet[2720]: E0905 06:24:59.800554 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.800569 kubelet[2720]: W0905 06:24:59.800567 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.800818 kubelet[2720]: E0905 06:24:59.800575 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.800818 kubelet[2720]: E0905 06:24:59.800750 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.800818 kubelet[2720]: W0905 06:24:59.800768 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.800818 kubelet[2720]: E0905 06:24:59.800783 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.801045 kubelet[2720]: E0905 06:24:59.801008 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.801045 kubelet[2720]: W0905 06:24:59.801019 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.801045 kubelet[2720]: E0905 06:24:59.801040 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.801312 kubelet[2720]: E0905 06:24:59.801295 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.801312 kubelet[2720]: W0905 06:24:59.801308 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.801374 kubelet[2720]: E0905 06:24:59.801322 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.801509 kubelet[2720]: E0905 06:24:59.801494 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.801509 kubelet[2720]: W0905 06:24:59.801504 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.801578 kubelet[2720]: E0905 06:24:59.801517 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.801734 kubelet[2720]: E0905 06:24:59.801719 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.801734 kubelet[2720]: W0905 06:24:59.801728 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.801794 kubelet[2720]: E0905 06:24:59.801742 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.802044 kubelet[2720]: E0905 06:24:59.802014 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.802044 kubelet[2720]: W0905 06:24:59.802036 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.802114 kubelet[2720]: E0905 06:24:59.802061 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:24:59.802291 kubelet[2720]: E0905 06:24:59.802274 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:24:59.802291 kubelet[2720]: W0905 06:24:59.802285 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:24:59.802349 kubelet[2720]: E0905 06:24:59.802293 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:24:59.808186 containerd[1590]: time="2025-09-05T06:24:59.808152803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:24:59.809024 containerd[1590]: time="2025-09-05T06:24:59.809001199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 5 06:24:59.810077 containerd[1590]: time="2025-09-05T06:24:59.810031122Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:24:59.811870 containerd[1590]: time="2025-09-05T06:24:59.811840293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:24:59.812391 containerd[1590]: time="2025-09-05T06:24:59.812348295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.313295703s" Sep 5 06:24:59.812391 containerd[1590]: time="2025-09-05T06:24:59.812385675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 5 06:24:59.815181 containerd[1590]: time="2025-09-05T06:24:59.815146652Z" level=info msg="CreateContainer within sandbox \"9513bde9e1e96b5eeb02ea7c41adfd61ce63d7b977b5314044aed230466023fa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 06:24:59.827887 containerd[1590]: time="2025-09-05T06:24:59.827834756Z" level=info msg="Container 72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:24:59.831268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070604879.mount: Deactivated successfully. 
Sep 5 06:24:59.838423 containerd[1590]: time="2025-09-05T06:24:59.838374905Z" level=info msg="CreateContainer within sandbox \"9513bde9e1e96b5eeb02ea7c41adfd61ce63d7b977b5314044aed230466023fa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d\"" Sep 5 06:24:59.839565 containerd[1590]: time="2025-09-05T06:24:59.839025710Z" level=info msg="StartContainer for \"72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d\"" Sep 5 06:24:59.840954 containerd[1590]: time="2025-09-05T06:24:59.840737013Z" level=info msg="connecting to shim 72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d" address="unix:///run/containerd/s/6caac75a188e4a370ddb756aa4034888ec0462db31304e4e2e6a9f8dda905125" protocol=ttrpc version=3 Sep 5 06:24:59.866680 systemd[1]: Started cri-containerd-72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d.scope - libcontainer container 72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d. Sep 5 06:24:59.906479 containerd[1590]: time="2025-09-05T06:24:59.906447509Z" level=info msg="StartContainer for \"72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d\" returns successfully" Sep 5 06:24:59.916933 systemd[1]: cri-containerd-72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d.scope: Deactivated successfully. 
Sep 5 06:24:59.918899 containerd[1590]: time="2025-09-05T06:24:59.918856528Z" level=info msg="received exit event container_id:\"72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d\" id:\"72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d\" pid:3453 exited_at:{seconds:1757053499 nanos:918320838}" Sep 5 06:24:59.919094 containerd[1590]: time="2025-09-05T06:24:59.919051453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d\" id:\"72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d\" pid:3453 exited_at:{seconds:1757053499 nanos:918320838}" Sep 5 06:24:59.942499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72a99d61ed05866cfd94c52715d6346b44f9857728ba0421e6c05d3a60f0355d-rootfs.mount: Deactivated successfully. Sep 5 06:25:00.720206 containerd[1590]: time="2025-09-05T06:25:00.720164812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 06:25:01.665358 kubelet[2720]: E0905 06:25:01.665305 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm52w" podUID="d7097e42-b7ac-47fd-82a8-b7a571e5a93b" Sep 5 06:25:02.279983 kubelet[2720]: I0905 06:25:02.279827 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:25:02.280202 kubelet[2720]: E0905 06:25:02.280118 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:25:02.722138 kubelet[2720]: E0905 06:25:02.722090 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 
06:25:03.665079 kubelet[2720]: E0905 06:25:03.665025 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm52w" podUID="d7097e42-b7ac-47fd-82a8-b7a571e5a93b" Sep 5 06:25:05.664859 kubelet[2720]: E0905 06:25:05.664570 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm52w" podUID="d7097e42-b7ac-47fd-82a8-b7a571e5a93b" Sep 5 06:25:07.664907 kubelet[2720]: E0905 06:25:07.664851 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm52w" podUID="d7097e42-b7ac-47fd-82a8-b7a571e5a93b" Sep 5 06:25:07.883516 containerd[1590]: time="2025-09-05T06:25:07.883483971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:07.884197 containerd[1590]: time="2025-09-05T06:25:07.884153307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 5 06:25:07.885237 containerd[1590]: time="2025-09-05T06:25:07.885191447Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:07.887180 containerd[1590]: time="2025-09-05T06:25:07.887140099Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:07.887680 containerd[1590]: time="2025-09-05T06:25:07.887647812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 7.167446042s" Sep 5 06:25:07.887680 containerd[1590]: time="2025-09-05T06:25:07.887674346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 5 06:25:07.890377 containerd[1590]: time="2025-09-05T06:25:07.890316281Z" level=info msg="CreateContainer within sandbox \"9513bde9e1e96b5eeb02ea7c41adfd61ce63d7b977b5314044aed230466023fa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 06:25:07.898763 containerd[1590]: time="2025-09-05T06:25:07.898732313Z" level=info msg="Container 857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:25:07.908402 containerd[1590]: time="2025-09-05T06:25:07.908375573Z" level=info msg="CreateContainer within sandbox \"9513bde9e1e96b5eeb02ea7c41adfd61ce63d7b977b5314044aed230466023fa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a\"" Sep 5 06:25:07.908736 containerd[1590]: time="2025-09-05T06:25:07.908696152Z" level=info msg="StartContainer for \"857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a\"" Sep 5 06:25:07.909973 containerd[1590]: time="2025-09-05T06:25:07.909941700Z" level=info msg="connecting to shim 
857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a" address="unix:///run/containerd/s/6caac75a188e4a370ddb756aa4034888ec0462db31304e4e2e6a9f8dda905125" protocol=ttrpc version=3 Sep 5 06:25:07.939658 systemd[1]: Started cri-containerd-857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a.scope - libcontainer container 857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a. Sep 5 06:25:07.978454 containerd[1590]: time="2025-09-05T06:25:07.978416234Z" level=info msg="StartContainer for \"857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a\" returns successfully" Sep 5 06:25:09.185709 containerd[1590]: time="2025-09-05T06:25:09.185649377Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 06:25:09.188707 systemd[1]: cri-containerd-857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a.scope: Deactivated successfully. Sep 5 06:25:09.189038 systemd[1]: cri-containerd-857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a.scope: Consumed 560ms CPU time, 179.6M memory peak, 2.9M read from disk, 171.3M written to disk. 
Sep 5 06:25:09.189709 containerd[1590]: time="2025-09-05T06:25:09.189676021Z" level=info msg="received exit event container_id:\"857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a\" id:\"857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a\" pid:3515 exited_at:{seconds:1757053509 nanos:189448266}" Sep 5 06:25:09.189824 containerd[1590]: time="2025-09-05T06:25:09.189741204Z" level=info msg="TaskExit event in podsandbox handler container_id:\"857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a\" id:\"857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a\" pid:3515 exited_at:{seconds:1757053509 nanos:189448266}" Sep 5 06:25:09.211380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-857117d1603143751f3754f2f3487f326e66cf58b2eb87a865da95b9aa89112a-rootfs.mount: Deactivated successfully. Sep 5 06:25:09.256075 kubelet[2720]: I0905 06:25:09.256048 2720 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 06:25:09.288151 systemd[1]: Created slice kubepods-burstable-pod371ec781_2b0f_45aa_9541_2c1f320435dc.slice - libcontainer container kubepods-burstable-pod371ec781_2b0f_45aa_9541_2c1f320435dc.slice. Sep 5 06:25:09.300049 systemd[1]: Created slice kubepods-burstable-pod8866500a_5b0b_49df_9d08_3732719c5d62.slice - libcontainer container kubepods-burstable-pod8866500a_5b0b_49df_9d08_3732719c5d62.slice. Sep 5 06:25:09.305927 systemd[1]: Created slice kubepods-besteffort-pod37e7b780_cbc8_47a6_ab71_691c12c694ee.slice - libcontainer container kubepods-besteffort-pod37e7b780_cbc8_47a6_ab71_691c12c694ee.slice. Sep 5 06:25:09.311427 systemd[1]: Created slice kubepods-besteffort-pod8219e107_b937_4e6b_b7b5_7db6885eb957.slice - libcontainer container kubepods-besteffort-pod8219e107_b937_4e6b_b7b5_7db6885eb957.slice. 
Sep 5 06:25:09.316723 systemd[1]: Created slice kubepods-besteffort-pod5f3cc73f_b045_466c_a34c_a851f0fd880e.slice - libcontainer container kubepods-besteffort-pod5f3cc73f_b045_466c_a34c_a851f0fd880e.slice. Sep 5 06:25:09.323099 systemd[1]: Created slice kubepods-besteffort-pod283ad304_6ada_44a3_8506_2d3a232ec8bc.slice - libcontainer container kubepods-besteffort-pod283ad304_6ada_44a3_8506_2d3a232ec8bc.slice. Sep 5 06:25:09.327860 systemd[1]: Created slice kubepods-besteffort-podd313d40b_c6da_43b3_8554_8b5d19920e5a.slice - libcontainer container kubepods-besteffort-podd313d40b_c6da_43b3_8554_8b5d19920e5a.slice. Sep 5 06:25:09.364558 kubelet[2720]: I0905 06:25:09.364501 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8219e107-b937-4e6b-b7b5-7db6885eb957-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-p7g92\" (UID: \"8219e107-b937-4e6b-b7b5-7db6885eb957\") " pod="calico-system/goldmane-54d579b49d-p7g92" Sep 5 06:25:09.364558 kubelet[2720]: I0905 06:25:09.364557 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/283ad304-6ada-44a3-8506-2d3a232ec8bc-calico-apiserver-certs\") pod \"calico-apiserver-9c99b967c-5jd7t\" (UID: \"283ad304-6ada-44a3-8506-2d3a232ec8bc\") " pod="calico-apiserver/calico-apiserver-9c99b967c-5jd7t" Sep 5 06:25:09.364675 kubelet[2720]: I0905 06:25:09.364577 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsfb4\" (UniqueName: \"kubernetes.io/projected/5f3cc73f-b045-466c-a34c-a851f0fd880e-kube-api-access-wsfb4\") pod \"whisker-7b9d69677c-m595m\" (UID: \"5f3cc73f-b045-466c-a34c-a851f0fd880e\") " pod="calico-system/whisker-7b9d69677c-m595m" Sep 5 06:25:09.364675 kubelet[2720]: I0905 06:25:09.364593 2720 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mhg4\" (UniqueName: \"kubernetes.io/projected/8219e107-b937-4e6b-b7b5-7db6885eb957-kube-api-access-2mhg4\") pod \"goldmane-54d579b49d-p7g92\" (UID: \"8219e107-b937-4e6b-b7b5-7db6885eb957\") " pod="calico-system/goldmane-54d579b49d-p7g92" Sep 5 06:25:09.364675 kubelet[2720]: I0905 06:25:09.364610 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/37e7b780-cbc8-47a6-ab71-691c12c694ee-calico-apiserver-certs\") pod \"calico-apiserver-9c99b967c-9pckk\" (UID: \"37e7b780-cbc8-47a6-ab71-691c12c694ee\") " pod="calico-apiserver/calico-apiserver-9c99b967c-9pckk" Sep 5 06:25:09.364675 kubelet[2720]: I0905 06:25:09.364626 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k9v4\" (UniqueName: \"kubernetes.io/projected/8866500a-5b0b-49df-9d08-3732719c5d62-kube-api-access-5k9v4\") pod \"coredns-668d6bf9bc-d58vl\" (UID: \"8866500a-5b0b-49df-9d08-3732719c5d62\") " pod="kube-system/coredns-668d6bf9bc-d58vl" Sep 5 06:25:09.364675 kubelet[2720]: I0905 06:25:09.364642 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8219e107-b937-4e6b-b7b5-7db6885eb957-goldmane-key-pair\") pod \"goldmane-54d579b49d-p7g92\" (UID: \"8219e107-b937-4e6b-b7b5-7db6885eb957\") " pod="calico-system/goldmane-54d579b49d-p7g92" Sep 5 06:25:09.364802 kubelet[2720]: I0905 06:25:09.364656 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rjlc\" (UniqueName: \"kubernetes.io/projected/283ad304-6ada-44a3-8506-2d3a232ec8bc-kube-api-access-9rjlc\") pod \"calico-apiserver-9c99b967c-5jd7t\" (UID: \"283ad304-6ada-44a3-8506-2d3a232ec8bc\") " 
pod="calico-apiserver/calico-apiserver-9c99b967c-5jd7t" Sep 5 06:25:09.364802 kubelet[2720]: I0905 06:25:09.364670 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5f3cc73f-b045-466c-a34c-a851f0fd880e-whisker-backend-key-pair\") pod \"whisker-7b9d69677c-m595m\" (UID: \"5f3cc73f-b045-466c-a34c-a851f0fd880e\") " pod="calico-system/whisker-7b9d69677c-m595m" Sep 5 06:25:09.364802 kubelet[2720]: I0905 06:25:09.364684 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f3cc73f-b045-466c-a34c-a851f0fd880e-whisker-ca-bundle\") pod \"whisker-7b9d69677c-m595m\" (UID: \"5f3cc73f-b045-466c-a34c-a851f0fd880e\") " pod="calico-system/whisker-7b9d69677c-m595m" Sep 5 06:25:09.364802 kubelet[2720]: I0905 06:25:09.364702 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8866500a-5b0b-49df-9d08-3732719c5d62-config-volume\") pod \"coredns-668d6bf9bc-d58vl\" (UID: \"8866500a-5b0b-49df-9d08-3732719c5d62\") " pod="kube-system/coredns-668d6bf9bc-d58vl" Sep 5 06:25:09.364802 kubelet[2720]: I0905 06:25:09.364722 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcxkf\" (UniqueName: \"kubernetes.io/projected/d313d40b-c6da-43b3-8554-8b5d19920e5a-kube-api-access-xcxkf\") pod \"calico-kube-controllers-766c899479-rdpbp\" (UID: \"d313d40b-c6da-43b3-8554-8b5d19920e5a\") " pod="calico-system/calico-kube-controllers-766c899479-rdpbp" Sep 5 06:25:09.364931 kubelet[2720]: I0905 06:25:09.364736 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6zwc\" (UniqueName: 
\"kubernetes.io/projected/37e7b780-cbc8-47a6-ab71-691c12c694ee-kube-api-access-n6zwc\") pod \"calico-apiserver-9c99b967c-9pckk\" (UID: \"37e7b780-cbc8-47a6-ab71-691c12c694ee\") " pod="calico-apiserver/calico-apiserver-9c99b967c-9pckk" Sep 5 06:25:09.364931 kubelet[2720]: I0905 06:25:09.364752 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d313d40b-c6da-43b3-8554-8b5d19920e5a-tigera-ca-bundle\") pod \"calico-kube-controllers-766c899479-rdpbp\" (UID: \"d313d40b-c6da-43b3-8554-8b5d19920e5a\") " pod="calico-system/calico-kube-controllers-766c899479-rdpbp" Sep 5 06:25:09.364931 kubelet[2720]: I0905 06:25:09.364768 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8219e107-b937-4e6b-b7b5-7db6885eb957-config\") pod \"goldmane-54d579b49d-p7g92\" (UID: \"8219e107-b937-4e6b-b7b5-7db6885eb957\") " pod="calico-system/goldmane-54d579b49d-p7g92" Sep 5 06:25:09.364931 kubelet[2720]: I0905 06:25:09.364785 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/371ec781-2b0f-45aa-9541-2c1f320435dc-config-volume\") pod \"coredns-668d6bf9bc-sgvgl\" (UID: \"371ec781-2b0f-45aa-9541-2c1f320435dc\") " pod="kube-system/coredns-668d6bf9bc-sgvgl" Sep 5 06:25:09.364931 kubelet[2720]: I0905 06:25:09.364800 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb8s6\" (UniqueName: \"kubernetes.io/projected/371ec781-2b0f-45aa-9541-2c1f320435dc-kube-api-access-pb8s6\") pod \"coredns-668d6bf9bc-sgvgl\" (UID: \"371ec781-2b0f-45aa-9541-2c1f320435dc\") " pod="kube-system/coredns-668d6bf9bc-sgvgl" Sep 5 06:25:09.595780 kubelet[2720]: E0905 06:25:09.595688 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:25:09.596888 containerd[1590]: time="2025-09-05T06:25:09.596855544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sgvgl,Uid:371ec781-2b0f-45aa-9541-2c1f320435dc,Namespace:kube-system,Attempt:0,}" Sep 5 06:25:09.603988 kubelet[2720]: E0905 06:25:09.603963 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:25:09.605254 containerd[1590]: time="2025-09-05T06:25:09.605213492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d58vl,Uid:8866500a-5b0b-49df-9d08-3732719c5d62,Namespace:kube-system,Attempt:0,}" Sep 5 06:25:09.609477 containerd[1590]: time="2025-09-05T06:25:09.609434312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c99b967c-9pckk,Uid:37e7b780-cbc8-47a6-ab71-691c12c694ee,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:25:09.615438 containerd[1590]: time="2025-09-05T06:25:09.615413599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-p7g92,Uid:8219e107-b937-4e6b-b7b5-7db6885eb957,Namespace:calico-system,Attempt:0,}" Sep 5 06:25:09.621654 containerd[1590]: time="2025-09-05T06:25:09.621606452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9d69677c-m595m,Uid:5f3cc73f-b045-466c-a34c-a851f0fd880e,Namespace:calico-system,Attempt:0,}" Sep 5 06:25:09.626587 containerd[1590]: time="2025-09-05T06:25:09.626560978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c99b967c-5jd7t,Uid:283ad304-6ada-44a3-8506-2d3a232ec8bc,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:25:09.633247 containerd[1590]: time="2025-09-05T06:25:09.633218117Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-766c899479-rdpbp,Uid:d313d40b-c6da-43b3-8554-8b5d19920e5a,Namespace:calico-system,Attempt:0,}" Sep 5 06:25:09.670759 systemd[1]: Created slice kubepods-besteffort-podd7097e42_b7ac_47fd_82a8_b7a571e5a93b.slice - libcontainer container kubepods-besteffort-podd7097e42_b7ac_47fd_82a8_b7a571e5a93b.slice. Sep 5 06:25:09.676522 containerd[1590]: time="2025-09-05T06:25:09.676483082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wm52w,Uid:d7097e42-b7ac-47fd-82a8-b7a571e5a93b,Namespace:calico-system,Attempt:0,}" Sep 5 06:25:09.707036 containerd[1590]: time="2025-09-05T06:25:09.706930609Z" level=error msg="Failed to destroy network for sandbox \"53943a9b5459a8fe198e7abfbdf12b47a5ac39c023efbc124967fae966a1ccf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.721844 containerd[1590]: time="2025-09-05T06:25:09.721712472Z" level=error msg="Failed to destroy network for sandbox \"326679be8ec32a5cfcc89564bf44e96f58b0fa20ca38608d49ecb5d950292057\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.733813 containerd[1590]: time="2025-09-05T06:25:09.733760158Z" level=error msg="Failed to destroy network for sandbox \"300dca6521d88d103cc39533615927cf327164325baa79e329f3535890df9ecd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.745512 containerd[1590]: time="2025-09-05T06:25:09.745463511Z" level=error msg="Failed to destroy network for sandbox \"990ce20ee636334477263bf82660ad8f53d26c16320c040b59145cb201884b86\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.746255 containerd[1590]: time="2025-09-05T06:25:09.746190193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c99b967c-9pckk,Uid:37e7b780-cbc8-47a6-ab71-691c12c694ee,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53943a9b5459a8fe198e7abfbdf12b47a5ac39c023efbc124967fae966a1ccf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.746591 containerd[1590]: time="2025-09-05T06:25:09.746480435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sgvgl,Uid:371ec781-2b0f-45aa-9541-2c1f320435dc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"326679be8ec32a5cfcc89564bf44e96f58b0fa20ca38608d49ecb5d950292057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.747127 containerd[1590]: time="2025-09-05T06:25:09.746200905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9d69677c-m595m,Uid:5f3cc73f-b045-466c-a34c-a851f0fd880e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"300dca6521d88d103cc39533615927cf327164325baa79e329f3535890df9ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.747127 containerd[1590]: time="2025-09-05T06:25:09.747005296Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d58vl,Uid:8866500a-5b0b-49df-9d08-3732719c5d62,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"990ce20ee636334477263bf82660ad8f53d26c16320c040b59145cb201884b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.747819 containerd[1590]: time="2025-09-05T06:25:09.747612554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 06:25:09.753724 kubelet[2720]: E0905 06:25:09.753676 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"990ce20ee636334477263bf82660ad8f53d26c16320c040b59145cb201884b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.753931 kubelet[2720]: E0905 06:25:09.753725 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53943a9b5459a8fe198e7abfbdf12b47a5ac39c023efbc124967fae966a1ccf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.753931 kubelet[2720]: E0905 06:25:09.753755 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"326679be8ec32a5cfcc89564bf44e96f58b0fa20ca38608d49ecb5d950292057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.753931 kubelet[2720]: E0905 
06:25:09.753774 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"326679be8ec32a5cfcc89564bf44e96f58b0fa20ca38608d49ecb5d950292057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sgvgl" Sep 5 06:25:09.753931 kubelet[2720]: E0905 06:25:09.753777 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53943a9b5459a8fe198e7abfbdf12b47a5ac39c023efbc124967fae966a1ccf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c99b967c-9pckk" Sep 5 06:25:09.754110 kubelet[2720]: E0905 06:25:09.753792 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"326679be8ec32a5cfcc89564bf44e96f58b0fa20ca38608d49ecb5d950292057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sgvgl" Sep 5 06:25:09.754110 kubelet[2720]: E0905 06:25:09.753800 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53943a9b5459a8fe198e7abfbdf12b47a5ac39c023efbc124967fae966a1ccf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c99b967c-9pckk" Sep 5 06:25:09.754110 kubelet[2720]: E0905 
06:25:09.753809 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"300dca6521d88d103cc39533615927cf327164325baa79e329f3535890df9ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.754209 kubelet[2720]: E0905 06:25:09.753829 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sgvgl_kube-system(371ec781-2b0f-45aa-9541-2c1f320435dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sgvgl_kube-system(371ec781-2b0f-45aa-9541-2c1f320435dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"326679be8ec32a5cfcc89564bf44e96f58b0fa20ca38608d49ecb5d950292057\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sgvgl" podUID="371ec781-2b0f-45aa-9541-2c1f320435dc" Sep 5 06:25:09.754209 kubelet[2720]: E0905 06:25:09.753838 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9c99b967c-9pckk_calico-apiserver(37e7b780-cbc8-47a6-ab71-691c12c694ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9c99b967c-9pckk_calico-apiserver(37e7b780-cbc8-47a6-ab71-691c12c694ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53943a9b5459a8fe198e7abfbdf12b47a5ac39c023efbc124967fae966a1ccf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9c99b967c-9pckk" 
podUID="37e7b780-cbc8-47a6-ab71-691c12c694ee" Sep 5 06:25:09.754209 kubelet[2720]: E0905 06:25:09.753738 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"990ce20ee636334477263bf82660ad8f53d26c16320c040b59145cb201884b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d58vl" Sep 5 06:25:09.754382 kubelet[2720]: E0905 06:25:09.753864 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"300dca6521d88d103cc39533615927cf327164325baa79e329f3535890df9ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b9d69677c-m595m" Sep 5 06:25:09.754382 kubelet[2720]: E0905 06:25:09.753872 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"990ce20ee636334477263bf82660ad8f53d26c16320c040b59145cb201884b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d58vl" Sep 5 06:25:09.754382 kubelet[2720]: E0905 06:25:09.753883 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"300dca6521d88d103cc39533615927cf327164325baa79e329f3535890df9ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7b9d69677c-m595m" Sep 5 06:25:09.754501 kubelet[2720]: E0905 06:25:09.753896 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d58vl_kube-system(8866500a-5b0b-49df-9d08-3732719c5d62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d58vl_kube-system(8866500a-5b0b-49df-9d08-3732719c5d62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"990ce20ee636334477263bf82660ad8f53d26c16320c040b59145cb201884b86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d58vl" podUID="8866500a-5b0b-49df-9d08-3732719c5d62" Sep 5 06:25:09.754501 kubelet[2720]: E0905 06:25:09.753922 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b9d69677c-m595m_calico-system(5f3cc73f-b045-466c-a34c-a851f0fd880e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b9d69677c-m595m_calico-system(5f3cc73f-b045-466c-a34c-a851f0fd880e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"300dca6521d88d103cc39533615927cf327164325baa79e329f3535890df9ecd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b9d69677c-m595m" podUID="5f3cc73f-b045-466c-a34c-a851f0fd880e" Sep 5 06:25:09.755835 containerd[1590]: time="2025-09-05T06:25:09.755797040Z" level=error msg="Failed to destroy network for sandbox \"0b65e1898a12b52a0712e67ecdab197c829ba10c5d75a3e4a0786c2180c059e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 5 06:25:09.756947 containerd[1590]: time="2025-09-05T06:25:09.756917485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c99b967c-5jd7t,Uid:283ad304-6ada-44a3-8506-2d3a232ec8bc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b65e1898a12b52a0712e67ecdab197c829ba10c5d75a3e4a0786c2180c059e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.757104 kubelet[2720]: E0905 06:25:09.757079 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b65e1898a12b52a0712e67ecdab197c829ba10c5d75a3e4a0786c2180c059e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.757140 kubelet[2720]: E0905 06:25:09.757110 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b65e1898a12b52a0712e67ecdab197c829ba10c5d75a3e4a0786c2180c059e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c99b967c-5jd7t" Sep 5 06:25:09.757140 kubelet[2720]: E0905 06:25:09.757126 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b65e1898a12b52a0712e67ecdab197c829ba10c5d75a3e4a0786c2180c059e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c99b967c-5jd7t" Sep 5 06:25:09.757189 kubelet[2720]: E0905 06:25:09.757154 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9c99b967c-5jd7t_calico-apiserver(283ad304-6ada-44a3-8506-2d3a232ec8bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9c99b967c-5jd7t_calico-apiserver(283ad304-6ada-44a3-8506-2d3a232ec8bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b65e1898a12b52a0712e67ecdab197c829ba10c5d75a3e4a0786c2180c059e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9c99b967c-5jd7t" podUID="283ad304-6ada-44a3-8506-2d3a232ec8bc" Sep 5 06:25:09.763634 containerd[1590]: time="2025-09-05T06:25:09.763580517Z" level=error msg="Failed to destroy network for sandbox \"7601c6404e4e5a8fda809cc073440e5a9c613afe33c61634d559fcb5aadc16ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.767596 containerd[1590]: time="2025-09-05T06:25:09.767542759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-p7g92,Uid:8219e107-b937-4e6b-b7b5-7db6885eb957,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7601c6404e4e5a8fda809cc073440e5a9c613afe33c61634d559fcb5aadc16ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.768136 kubelet[2720]: E0905 06:25:09.768089 2720 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7601c6404e4e5a8fda809cc073440e5a9c613afe33c61634d559fcb5aadc16ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.768709 kubelet[2720]: E0905 06:25:09.768307 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7601c6404e4e5a8fda809cc073440e5a9c613afe33c61634d559fcb5aadc16ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-p7g92" Sep 5 06:25:09.768709 kubelet[2720]: E0905 06:25:09.768332 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7601c6404e4e5a8fda809cc073440e5a9c613afe33c61634d559fcb5aadc16ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-p7g92" Sep 5 06:25:09.768709 kubelet[2720]: E0905 06:25:09.768372 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-p7g92_calico-system(8219e107-b937-4e6b-b7b5-7db6885eb957)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-p7g92_calico-system(8219e107-b937-4e6b-b7b5-7db6885eb957)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7601c6404e4e5a8fda809cc073440e5a9c613afe33c61634d559fcb5aadc16ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-p7g92" podUID="8219e107-b937-4e6b-b7b5-7db6885eb957" Sep 5 06:25:09.778420 containerd[1590]: time="2025-09-05T06:25:09.778387531Z" level=error msg="Failed to destroy network for sandbox \"0427cc4a84b95c879c0141b6581b522bdd4331f268c4febc607c4e25a3a2b3a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.779625 containerd[1590]: time="2025-09-05T06:25:09.779600885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766c899479-rdpbp,Uid:d313d40b-c6da-43b3-8554-8b5d19920e5a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0427cc4a84b95c879c0141b6581b522bdd4331f268c4febc607c4e25a3a2b3a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.779887 kubelet[2720]: E0905 06:25:09.779856 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0427cc4a84b95c879c0141b6581b522bdd4331f268c4febc607c4e25a3a2b3a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.779935 kubelet[2720]: E0905 06:25:09.779904 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0427cc4a84b95c879c0141b6581b522bdd4331f268c4febc607c4e25a3a2b3a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-766c899479-rdpbp" Sep 5 06:25:09.779935 kubelet[2720]: E0905 06:25:09.779927 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0427cc4a84b95c879c0141b6581b522bdd4331f268c4febc607c4e25a3a2b3a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766c899479-rdpbp" Sep 5 06:25:09.780018 kubelet[2720]: E0905 06:25:09.779969 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-766c899479-rdpbp_calico-system(d313d40b-c6da-43b3-8554-8b5d19920e5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-766c899479-rdpbp_calico-system(d313d40b-c6da-43b3-8554-8b5d19920e5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0427cc4a84b95c879c0141b6581b522bdd4331f268c4febc607c4e25a3a2b3a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-766c899479-rdpbp" podUID="d313d40b-c6da-43b3-8554-8b5d19920e5a" Sep 5 06:25:09.786427 containerd[1590]: time="2025-09-05T06:25:09.786391077Z" level=error msg="Failed to destroy network for sandbox \"0219603df217bfdbb4ff415b5d55a5bb67045f3891ffcac4bcd3a997efb3366d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.787603 containerd[1590]: time="2025-09-05T06:25:09.787546494Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-wm52w,Uid:d7097e42-b7ac-47fd-82a8-b7a571e5a93b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0219603df217bfdbb4ff415b5d55a5bb67045f3891ffcac4bcd3a997efb3366d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.787787 kubelet[2720]: E0905 06:25:09.787711 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0219603df217bfdbb4ff415b5d55a5bb67045f3891ffcac4bcd3a997efb3366d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:25:09.787787 kubelet[2720]: E0905 06:25:09.787751 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0219603df217bfdbb4ff415b5d55a5bb67045f3891ffcac4bcd3a997efb3366d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wm52w" Sep 5 06:25:09.787787 kubelet[2720]: E0905 06:25:09.787768 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0219603df217bfdbb4ff415b5d55a5bb67045f3891ffcac4bcd3a997efb3366d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wm52w" Sep 5 06:25:09.787875 kubelet[2720]: E0905 06:25:09.787798 2720 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wm52w_calico-system(d7097e42-b7ac-47fd-82a8-b7a571e5a93b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wm52w_calico-system(d7097e42-b7ac-47fd-82a8-b7a571e5a93b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0219603df217bfdbb4ff415b5d55a5bb67045f3891ffcac4bcd3a997efb3366d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wm52w" podUID="d7097e42-b7ac-47fd-82a8-b7a571e5a93b" Sep 5 06:25:15.874843 systemd[1]: Started sshd@7-10.0.0.4:22-10.0.0.1:40988.service - OpenSSH per-connection server daemon (10.0.0.1:40988). Sep 5 06:25:15.934349 sshd[3834]: Accepted publickey for core from 10.0.0.1 port 40988 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M Sep 5 06:25:15.935618 sshd-session[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:25:15.940844 systemd-logind[1573]: New session 8 of user core. Sep 5 06:25:15.948648 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 06:25:16.084401 sshd[3839]: Connection closed by 10.0.0.1 port 40988 Sep 5 06:25:16.085698 sshd-session[3834]: pam_unix(sshd:session): session closed for user core Sep 5 06:25:16.090035 systemd[1]: sshd@7-10.0.0.4:22-10.0.0.1:40988.service: Deactivated successfully. Sep 5 06:25:16.092192 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 06:25:16.093153 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. Sep 5 06:25:16.095568 systemd-logind[1573]: Removed session 8. Sep 5 06:25:17.367457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397225810.mount: Deactivated successfully. 
Sep 5 06:25:18.404128 containerd[1590]: time="2025-09-05T06:25:18.404078578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:18.404843 containerd[1590]: time="2025-09-05T06:25:18.404812273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 5 06:25:18.406187 containerd[1590]: time="2025-09-05T06:25:18.406123872Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:18.407977 containerd[1590]: time="2025-09-05T06:25:18.407929336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:18.408398 containerd[1590]: time="2025-09-05T06:25:18.408374255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.660730738s" Sep 5 06:25:18.408440 containerd[1590]: time="2025-09-05T06:25:18.408401669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 5 06:25:18.422338 containerd[1590]: time="2025-09-05T06:25:18.422307604Z" level=info msg="CreateContainer within sandbox \"9513bde9e1e96b5eeb02ea7c41adfd61ce63d7b977b5314044aed230466023fa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 06:25:18.430523 containerd[1590]: time="2025-09-05T06:25:18.430474729Z" level=info msg="Container 
c9d6a03aeb0b16587c2864219684264a9761dd8e7731c0117481d6048db33fe9: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:25:18.450902 containerd[1590]: time="2025-09-05T06:25:18.450863215Z" level=info msg="CreateContainer within sandbox \"9513bde9e1e96b5eeb02ea7c41adfd61ce63d7b977b5314044aed230466023fa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c9d6a03aeb0b16587c2864219684264a9761dd8e7731c0117481d6048db33fe9\"" Sep 5 06:25:18.451362 containerd[1590]: time="2025-09-05T06:25:18.451319767Z" level=info msg="StartContainer for \"c9d6a03aeb0b16587c2864219684264a9761dd8e7731c0117481d6048db33fe9\"" Sep 5 06:25:18.452756 containerd[1590]: time="2025-09-05T06:25:18.452720303Z" level=info msg="connecting to shim c9d6a03aeb0b16587c2864219684264a9761dd8e7731c0117481d6048db33fe9" address="unix:///run/containerd/s/6caac75a188e4a370ddb756aa4034888ec0462db31304e4e2e6a9f8dda905125" protocol=ttrpc version=3 Sep 5 06:25:18.473675 systemd[1]: Started cri-containerd-c9d6a03aeb0b16587c2864219684264a9761dd8e7731c0117481d6048db33fe9.scope - libcontainer container c9d6a03aeb0b16587c2864219684264a9761dd8e7731c0117481d6048db33fe9. Sep 5 06:25:18.516691 containerd[1590]: time="2025-09-05T06:25:18.516632977Z" level=info msg="StartContainer for \"c9d6a03aeb0b16587c2864219684264a9761dd8e7731c0117481d6048db33fe9\" returns successfully" Sep 5 06:25:18.600466 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 06:25:18.601766 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 5 06:25:18.724266 kubelet[2720]: I0905 06:25:18.724130 2720 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5f3cc73f-b045-466c-a34c-a851f0fd880e-whisker-backend-key-pair\") pod \"5f3cc73f-b045-466c-a34c-a851f0fd880e\" (UID: \"5f3cc73f-b045-466c-a34c-a851f0fd880e\") " Sep 5 06:25:18.724762 kubelet[2720]: I0905 06:25:18.724582 2720 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f3cc73f-b045-466c-a34c-a851f0fd880e-whisker-ca-bundle\") pod \"5f3cc73f-b045-466c-a34c-a851f0fd880e\" (UID: \"5f3cc73f-b045-466c-a34c-a851f0fd880e\") " Sep 5 06:25:18.724762 kubelet[2720]: I0905 06:25:18.724612 2720 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsfb4\" (UniqueName: \"kubernetes.io/projected/5f3cc73f-b045-466c-a34c-a851f0fd880e-kube-api-access-wsfb4\") pod \"5f3cc73f-b045-466c-a34c-a851f0fd880e\" (UID: \"5f3cc73f-b045-466c-a34c-a851f0fd880e\") " Sep 5 06:25:18.725168 kubelet[2720]: I0905 06:25:18.725137 2720 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f3cc73f-b045-466c-a34c-a851f0fd880e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5f3cc73f-b045-466c-a34c-a851f0fd880e" (UID: "5f3cc73f-b045-466c-a34c-a851f0fd880e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 06:25:18.727825 kubelet[2720]: I0905 06:25:18.727798 2720 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f3cc73f-b045-466c-a34c-a851f0fd880e-kube-api-access-wsfb4" (OuterVolumeSpecName: "kube-api-access-wsfb4") pod "5f3cc73f-b045-466c-a34c-a851f0fd880e" (UID: "5f3cc73f-b045-466c-a34c-a851f0fd880e"). InnerVolumeSpecName "kube-api-access-wsfb4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 06:25:18.728120 kubelet[2720]: I0905 06:25:18.728097 2720 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f3cc73f-b045-466c-a34c-a851f0fd880e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5f3cc73f-b045-466c-a34c-a851f0fd880e" (UID: "5f3cc73f-b045-466c-a34c-a851f0fd880e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 06:25:18.767016 systemd[1]: Removed slice kubepods-besteffort-pod5f3cc73f_b045_466c_a34c_a851f0fd880e.slice - libcontainer container kubepods-besteffort-pod5f3cc73f_b045_466c_a34c_a851f0fd880e.slice. Sep 5 06:25:18.785522 kubelet[2720]: I0905 06:25:18.785271 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6jnmc" podStartSLOduration=1.02438478 podStartE2EDuration="23.785245762s" podCreationTimestamp="2025-09-05 06:24:55 +0000 UTC" firstStartedPulling="2025-09-05 06:24:55.648841164 +0000 UTC m=+17.063834479" lastFinishedPulling="2025-09-05 06:25:18.409702146 +0000 UTC m=+39.824695461" observedRunningTime="2025-09-05 06:25:18.784756885 +0000 UTC m=+40.199750190" watchObservedRunningTime="2025-09-05 06:25:18.785245762 +0000 UTC m=+40.200239077" Sep 5 06:25:18.826336 kubelet[2720]: I0905 06:25:18.825864 2720 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5f3cc73f-b045-466c-a34c-a851f0fd880e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 06:25:18.826336 kubelet[2720]: I0905 06:25:18.825893 2720 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f3cc73f-b045-466c-a34c-a851f0fd880e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 06:25:18.826336 kubelet[2720]: I0905 06:25:18.825904 2720 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-wsfb4\" (UniqueName: \"kubernetes.io/projected/5f3cc73f-b045-466c-a34c-a851f0fd880e-kube-api-access-wsfb4\") on node \"localhost\" DevicePath \"\"" Sep 5 06:25:18.841633 systemd[1]: Created slice kubepods-besteffort-podbf8ec0f7_1fcb_4413_9a58_35aa8996c918.slice - libcontainer container kubepods-besteffort-podbf8ec0f7_1fcb_4413_9a58_35aa8996c918.slice. Sep 5 06:25:18.915497 containerd[1590]: time="2025-09-05T06:25:18.915441571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9d6a03aeb0b16587c2864219684264a9761dd8e7731c0117481d6048db33fe9\" id:\"af8719f953e9523c89bd4f77998ccbb9b55444135d4804c0306173fc044b0e83\" pid:3924 exit_status:1 exited_at:{seconds:1757053518 nanos:914445112}" Sep 5 06:25:18.926430 kubelet[2720]: I0905 06:25:18.926388 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhclc\" (UniqueName: \"kubernetes.io/projected/bf8ec0f7-1fcb-4413-9a58-35aa8996c918-kube-api-access-hhclc\") pod \"whisker-6dd68d8cdd-ggc7m\" (UID: \"bf8ec0f7-1fcb-4413-9a58-35aa8996c918\") " pod="calico-system/whisker-6dd68d8cdd-ggc7m" Sep 5 06:25:18.926430 kubelet[2720]: I0905 06:25:18.926437 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8ec0f7-1fcb-4413-9a58-35aa8996c918-whisker-ca-bundle\") pod \"whisker-6dd68d8cdd-ggc7m\" (UID: \"bf8ec0f7-1fcb-4413-9a58-35aa8996c918\") " pod="calico-system/whisker-6dd68d8cdd-ggc7m" Sep 5 06:25:18.926608 kubelet[2720]: I0905 06:25:18.926454 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bf8ec0f7-1fcb-4413-9a58-35aa8996c918-whisker-backend-key-pair\") pod \"whisker-6dd68d8cdd-ggc7m\" (UID: \"bf8ec0f7-1fcb-4413-9a58-35aa8996c918\") " pod="calico-system/whisker-6dd68d8cdd-ggc7m" Sep 5 
06:25:19.145994 containerd[1590]: time="2025-09-05T06:25:19.145891926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd68d8cdd-ggc7m,Uid:bf8ec0f7-1fcb-4413-9a58-35aa8996c918,Namespace:calico-system,Attempt:0,}" Sep 5 06:25:19.416742 systemd[1]: var-lib-kubelet-pods-5f3cc73f\x2db045\x2d466c\x2da34c\x2da851f0fd880e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwsfb4.mount: Deactivated successfully. Sep 5 06:25:19.416862 systemd[1]: var-lib-kubelet-pods-5f3cc73f\x2db045\x2d466c\x2da34c\x2da851f0fd880e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 06:25:19.843280 systemd-networkd[1499]: cali82dcd443abe: Link UP Sep 5 06:25:19.843725 systemd-networkd[1499]: cali82dcd443abe: Gained carrier Sep 5 06:25:19.846176 containerd[1590]: time="2025-09-05T06:25:19.846133573Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9d6a03aeb0b16587c2864219684264a9761dd8e7731c0117481d6048db33fe9\" id:\"c06a6349d73969e0b59ac9e6d70548bc52522fff039308230c850dd1876bbaaf\" pid:3980 exit_status:1 exited_at:{seconds:1757053519 nanos:845801542}" Sep 5 06:25:19.857227 containerd[1590]: 2025-09-05 06:25:19.727 [INFO][3947] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 06:25:19.857227 containerd[1590]: 2025-09-05 06:25:19.742 [INFO][3947] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0 whisker-6dd68d8cdd- calico-system bf8ec0f7-1fcb-4413-9a58-35aa8996c918 929 0 2025-09-05 06:25:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6dd68d8cdd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6dd68d8cdd-ggc7m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali82dcd443abe [] [] }} 
ContainerID="6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" Namespace="calico-system" Pod="whisker-6dd68d8cdd-ggc7m" WorkloadEndpoint="localhost-k8s-whisker--6dd68d8cdd--ggc7m-" Sep 5 06:25:19.857227 containerd[1590]: 2025-09-05 06:25:19.742 [INFO][3947] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" Namespace="calico-system" Pod="whisker-6dd68d8cdd-ggc7m" WorkloadEndpoint="localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0" Sep 5 06:25:19.857227 containerd[1590]: 2025-09-05 06:25:19.801 [INFO][3961] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" HandleID="k8s-pod-network.6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" Workload="localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0" Sep 5 06:25:19.857502 containerd[1590]: 2025-09-05 06:25:19.801 [INFO][3961] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" HandleID="k8s-pod-network.6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" Workload="localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035e930), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6dd68d8cdd-ggc7m", "timestamp":"2025-09-05 06:25:19.800987754 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:25:19.857502 containerd[1590]: 2025-09-05 06:25:19.801 [INFO][3961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:25:19.857502 containerd[1590]: 2025-09-05 06:25:19.801 [INFO][3961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:25:19.857502 containerd[1590]: 2025-09-05 06:25:19.801 [INFO][3961] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:25:19.857502 containerd[1590]: 2025-09-05 06:25:19.808 [INFO][3961] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" host="localhost" Sep 5 06:25:19.857502 containerd[1590]: 2025-09-05 06:25:19.816 [INFO][3961] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:25:19.857502 containerd[1590]: 2025-09-05 06:25:19.819 [INFO][3961] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:25:19.857502 containerd[1590]: 2025-09-05 06:25:19.821 [INFO][3961] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:25:19.857502 containerd[1590]: 2025-09-05 06:25:19.823 [INFO][3961] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:25:19.857502 containerd[1590]: 2025-09-05 06:25:19.823 [INFO][3961] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" host="localhost" Sep 5 06:25:19.857805 containerd[1590]: 2025-09-05 06:25:19.824 [INFO][3961] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c Sep 5 06:25:19.857805 containerd[1590]: 2025-09-05 06:25:19.827 [INFO][3961] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" host="localhost" Sep 5 06:25:19.857805 containerd[1590]: 2025-09-05 06:25:19.831 [INFO][3961] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" host="localhost" Sep 5 06:25:19.857805 containerd[1590]: 2025-09-05 06:25:19.831 [INFO][3961] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" host="localhost" Sep 5 06:25:19.857805 containerd[1590]: 2025-09-05 06:25:19.831 [INFO][3961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:25:19.857805 containerd[1590]: 2025-09-05 06:25:19.831 [INFO][3961] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" HandleID="k8s-pod-network.6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" Workload="localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0" Sep 5 06:25:19.857966 containerd[1590]: 2025-09-05 06:25:19.835 [INFO][3947] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" Namespace="calico-system" Pod="whisker-6dd68d8cdd-ggc7m" WorkloadEndpoint="localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0", GenerateName:"whisker-6dd68d8cdd-", Namespace:"calico-system", SelfLink:"", UID:"bf8ec0f7-1fcb-4413-9a58-35aa8996c918", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 25, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dd68d8cdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6dd68d8cdd-ggc7m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali82dcd443abe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:25:19.857966 containerd[1590]: 2025-09-05 06:25:19.835 [INFO][3947] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" Namespace="calico-system" Pod="whisker-6dd68d8cdd-ggc7m" WorkloadEndpoint="localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0" Sep 5 06:25:19.858060 containerd[1590]: 2025-09-05 06:25:19.835 [INFO][3947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82dcd443abe ContainerID="6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" Namespace="calico-system" Pod="whisker-6dd68d8cdd-ggc7m" WorkloadEndpoint="localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0" Sep 5 06:25:19.858060 containerd[1590]: 2025-09-05 06:25:19.842 [INFO][3947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" Namespace="calico-system" Pod="whisker-6dd68d8cdd-ggc7m" WorkloadEndpoint="localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0" Sep 5 06:25:19.858131 containerd[1590]: 2025-09-05 06:25:19.843 [INFO][3947] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" Namespace="calico-system" Pod="whisker-6dd68d8cdd-ggc7m" 
WorkloadEndpoint="localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0", GenerateName:"whisker-6dd68d8cdd-", Namespace:"calico-system", SelfLink:"", UID:"bf8ec0f7-1fcb-4413-9a58-35aa8996c918", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 25, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dd68d8cdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c", Pod:"whisker-6dd68d8cdd-ggc7m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali82dcd443abe", MAC:"da:7a:b2:25:c8:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:25:19.858201 containerd[1590]: 2025-09-05 06:25:19.854 [INFO][3947] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" Namespace="calico-system" Pod="whisker-6dd68d8cdd-ggc7m" WorkloadEndpoint="localhost-k8s-whisker--6dd68d8cdd--ggc7m-eth0" Sep 5 06:25:20.036503 containerd[1590]: time="2025-09-05T06:25:20.036431413Z" level=info msg="connecting to shim 
6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c" address="unix:///run/containerd/s/4f74a33369a9026c32a0216e4d827afddec46f451aa51194e534fffceadd3c20" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:25:20.093696 systemd[1]: Started cri-containerd-6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c.scope - libcontainer container 6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c. Sep 5 06:25:20.109376 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:25:20.160235 containerd[1590]: time="2025-09-05T06:25:20.160185047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd68d8cdd-ggc7m,Uid:bf8ec0f7-1fcb-4413-9a58-35aa8996c918,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c\"" Sep 5 06:25:20.161739 containerd[1590]: time="2025-09-05T06:25:20.161707797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 06:25:20.380663 systemd-networkd[1499]: vxlan.calico: Link UP Sep 5 06:25:20.380675 systemd-networkd[1499]: vxlan.calico: Gained carrier Sep 5 06:25:20.680027 kubelet[2720]: I0905 06:25:20.679984 2720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f3cc73f-b045-466c-a34c-a851f0fd880e" path="/var/lib/kubelet/pods/5f3cc73f-b045-466c-a34c-a851f0fd880e/volumes" Sep 5 06:25:20.848612 containerd[1590]: time="2025-09-05T06:25:20.848554433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9d6a03aeb0b16587c2864219684264a9761dd8e7731c0117481d6048db33fe9\" id:\"bc06c77a76a4b3baed8b877d80323d0bdd73f5a8f1e838d5a4cf8ba90ab4f2b5\" pid:4259 exit_status:1 exited_at:{seconds:1757053520 nanos:848264727}" Sep 5 06:25:21.110334 systemd[1]: Started sshd@8-10.0.0.4:22-10.0.0.1:58380.service - OpenSSH per-connection server daemon (10.0.0.1:58380). 
Sep 5 06:25:21.156641 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 58380 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:25:21.158526 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:25:21.162908 systemd-logind[1573]: New session 9 of user core.
Sep 5 06:25:21.173674 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 5 06:25:21.300872 sshd[4277]: Connection closed by 10.0.0.1 port 58380
Sep 5 06:25:21.301221 sshd-session[4274]: pam_unix(sshd:session): session closed for user core
Sep 5 06:25:21.305686 systemd[1]: sshd@8-10.0.0.4:22-10.0.0.1:58380.service: Deactivated successfully.
Sep 5 06:25:21.307708 systemd[1]: session-9.scope: Deactivated successfully.
Sep 5 06:25:21.308411 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit.
Sep 5 06:25:21.309449 systemd-logind[1573]: Removed session 9.
Sep 5 06:25:21.437669 systemd-networkd[1499]: vxlan.calico: Gained IPv6LL
Sep 5 06:25:21.693708 systemd-networkd[1499]: cali82dcd443abe: Gained IPv6LL
Sep 5 06:25:21.744911 containerd[1590]: time="2025-09-05T06:25:21.744864488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:25:21.745585 containerd[1590]: time="2025-09-05T06:25:21.745549981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 5 06:25:21.746804 containerd[1590]: time="2025-09-05T06:25:21.746749183Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:25:21.748883 containerd[1590]: time="2025-09-05T06:25:21.748846630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:25:21.749388 containerd[1590]: time="2025-09-05T06:25:21.749358748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.587612815s"
Sep 5 06:25:21.749421 containerd[1590]: time="2025-09-05T06:25:21.749391592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 5 06:25:21.751031 containerd[1590]: time="2025-09-05T06:25:21.751004087Z" level=info msg="CreateContainer within sandbox \"6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 5 06:25:21.758547 containerd[1590]: time="2025-09-05T06:25:21.758121277Z" level=info msg="Container ed8e0a05ed5255fc9001dda0602e0d3b6d71384ecd3eb5aa37961e2de3db3b9b: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:25:21.767751 containerd[1590]: time="2025-09-05T06:25:21.767710610Z" level=info msg="CreateContainer within sandbox \"6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ed8e0a05ed5255fc9001dda0602e0d3b6d71384ecd3eb5aa37961e2de3db3b9b\""
Sep 5 06:25:21.768680 containerd[1590]: time="2025-09-05T06:25:21.768653103Z" level=info msg="StartContainer for \"ed8e0a05ed5255fc9001dda0602e0d3b6d71384ecd3eb5aa37961e2de3db3b9b\""
Sep 5 06:25:21.769946 containerd[1590]: time="2025-09-05T06:25:21.769526518Z" level=info msg="connecting to shim ed8e0a05ed5255fc9001dda0602e0d3b6d71384ecd3eb5aa37961e2de3db3b9b" address="unix:///run/containerd/s/4f74a33369a9026c32a0216e4d827afddec46f451aa51194e534fffceadd3c20" protocol=ttrpc version=3
Sep 5 06:25:21.794680 systemd[1]: Started cri-containerd-ed8e0a05ed5255fc9001dda0602e0d3b6d71384ecd3eb5aa37961e2de3db3b9b.scope - libcontainer container ed8e0a05ed5255fc9001dda0602e0d3b6d71384ecd3eb5aa37961e2de3db3b9b.
Sep 5 06:25:21.839805 containerd[1590]: time="2025-09-05T06:25:21.839756845Z" level=info msg="StartContainer for \"ed8e0a05ed5255fc9001dda0602e0d3b6d71384ecd3eb5aa37961e2de3db3b9b\" returns successfully"
Sep 5 06:25:21.841303 containerd[1590]: time="2025-09-05T06:25:21.841254671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 5 06:25:22.664945 kubelet[2720]: E0905 06:25:22.664892 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:25:22.666113 containerd[1590]: time="2025-09-05T06:25:22.666070105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sgvgl,Uid:371ec781-2b0f-45aa-9541-2c1f320435dc,Namespace:kube-system,Attempt:0,}"
Sep 5 06:25:22.666700 containerd[1590]: time="2025-09-05T06:25:22.666082029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766c899479-rdpbp,Uid:d313d40b-c6da-43b3-8554-8b5d19920e5a,Namespace:calico-system,Attempt:0,}"
Sep 5 06:25:22.798346 systemd-networkd[1499]: cali0c6ea307da5: Link UP
Sep 5 06:25:22.798553 systemd-networkd[1499]: cali0c6ea307da5: Gained carrier
Sep 5 06:25:22.807722 containerd[1590]: 2025-09-05 06:25:22.706 [INFO][4336] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0 calico-kube-controllers-766c899479- calico-system d313d40b-c6da-43b3-8554-8b5d19920e5a 823 0 2025-09-05 06:24:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:766c899479 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-766c899479-rdpbp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0c6ea307da5 [] [] }} ContainerID="96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" Namespace="calico-system" Pod="calico-kube-controllers-766c899479-rdpbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c899479--rdpbp-"
Sep 5 06:25:22.807722 containerd[1590]: 2025-09-05 06:25:22.706 [INFO][4336] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" Namespace="calico-system" Pod="calico-kube-controllers-766c899479-rdpbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0"
Sep 5 06:25:22.807722 containerd[1590]: 2025-09-05 06:25:22.736 [INFO][4359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" HandleID="k8s-pod-network.96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" Workload="localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0"
Sep 5 06:25:22.807904 containerd[1590]: 2025-09-05 06:25:22.736 [INFO][4359] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" HandleID="k8s-pod-network.96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" Workload="localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000591df0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-766c899479-rdpbp", "timestamp":"2025-09-05 06:25:22.736160746 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 06:25:22.807904 containerd[1590]: 2025-09-05 06:25:22.736 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 06:25:22.807904 containerd[1590]: 2025-09-05 06:25:22.736 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 06:25:22.807904 containerd[1590]: 2025-09-05 06:25:22.736 [INFO][4359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 06:25:22.807904 containerd[1590]: 2025-09-05 06:25:22.742 [INFO][4359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" host="localhost"
Sep 5 06:25:22.807904 containerd[1590]: 2025-09-05 06:25:22.745 [INFO][4359] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 06:25:22.807904 containerd[1590]: 2025-09-05 06:25:22.750 [INFO][4359] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 06:25:22.807904 containerd[1590]: 2025-09-05 06:25:22.752 [INFO][4359] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 06:25:22.807904 containerd[1590]: 2025-09-05 06:25:22.753 [INFO][4359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 06:25:22.807904 containerd[1590]: 2025-09-05 06:25:22.754 [INFO][4359] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" host="localhost"
Sep 5 06:25:22.808127 containerd[1590]: 2025-09-05 06:25:22.769 [INFO][4359] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020
Sep 5 06:25:22.808127 containerd[1590]: 2025-09-05 06:25:22.785 [INFO][4359] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" host="localhost"
Sep 5 06:25:22.808127 containerd[1590]: 2025-09-05 06:25:22.792 [INFO][4359] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" host="localhost"
Sep 5 06:25:22.808127 containerd[1590]: 2025-09-05 06:25:22.792 [INFO][4359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" host="localhost"
Sep 5 06:25:22.808127 containerd[1590]: 2025-09-05 06:25:22.792 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 06:25:22.808127 containerd[1590]: 2025-09-05 06:25:22.792 [INFO][4359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" HandleID="k8s-pod-network.96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" Workload="localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0"
Sep 5 06:25:22.808473 containerd[1590]: 2025-09-05 06:25:22.795 [INFO][4336] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" Namespace="calico-system" Pod="calico-kube-controllers-766c899479-rdpbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0", GenerateName:"calico-kube-controllers-766c899479-", Namespace:"calico-system", SelfLink:"", UID:"d313d40b-c6da-43b3-8554-8b5d19920e5a", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"766c899479", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-766c899479-rdpbp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c6ea307da5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 06:25:22.808543 containerd[1590]: 2025-09-05 06:25:22.795 [INFO][4336] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" Namespace="calico-system" Pod="calico-kube-controllers-766c899479-rdpbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0"
Sep 5 06:25:22.808543 containerd[1590]: 2025-09-05 06:25:22.795 [INFO][4336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c6ea307da5 ContainerID="96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" Namespace="calico-system" Pod="calico-kube-controllers-766c899479-rdpbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0"
Sep 5 06:25:22.808543 containerd[1590]: 2025-09-05 06:25:22.798 [INFO][4336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" Namespace="calico-system" Pod="calico-kube-controllers-766c899479-rdpbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0"
Sep 5 06:25:22.808620 containerd[1590]: 2025-09-05 06:25:22.798 [INFO][4336] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" Namespace="calico-system" Pod="calico-kube-controllers-766c899479-rdpbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0", GenerateName:"calico-kube-controllers-766c899479-", Namespace:"calico-system", SelfLink:"", UID:"d313d40b-c6da-43b3-8554-8b5d19920e5a", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"766c899479", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020", Pod:"calico-kube-controllers-766c899479-rdpbp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c6ea307da5", MAC:"86:8f:80:97:56:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 06:25:22.808669 containerd[1590]: 2025-09-05 06:25:22.805 [INFO][4336] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" Namespace="calico-system" Pod="calico-kube-controllers-766c899479-rdpbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c899479--rdpbp-eth0"
Sep 5 06:25:22.852126 containerd[1590]: time="2025-09-05T06:25:22.852040474Z" level=info msg="connecting to shim 96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020" address="unix:///run/containerd/s/3246afb8d304bec84e0fffd30f8b3268e426d2ea8371a4a14ed5a65f5ac485fb" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:25:22.886672 systemd[1]: Started cri-containerd-96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020.scope - libcontainer container 96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020.
Sep 5 06:25:22.887512 systemd-networkd[1499]: cali6ae076cc212: Link UP
Sep 5 06:25:22.889769 systemd-networkd[1499]: cali6ae076cc212: Gained carrier
Sep 5 06:25:22.909337 containerd[1590]: 2025-09-05 06:25:22.709 [INFO][4330] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0 coredns-668d6bf9bc- kube-system 371ec781-2b0f-45aa-9541-2c1f320435dc 816 0 2025-09-05 06:24:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-sgvgl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6ae076cc212 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-sgvgl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sgvgl-"
Sep 5 06:25:22.909337 containerd[1590]: 2025-09-05 06:25:22.709 [INFO][4330] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-sgvgl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0"
Sep 5 06:25:22.909337 containerd[1590]: 2025-09-05 06:25:22.743 [INFO][4365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" HandleID="k8s-pod-network.2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" Workload="localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0"
Sep 5 06:25:22.909692 containerd[1590]: 2025-09-05 06:25:22.743 [INFO][4365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" HandleID="k8s-pod-network.2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" Workload="localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000503ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-sgvgl", "timestamp":"2025-09-05 06:25:22.74341601 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 06:25:22.909692 containerd[1590]: 2025-09-05 06:25:22.743 [INFO][4365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 06:25:22.909692 containerd[1590]: 2025-09-05 06:25:22.792 [INFO][4365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 06:25:22.909692 containerd[1590]: 2025-09-05 06:25:22.792 [INFO][4365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 06:25:22.909692 containerd[1590]: 2025-09-05 06:25:22.846 [INFO][4365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" host="localhost"
Sep 5 06:25:22.909692 containerd[1590]: 2025-09-05 06:25:22.852 [INFO][4365] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 06:25:22.909692 containerd[1590]: 2025-09-05 06:25:22.859 [INFO][4365] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 06:25:22.909692 containerd[1590]: 2025-09-05 06:25:22.863 [INFO][4365] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 06:25:22.909692 containerd[1590]: 2025-09-05 06:25:22.866 [INFO][4365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 06:25:22.909692 containerd[1590]: 2025-09-05 06:25:22.866 [INFO][4365] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" host="localhost"
Sep 5 06:25:22.910022 containerd[1590]: 2025-09-05 06:25:22.867 [INFO][4365] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1
Sep 5 06:25:22.910022 containerd[1590]: 2025-09-05 06:25:22.872 [INFO][4365] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" host="localhost"
Sep 5 06:25:22.910022 containerd[1590]: 2025-09-05 06:25:22.879 [INFO][4365] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" host="localhost"
Sep 5 06:25:22.910022 containerd[1590]: 2025-09-05 06:25:22.879 [INFO][4365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" host="localhost"
Sep 5 06:25:22.910022 containerd[1590]: 2025-09-05 06:25:22.879 [INFO][4365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 06:25:22.910022 containerd[1590]: 2025-09-05 06:25:22.879 [INFO][4365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" HandleID="k8s-pod-network.2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" Workload="localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0"
Sep 5 06:25:22.910140 containerd[1590]: 2025-09-05 06:25:22.883 [INFO][4330] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-sgvgl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"371ec781-2b0f-45aa-9541-2c1f320435dc", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-sgvgl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ae076cc212", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 06:25:22.910219 containerd[1590]: 2025-09-05 06:25:22.883 [INFO][4330] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-sgvgl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0"
Sep 5 06:25:22.910219 containerd[1590]: 2025-09-05 06:25:22.883 [INFO][4330] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ae076cc212 ContainerID="2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-sgvgl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0"
Sep 5 06:25:22.910219 containerd[1590]: 2025-09-05 06:25:22.888 [INFO][4330] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-sgvgl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0"
Sep 5 06:25:22.910291 containerd[1590]: 2025-09-05 06:25:22.888 [INFO][4330] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-sgvgl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"371ec781-2b0f-45aa-9541-2c1f320435dc", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1", Pod:"coredns-668d6bf9bc-sgvgl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ae076cc212", MAC:"12:02:a2:30:6d:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 06:25:22.910291 containerd[1590]: 2025-09-05 06:25:22.898 [INFO][4330] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-sgvgl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sgvgl-eth0"
Sep 5 06:25:22.912666 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 06:25:22.942574 containerd[1590]: time="2025-09-05T06:25:22.942319599Z" level=info msg="connecting to shim 2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1" address="unix:///run/containerd/s/a8aa8dae797bdcd398a71d62972b170e8e945c08d6c559e7bb6d4676a5cd3f9b" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:25:22.959839 containerd[1590]: time="2025-09-05T06:25:22.959805261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766c899479-rdpbp,Uid:d313d40b-c6da-43b3-8554-8b5d19920e5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020\""
Sep 5 06:25:22.978672 systemd[1]: Started cri-containerd-2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1.scope - libcontainer container 2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1.
Sep 5 06:25:22.989942 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 06:25:23.031055 containerd[1590]: time="2025-09-05T06:25:23.031014481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sgvgl,Uid:371ec781-2b0f-45aa-9541-2c1f320435dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1\""
Sep 5 06:25:23.032115 kubelet[2720]: E0905 06:25:23.032067 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:25:23.034886 containerd[1590]: time="2025-09-05T06:25:23.034840732Z" level=info msg="CreateContainer within sandbox \"2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 5 06:25:23.048045 containerd[1590]: time="2025-09-05T06:25:23.047997330Z" level=info msg="Container 4b3e8cd6c162a5799d3a9ba061093704c5e05105e66ac965f8e2fcd8803bb13a: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:25:23.060299 containerd[1590]: time="2025-09-05T06:25:23.060267923Z" level=info msg="CreateContainer within sandbox \"2863b9f596b65dcbc4bac664273d349af132a51c6669f9f94e12f10efe37b4a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b3e8cd6c162a5799d3a9ba061093704c5e05105e66ac965f8e2fcd8803bb13a\""
Sep 5 06:25:23.061328 containerd[1590]: time="2025-09-05T06:25:23.061236572Z" level=info msg="StartContainer for \"4b3e8cd6c162a5799d3a9ba061093704c5e05105e66ac965f8e2fcd8803bb13a\""
Sep 5 06:25:23.062286 containerd[1590]: time="2025-09-05T06:25:23.062255952Z" level=info msg="connecting to shim 4b3e8cd6c162a5799d3a9ba061093704c5e05105e66ac965f8e2fcd8803bb13a" address="unix:///run/containerd/s/a8aa8dae797bdcd398a71d62972b170e8e945c08d6c559e7bb6d4676a5cd3f9b" protocol=ttrpc version=3
Sep 5 06:25:23.085666 systemd[1]: Started cri-containerd-4b3e8cd6c162a5799d3a9ba061093704c5e05105e66ac965f8e2fcd8803bb13a.scope - libcontainer container 4b3e8cd6c162a5799d3a9ba061093704c5e05105e66ac965f8e2fcd8803bb13a.
Sep 5 06:25:23.118582 containerd[1590]: time="2025-09-05T06:25:23.118503148Z" level=info msg="StartContainer for \"4b3e8cd6c162a5799d3a9ba061093704c5e05105e66ac965f8e2fcd8803bb13a\" returns successfully"
Sep 5 06:25:23.667460 containerd[1590]: time="2025-09-05T06:25:23.667207781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c99b967c-5jd7t,Uid:283ad304-6ada-44a3-8506-2d3a232ec8bc,Namespace:calico-apiserver,Attempt:0,}"
Sep 5 06:25:23.667460 containerd[1590]: time="2025-09-05T06:25:23.667288010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c99b967c-9pckk,Uid:37e7b780-cbc8-47a6-ab71-691c12c694ee,Namespace:calico-apiserver,Attempt:0,}"
Sep 5 06:25:23.775527 kubelet[2720]: E0905 06:25:23.775496 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:25:23.785791 kubelet[2720]: I0905 06:25:23.785660 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sgvgl" podStartSLOduration=39.785644818 podStartE2EDuration="39.785644818s" podCreationTimestamp="2025-09-05 06:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:25:23.78501512 +0000 UTC m=+45.200008425" watchObservedRunningTime="2025-09-05 06:25:23.785644818 +0000 UTC m=+45.200638133"
Sep 5 06:25:23.845824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999448373.mount: Deactivated successfully.
Sep 5 06:25:23.999741 systemd-networkd[1499]: cali6ae076cc212: Gained IPv6LL
Sep 5 06:25:24.027695 systemd-networkd[1499]: cali9879b57b394: Link UP
Sep 5 06:25:24.028991 systemd-networkd[1499]: cali9879b57b394: Gained carrier
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:23.961 [INFO][4526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0 calico-apiserver-9c99b967c- calico-apiserver 283ad304-6ada-44a3-8506-2d3a232ec8bc 825 0 2025-09-05 06:24:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9c99b967c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9c99b967c-5jd7t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9879b57b394 [] [] }} ContainerID="40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-5jd7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--5jd7t-"
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:23.961 [INFO][4526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-5jd7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0"
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:23.987 [INFO][4563] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" HandleID="k8s-pod-network.40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" Workload="localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0"
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:23.987 [INFO][4563] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" HandleID="k8s-pod-network.40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" Workload="localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019e840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9c99b967c-5jd7t", "timestamp":"2025-09-05 06:25:23.987096947 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:23.987 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:23.987 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:23.987 [INFO][4563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:23.996 [INFO][4563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" host="localhost"
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:23.999 [INFO][4563] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:24.003 [INFO][4563] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:24.004 [INFO][4563] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:24.006 [INFO][4563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:24.006 [INFO][4563] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" host="localhost"
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:24.008 [INFO][4563] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:24.013 [INFO][4563] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" host="localhost"
Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:24.018 [INFO][4563] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26
handle="k8s-pod-network.40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" host="localhost" Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:24.018 [INFO][4563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" host="localhost" Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:24.018 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:25:24.040325 containerd[1590]: 2025-09-05 06:25:24.018 [INFO][4563] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" HandleID="k8s-pod-network.40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" Workload="localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0" Sep 5 06:25:24.040955 containerd[1590]: 2025-09-05 06:25:24.021 [INFO][4526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-5jd7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0", GenerateName:"calico-apiserver-9c99b967c-", Namespace:"calico-apiserver", SelfLink:"", UID:"283ad304-6ada-44a3-8506-2d3a232ec8bc", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c99b967c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9c99b967c-5jd7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9879b57b394", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:25:24.040955 containerd[1590]: 2025-09-05 06:25:24.021 [INFO][4526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-5jd7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0" Sep 5 06:25:24.040955 containerd[1590]: 2025-09-05 06:25:24.021 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9879b57b394 ContainerID="40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-5jd7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0" Sep 5 06:25:24.040955 containerd[1590]: 2025-09-05 06:25:24.029 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-5jd7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0" Sep 5 06:25:24.040955 containerd[1590]: 2025-09-05 06:25:24.029 [INFO][4526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-5jd7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0", GenerateName:"calico-apiserver-9c99b967c-", Namespace:"calico-apiserver", SelfLink:"", UID:"283ad304-6ada-44a3-8506-2d3a232ec8bc", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c99b967c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10", Pod:"calico-apiserver-9c99b967c-5jd7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9879b57b394", MAC:"a2:14:7a:6d:49:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:25:24.040955 containerd[1590]: 2025-09-05 06:25:24.037 [INFO][4526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-5jd7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--5jd7t-eth0" Sep 5 06:25:24.061929 containerd[1590]: time="2025-09-05T06:25:24.061882522Z" level=info msg="connecting to shim 40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10" address="unix:///run/containerd/s/7fc36ba20de483c9ee91d5c17144d5e2c982efacaa9ebd292456cae8a21f8714" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:25:24.100797 systemd[1]: Started cri-containerd-40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10.scope - libcontainer container 40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10. Sep 5 06:25:24.125623 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:25:24.134961 systemd-networkd[1499]: cali40da63bb1af: Link UP Sep 5 06:25:24.136387 systemd-networkd[1499]: cali40da63bb1af: Gained carrier Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:23.959 [INFO][4533] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0 calico-apiserver-9c99b967c- calico-apiserver 37e7b780-cbc8-47a6-ab71-691c12c694ee 827 0 2025-09-05 06:24:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9c99b967c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9c99b967c-9pckk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali40da63bb1af [] [] }} ContainerID="fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-9pckk" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--9pckk-" Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:23.959 [INFO][4533] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-9pckk" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0" Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:23.995 [INFO][4561] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" HandleID="k8s-pod-network.fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" Workload="localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0" Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:23.996 [INFO][4561] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" HandleID="k8s-pod-network.fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" Workload="localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9c99b967c-9pckk", "timestamp":"2025-09-05 06:25:23.995818839 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:23.996 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.019 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.019 [INFO][4561] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.097 [INFO][4561] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" host="localhost" Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.101 [INFO][4561] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.105 [INFO][4561] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.107 [INFO][4561] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.108 [INFO][4561] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.108 [INFO][4561] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" host="localhost" Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.110 [INFO][4561] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.116 [INFO][4561] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" host="localhost" Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.123 [INFO][4561] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" host="localhost" Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.123 [INFO][4561] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" host="localhost" Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.123 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:25:24.156592 containerd[1590]: 2025-09-05 06:25:24.123 [INFO][4561] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" HandleID="k8s-pod-network.fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" Workload="localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0" Sep 5 06:25:24.157636 containerd[1590]: 2025-09-05 06:25:24.131 [INFO][4533] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-9pckk" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0", GenerateName:"calico-apiserver-9c99b967c-", Namespace:"calico-apiserver", SelfLink:"", UID:"37e7b780-cbc8-47a6-ab71-691c12c694ee", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c99b967c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9c99b967c-9pckk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40da63bb1af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:25:24.157636 containerd[1590]: 2025-09-05 06:25:24.131 [INFO][4533] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-9pckk" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0" Sep 5 06:25:24.157636 containerd[1590]: 2025-09-05 06:25:24.131 [INFO][4533] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40da63bb1af ContainerID="fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-9pckk" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0" Sep 5 06:25:24.157636 containerd[1590]: 2025-09-05 06:25:24.137 [INFO][4533] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-9pckk" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0" Sep 5 06:25:24.157636 containerd[1590]: 2025-09-05 06:25:24.137 [INFO][4533] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-9pckk" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0", GenerateName:"calico-apiserver-9c99b967c-", Namespace:"calico-apiserver", SelfLink:"", UID:"37e7b780-cbc8-47a6-ab71-691c12c694ee", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c99b967c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc", Pod:"calico-apiserver-9c99b967c-9pckk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40da63bb1af", MAC:"f2:10:27:78:fa:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:25:24.157636 containerd[1590]: 2025-09-05 06:25:24.148 [INFO][4533] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" Namespace="calico-apiserver" Pod="calico-apiserver-9c99b967c-9pckk" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c99b967c--9pckk-eth0" Sep 5 06:25:24.160913 containerd[1590]: time="2025-09-05T06:25:24.160863423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c99b967c-5jd7t,Uid:283ad304-6ada-44a3-8506-2d3a232ec8bc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10\"" Sep 5 06:25:24.182053 containerd[1590]: time="2025-09-05T06:25:24.182008807Z" level=info msg="connecting to shim fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc" address="unix:///run/containerd/s/804f05ff80a33bd7f59a35e7b51cc653efa4dfd0e251427a533295a7777848e5" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:25:24.205764 systemd[1]: Started cri-containerd-fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc.scope - libcontainer container fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc. 
Sep 5 06:25:24.225243 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:25:24.262087 containerd[1590]: time="2025-09-05T06:25:24.262002083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c99b967c-9pckk,Uid:37e7b780-cbc8-47a6-ab71-691c12c694ee,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc\"" Sep 5 06:25:24.317717 systemd-networkd[1499]: cali0c6ea307da5: Gained IPv6LL Sep 5 06:25:24.665706 kubelet[2720]: E0905 06:25:24.665609 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:25:24.666159 containerd[1590]: time="2025-09-05T06:25:24.666118529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wm52w,Uid:d7097e42-b7ac-47fd-82a8-b7a571e5a93b,Namespace:calico-system,Attempt:0,}" Sep 5 06:25:24.666282 containerd[1590]: time="2025-09-05T06:25:24.666207314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-p7g92,Uid:8219e107-b937-4e6b-b7b5-7db6885eb957,Namespace:calico-system,Attempt:0,}" Sep 5 06:25:24.666679 containerd[1590]: time="2025-09-05T06:25:24.666641283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d58vl,Uid:8866500a-5b0b-49df-9d08-3732719c5d62,Namespace:kube-system,Attempt:0,}" Sep 5 06:25:24.779864 kubelet[2720]: E0905 06:25:24.779820 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:25:24.853160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073824955.mount: Deactivated successfully. 
Sep 5 06:25:25.213761 systemd-networkd[1499]: cali40da63bb1af: Gained IPv6LL Sep 5 06:25:25.236617 systemd-networkd[1499]: cali93838c3e9fd: Link UP Sep 5 06:25:25.237502 systemd-networkd[1499]: cali93838c3e9fd: Gained carrier Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.824 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wm52w-eth0 csi-node-driver- calico-system d7097e42-b7ac-47fd-82a8-b7a571e5a93b 698 0 2025-09-05 06:24:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wm52w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali93838c3e9fd [] [] }} ContainerID="d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" Namespace="calico-system" Pod="csi-node-driver-wm52w" WorkloadEndpoint="localhost-k8s-csi--node--driver--wm52w-" Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.824 [INFO][4690] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" Namespace="calico-system" Pod="csi-node-driver-wm52w" WorkloadEndpoint="localhost-k8s-csi--node--driver--wm52w-eth0" Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.864 [INFO][4737] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" HandleID="k8s-pod-network.d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" Workload="localhost-k8s-csi--node--driver--wm52w-eth0" Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.864 [INFO][4737] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" HandleID="k8s-pod-network.d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" Workload="localhost-k8s-csi--node--driver--wm52w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d8fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wm52w", "timestamp":"2025-09-05 06:25:24.864365876 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.864 [INFO][4737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.864 [INFO][4737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.864 [INFO][4737] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.884 [INFO][4737] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" host="localhost" Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.891 [INFO][4737] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.901 [INFO][4737] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.903 [INFO][4737] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.906 [INFO][4737] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.906 [INFO][4737] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" host="localhost" Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.910 [INFO][4737] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:24.916 [INFO][4737] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" host="localhost" Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:25.229 [INFO][4737] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" host="localhost" Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:25.229 [INFO][4737] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" host="localhost" Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:25.229 [INFO][4737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 06:25:25.424219 containerd[1590]: 2025-09-05 06:25:25.229 [INFO][4737] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" HandleID="k8s-pod-network.d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" Workload="localhost-k8s-csi--node--driver--wm52w-eth0" Sep 5 06:25:25.426060 containerd[1590]: 2025-09-05 06:25:25.233 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" Namespace="calico-system" Pod="csi-node-driver-wm52w" WorkloadEndpoint="localhost-k8s-csi--node--driver--wm52w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wm52w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7097e42-b7ac-47fd-82a8-b7a571e5a93b", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wm52w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93838c3e9fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:25:25.426060 containerd[1590]: 2025-09-05 06:25:25.233 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" Namespace="calico-system" Pod="csi-node-driver-wm52w" WorkloadEndpoint="localhost-k8s-csi--node--driver--wm52w-eth0" Sep 5 06:25:25.426060 containerd[1590]: 2025-09-05 06:25:25.233 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93838c3e9fd ContainerID="d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" Namespace="calico-system" Pod="csi-node-driver-wm52w" WorkloadEndpoint="localhost-k8s-csi--node--driver--wm52w-eth0" Sep 5 06:25:25.426060 containerd[1590]: 2025-09-05 06:25:25.237 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" Namespace="calico-system" Pod="csi-node-driver-wm52w" WorkloadEndpoint="localhost-k8s-csi--node--driver--wm52w-eth0" Sep 5 06:25:25.426060 containerd[1590]: 2025-09-05 06:25:25.238 [INFO][4690] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" Namespace="calico-system" Pod="csi-node-driver-wm52w" WorkloadEndpoint="localhost-k8s-csi--node--driver--wm52w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wm52w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7097e42-b7ac-47fd-82a8-b7a571e5a93b", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 55, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f", Pod:"csi-node-driver-wm52w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93838c3e9fd", MAC:"ca:1f:9b:3e:94:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:25:25.426060 containerd[1590]: 2025-09-05 06:25:25.420 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" Namespace="calico-system" Pod="csi-node-driver-wm52w" WorkloadEndpoint="localhost-k8s-csi--node--driver--wm52w-eth0" Sep 5 06:25:25.428561 containerd[1590]: time="2025-09-05T06:25:25.427523578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 5 06:25:25.432299 containerd[1590]: time="2025-09-05T06:25:25.432266834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:25.434013 containerd[1590]: time="2025-09-05T06:25:25.433947384Z" 
level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:25.434865 containerd[1590]: time="2025-09-05T06:25:25.434809169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:25.435349 containerd[1590]: time="2025-09-05T06:25:25.435317082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.594013884s" Sep 5 06:25:25.435402 containerd[1590]: time="2025-09-05T06:25:25.435350959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 5 06:25:25.436981 containerd[1590]: time="2025-09-05T06:25:25.436940842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 06:25:25.439154 containerd[1590]: time="2025-09-05T06:25:25.438721802Z" level=info msg="CreateContainer within sandbox \"6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 06:25:25.463277 containerd[1590]: time="2025-09-05T06:25:25.463214613Z" level=info msg="connecting to shim d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f" address="unix:///run/containerd/s/e37519f4ec3445f219a35d74c796d1524b066f39da1086cb33135101d66cd2d4" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:25:25.463435 systemd-networkd[1499]: caliae4bd43eb71: 
Link UP Sep 5 06:25:25.464966 systemd-networkd[1499]: caliae4bd43eb71: Gained carrier Sep 5 06:25:25.465863 containerd[1590]: time="2025-09-05T06:25:25.465770384Z" level=info msg="Container b26afabd5e6af2937cd9174f29394776a660869d2343dc6907a466f9edc4482b: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:25:25.480433 containerd[1590]: time="2025-09-05T06:25:25.480223231Z" level=info msg="CreateContainer within sandbox \"6c9fd9032e84c29ac3a1b7c4d09cc7ca1b51f53c00d5395663c77f544a04b97c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b26afabd5e6af2937cd9174f29394776a660869d2343dc6907a466f9edc4482b\"" Sep 5 06:25:25.481026 containerd[1590]: time="2025-09-05T06:25:25.481003163Z" level=info msg="StartContainer for \"b26afabd5e6af2937cd9174f29394776a660869d2343dc6907a466f9edc4482b\"" Sep 5 06:25:25.482308 containerd[1590]: time="2025-09-05T06:25:25.482221811Z" level=info msg="connecting to shim b26afabd5e6af2937cd9174f29394776a660869d2343dc6907a466f9edc4482b" address="unix:///run/containerd/s/4f74a33369a9026c32a0216e4d827afddec46f451aa51194e534fffceadd3c20" protocol=ttrpc version=3 Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:24.828 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--d58vl-eth0 coredns-668d6bf9bc- kube-system 8866500a-5b0b-49df-9d08-3732719c5d62 826 0 2025-09-05 06:24:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-d58vl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliae4bd43eb71 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d58vl" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d58vl-" Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:24.831 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d58vl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d58vl-eth0" Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:24.917 [INFO][4752] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" HandleID="k8s-pod-network.3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" Workload="localhost-k8s-coredns--668d6bf9bc--d58vl-eth0" Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:24.919 [INFO][4752] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" HandleID="k8s-pod-network.3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" Workload="localhost-k8s-coredns--668d6bf9bc--d58vl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9890), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-d58vl", "timestamp":"2025-09-05 06:25:24.91717098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:24.919 [INFO][4752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.229 [INFO][4752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.229 [INFO][4752] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.237 [INFO][4752] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" host="localhost" Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.423 [INFO][4752] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.430 [INFO][4752] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.433 [INFO][4752] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.437 [INFO][4752] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.437 [INFO][4752] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" host="localhost" Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.439 [INFO][4752] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.442 [INFO][4752] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" host="localhost" Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.448 [INFO][4752] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" host="localhost" Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.448 [INFO][4752] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" host="localhost" Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.448 [INFO][4752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:25:25.484417 containerd[1590]: 2025-09-05 06:25:25.448 [INFO][4752] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" HandleID="k8s-pod-network.3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" Workload="localhost-k8s-coredns--668d6bf9bc--d58vl-eth0" Sep 5 06:25:25.484971 containerd[1590]: 2025-09-05 06:25:25.457 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d58vl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d58vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d58vl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8866500a-5b0b-49df-9d08-3732719c5d62", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-d58vl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae4bd43eb71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:25:25.484971 containerd[1590]: 2025-09-05 06:25:25.457 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d58vl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d58vl-eth0" Sep 5 06:25:25.484971 containerd[1590]: 2025-09-05 06:25:25.457 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae4bd43eb71 ContainerID="3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d58vl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d58vl-eth0" Sep 5 06:25:25.484971 containerd[1590]: 2025-09-05 06:25:25.466 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d58vl" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d58vl-eth0" Sep 5 06:25:25.484971 containerd[1590]: 2025-09-05 06:25:25.467 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d58vl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d58vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d58vl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8866500a-5b0b-49df-9d08-3732719c5d62", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd", Pod:"coredns-668d6bf9bc-d58vl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae4bd43eb71", MAC:"9a:5b:1b:c4:ae:97", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:25:25.484971 containerd[1590]: 2025-09-05 06:25:25.479 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d58vl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d58vl-eth0" Sep 5 06:25:25.495678 systemd[1]: Started cri-containerd-d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f.scope - libcontainer container d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f. Sep 5 06:25:25.499598 systemd[1]: Started cri-containerd-b26afabd5e6af2937cd9174f29394776a660869d2343dc6907a466f9edc4482b.scope - libcontainer container b26afabd5e6af2937cd9174f29394776a660869d2343dc6907a466f9edc4482b. 
Sep 5 06:25:25.512057 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:25:25.514206 containerd[1590]: time="2025-09-05T06:25:25.514169938Z" level=info msg="connecting to shim 3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd" address="unix:///run/containerd/s/05bc35234097c8060e8c3a695f4237b5f60f8cf5b0ea122cc677712a0bd27646" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:25:25.542248 containerd[1590]: time="2025-09-05T06:25:25.540847317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wm52w,Uid:d7097e42-b7ac-47fd-82a8-b7a571e5a93b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f\"" Sep 5 06:25:25.543791 systemd[1]: Started cri-containerd-3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd.scope - libcontainer container 3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd. 
Sep 5 06:25:25.559350 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:25:25.576721 systemd-networkd[1499]: calidf5b5f4c39a: Link UP Sep 5 06:25:25.577700 systemd-networkd[1499]: calidf5b5f4c39a: Gained carrier Sep 5 06:25:25.585311 containerd[1590]: time="2025-09-05T06:25:25.585276413Z" level=info msg="StartContainer for \"b26afabd5e6af2937cd9174f29394776a660869d2343dc6907a466f9edc4482b\" returns successfully" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:24.831 [INFO][4696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--p7g92-eth0 goldmane-54d579b49d- calico-system 8219e107-b937-4e6b-b7b5-7db6885eb957 824 0 2025-09-05 06:24:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-p7g92 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidf5b5f4c39a [] [] }} ContainerID="c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" Namespace="calico-system" Pod="goldmane-54d579b49d-p7g92" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--p7g92-" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:24.831 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" Namespace="calico-system" Pod="goldmane-54d579b49d-p7g92" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--p7g92-eth0" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:24.924 [INFO][4744] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" 
HandleID="k8s-pod-network.c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" Workload="localhost-k8s-goldmane--54d579b49d--p7g92-eth0" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:24.924 [INFO][4744] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" HandleID="k8s-pod-network.c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" Workload="localhost-k8s-goldmane--54d579b49d--p7g92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b07b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-p7g92", "timestamp":"2025-09-05 06:25:24.924153187 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:24.924 [INFO][4744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.449 [INFO][4744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.449 [INFO][4744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.455 [INFO][4744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" host="localhost" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.525 [INFO][4744] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.531 [INFO][4744] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.533 [INFO][4744] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.535 [INFO][4744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.535 [INFO][4744] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" host="localhost" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.539 [INFO][4744] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5 Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.545 [INFO][4744] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" host="localhost" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.557 [INFO][4744] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" host="localhost" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.557 [INFO][4744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" host="localhost" Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.557 [INFO][4744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:25:25.598194 containerd[1590]: 2025-09-05 06:25:25.557 [INFO][4744] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" HandleID="k8s-pod-network.c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" Workload="localhost-k8s-goldmane--54d579b49d--p7g92-eth0" Sep 5 06:25:25.598759 containerd[1590]: 2025-09-05 06:25:25.573 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" Namespace="calico-system" Pod="goldmane-54d579b49d-p7g92" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--p7g92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--p7g92-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"8219e107-b937-4e6b-b7b5-7db6885eb957", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-p7g92", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidf5b5f4c39a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:25:25.598759 containerd[1590]: 2025-09-05 06:25:25.573 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" Namespace="calico-system" Pod="goldmane-54d579b49d-p7g92" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--p7g92-eth0" Sep 5 06:25:25.598759 containerd[1590]: 2025-09-05 06:25:25.573 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf5b5f4c39a ContainerID="c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" Namespace="calico-system" Pod="goldmane-54d579b49d-p7g92" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--p7g92-eth0" Sep 5 06:25:25.598759 containerd[1590]: 2025-09-05 06:25:25.579 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" Namespace="calico-system" Pod="goldmane-54d579b49d-p7g92" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--p7g92-eth0" Sep 5 06:25:25.598759 containerd[1590]: 2025-09-05 06:25:25.580 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" Namespace="calico-system" Pod="goldmane-54d579b49d-p7g92" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--p7g92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--p7g92-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"8219e107-b937-4e6b-b7b5-7db6885eb957", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5", Pod:"goldmane-54d579b49d-p7g92", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidf5b5f4c39a", MAC:"be:37:91:48:4f:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:25:25.598759 containerd[1590]: 2025-09-05 06:25:25.595 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" Namespace="calico-system" Pod="goldmane-54d579b49d-p7g92" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--p7g92-eth0" Sep 5 06:25:25.603318 containerd[1590]: time="2025-09-05T06:25:25.603249488Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-d58vl,Uid:8866500a-5b0b-49df-9d08-3732719c5d62,Namespace:kube-system,Attempt:0,} returns sandbox id \"3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd\"" Sep 5 06:25:25.606547 kubelet[2720]: E0905 06:25:25.605502 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:25:25.611656 containerd[1590]: time="2025-09-05T06:25:25.611224300Z" level=info msg="CreateContainer within sandbox \"3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 06:25:25.786510 kubelet[2720]: E0905 06:25:25.786409 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:25:25.917689 systemd-networkd[1499]: cali9879b57b394: Gained IPv6LL Sep 5 06:25:25.930140 kubelet[2720]: I0905 06:25:25.930051 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6dd68d8cdd-ggc7m" podStartSLOduration=2.654711774 podStartE2EDuration="7.930036034s" podCreationTimestamp="2025-09-05 06:25:18 +0000 UTC" firstStartedPulling="2025-09-05 06:25:20.161321498 +0000 UTC m=+41.576314813" lastFinishedPulling="2025-09-05 06:25:25.436645758 +0000 UTC m=+46.851639073" observedRunningTime="2025-09-05 06:25:25.929971326 +0000 UTC m=+47.344964651" watchObservedRunningTime="2025-09-05 06:25:25.930036034 +0000 UTC m=+47.345029339" Sep 5 06:25:25.946360 containerd[1590]: time="2025-09-05T06:25:25.946300120Z" level=info msg="Container 691087195054e4b14a4780a2f71d5e5e6523ed8e578199e6c4c9ce968549b195: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:25:25.957010 containerd[1590]: time="2025-09-05T06:25:25.956384482Z" level=info msg="connecting to shim 
c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5" address="unix:///run/containerd/s/e7a3457bd9834acb31f769ecc550dcbf6fc7203cb839c27f018fa350a565f372" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:25:25.958055 containerd[1590]: time="2025-09-05T06:25:25.958007430Z" level=info msg="CreateContainer within sandbox \"3641d79365d7bc3166d36d58a075a0d8fdd98a00871f2c96ac0cb3970adbcefd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"691087195054e4b14a4780a2f71d5e5e6523ed8e578199e6c4c9ce968549b195\"" Sep 5 06:25:25.960489 containerd[1590]: time="2025-09-05T06:25:25.960343057Z" level=info msg="StartContainer for \"691087195054e4b14a4780a2f71d5e5e6523ed8e578199e6c4c9ce968549b195\"" Sep 5 06:25:25.961302 containerd[1590]: time="2025-09-05T06:25:25.961272394Z" level=info msg="connecting to shim 691087195054e4b14a4780a2f71d5e5e6523ed8e578199e6c4c9ce968549b195" address="unix:///run/containerd/s/05bc35234097c8060e8c3a695f4237b5f60f8cf5b0ea122cc677712a0bd27646" protocol=ttrpc version=3 Sep 5 06:25:25.996769 systemd[1]: Started cri-containerd-691087195054e4b14a4780a2f71d5e5e6523ed8e578199e6c4c9ce968549b195.scope - libcontainer container 691087195054e4b14a4780a2f71d5e5e6523ed8e578199e6c4c9ce968549b195. Sep 5 06:25:25.998606 systemd[1]: Started cri-containerd-c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5.scope - libcontainer container c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5. 
Sep 5 06:25:26.020553 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:25:26.039581 containerd[1590]: time="2025-09-05T06:25:26.039448208Z" level=info msg="StartContainer for \"691087195054e4b14a4780a2f71d5e5e6523ed8e578199e6c4c9ce968549b195\" returns successfully" Sep 5 06:25:26.052323 containerd[1590]: time="2025-09-05T06:25:26.052267709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-p7g92,Uid:8219e107-b937-4e6b-b7b5-7db6885eb957,Namespace:calico-system,Attempt:0,} returns sandbox id \"c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5\"" Sep 5 06:25:26.321261 systemd[1]: Started sshd@9-10.0.0.4:22-10.0.0.1:58382.service - OpenSSH per-connection server daemon (10.0.0.1:58382). Sep 5 06:25:26.394515 sshd[5004]: Accepted publickey for core from 10.0.0.1 port 58382 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M Sep 5 06:25:26.396440 sshd-session[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:25:26.400728 systemd-logind[1573]: New session 10 of user core. Sep 5 06:25:26.411676 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 06:25:26.559320 sshd[5007]: Connection closed by 10.0.0.1 port 58382 Sep 5 06:25:26.559703 sshd-session[5004]: pam_unix(sshd:session): session closed for user core Sep 5 06:25:26.569272 systemd[1]: sshd@9-10.0.0.4:22-10.0.0.1:58382.service: Deactivated successfully. Sep 5 06:25:26.571222 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 06:25:26.571967 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. Sep 5 06:25:26.574882 systemd[1]: Started sshd@10-10.0.0.4:22-10.0.0.1:58384.service - OpenSSH per-connection server daemon (10.0.0.1:58384). Sep 5 06:25:26.575478 systemd-logind[1573]: Removed session 10. 
Sep 5 06:25:26.621391 sshd[5021]: Accepted publickey for core from 10.0.0.1 port 58384 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M Sep 5 06:25:26.621736 systemd-networkd[1499]: cali93838c3e9fd: Gained IPv6LL Sep 5 06:25:26.623245 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:25:26.627320 systemd-logind[1573]: New session 11 of user core. Sep 5 06:25:26.636655 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 06:25:26.790137 kubelet[2720]: E0905 06:25:26.790102 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:25:26.826873 kubelet[2720]: I0905 06:25:26.826693 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d58vl" podStartSLOduration=42.826673318 podStartE2EDuration="42.826673318s" podCreationTimestamp="2025-09-05 06:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:25:26.82558042 +0000 UTC m=+48.240573735" watchObservedRunningTime="2025-09-05 06:25:26.826673318 +0000 UTC m=+48.241666633" Sep 5 06:25:26.840303 sshd[5024]: Connection closed by 10.0.0.1 port 58384 Sep 5 06:25:26.839753 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Sep 5 06:25:26.853385 systemd[1]: sshd@10-10.0.0.4:22-10.0.0.1:58384.service: Deactivated successfully. Sep 5 06:25:26.855958 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 06:25:26.857608 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. Sep 5 06:25:26.861419 systemd-logind[1573]: Removed session 11. Sep 5 06:25:26.862998 systemd[1]: Started sshd@11-10.0.0.4:22-10.0.0.1:58390.service - OpenSSH per-connection server daemon (10.0.0.1:58390). 
Sep 5 06:25:26.916389 sshd[5037]: Accepted publickey for core from 10.0.0.1 port 58390 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M Sep 5 06:25:26.918199 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:25:26.922566 systemd-logind[1573]: New session 12 of user core. Sep 5 06:25:26.939645 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 06:25:27.065937 sshd[5040]: Connection closed by 10.0.0.1 port 58390 Sep 5 06:25:27.066271 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Sep 5 06:25:27.070763 systemd[1]: sshd@11-10.0.0.4:22-10.0.0.1:58390.service: Deactivated successfully. Sep 5 06:25:27.072822 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 06:25:27.073789 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. Sep 5 06:25:27.074906 systemd-logind[1573]: Removed session 12. Sep 5 06:25:27.197701 systemd-networkd[1499]: calidf5b5f4c39a: Gained IPv6LL Sep 5 06:25:27.198460 systemd-networkd[1499]: caliae4bd43eb71: Gained IPv6LL Sep 5 06:25:27.792966 kubelet[2720]: E0905 06:25:27.792924 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:25:28.706427 containerd[1590]: time="2025-09-05T06:25:28.706385848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:28.707325 containerd[1590]: time="2025-09-05T06:25:28.707131237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 5 06:25:28.708266 containerd[1590]: time="2025-09-05T06:25:28.708242997Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 5 06:25:28.710658 containerd[1590]: time="2025-09-05T06:25:28.710606975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:28.711201 containerd[1590]: time="2025-09-05T06:25:28.711169173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.274202479s" Sep 5 06:25:28.711245 containerd[1590]: time="2025-09-05T06:25:28.711200475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 5 06:25:28.712087 containerd[1590]: time="2025-09-05T06:25:28.712059968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 06:25:28.719215 containerd[1590]: time="2025-09-05T06:25:28.719185308Z" level=info msg="CreateContainer within sandbox \"96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 06:25:28.727191 containerd[1590]: time="2025-09-05T06:25:28.727159810Z" level=info msg="Container efed62e9d6e5c832ce08610fbe3d56fe1a5817f2ba8257a7713df5044b81b859: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:25:28.734542 containerd[1590]: time="2025-09-05T06:25:28.734491848Z" level=info msg="CreateContainer within sandbox \"96a1ebe0f616362e88b9c587c18768006cf45ca68a2c98dae0a5753613729020\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"efed62e9d6e5c832ce08610fbe3d56fe1a5817f2ba8257a7713df5044b81b859\"" Sep 5 06:25:28.734865 containerd[1590]: time="2025-09-05T06:25:28.734834202Z" level=info msg="StartContainer for \"efed62e9d6e5c832ce08610fbe3d56fe1a5817f2ba8257a7713df5044b81b859\"" Sep 5 06:25:28.735836 containerd[1590]: time="2025-09-05T06:25:28.735794263Z" level=info msg="connecting to shim efed62e9d6e5c832ce08610fbe3d56fe1a5817f2ba8257a7713df5044b81b859" address="unix:///run/containerd/s/3246afb8d304bec84e0fffd30f8b3268e426d2ea8371a4a14ed5a65f5ac485fb" protocol=ttrpc version=3 Sep 5 06:25:28.754683 systemd[1]: Started cri-containerd-efed62e9d6e5c832ce08610fbe3d56fe1a5817f2ba8257a7713df5044b81b859.scope - libcontainer container efed62e9d6e5c832ce08610fbe3d56fe1a5817f2ba8257a7713df5044b81b859. Sep 5 06:25:28.796212 kubelet[2720]: E0905 06:25:28.796181 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 06:25:28.941526 containerd[1590]: time="2025-09-05T06:25:28.941476966Z" level=info msg="StartContainer for \"efed62e9d6e5c832ce08610fbe3d56fe1a5817f2ba8257a7713df5044b81b859\" returns successfully" Sep 5 06:25:29.870698 containerd[1590]: time="2025-09-05T06:25:29.870652386Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efed62e9d6e5c832ce08610fbe3d56fe1a5817f2ba8257a7713df5044b81b859\" id:\"dd7219127d28f33cfed91a1cc51b1a18bb1af19844d935d6a72642f3430739de\" pid:5118 exited_at:{seconds:1757053529 nanos:870346725}" Sep 5 06:25:29.883177 kubelet[2720]: I0905 06:25:29.882782 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-766c899479-rdpbp" podStartSLOduration=29.132151548 podStartE2EDuration="34.882762917s" podCreationTimestamp="2025-09-05 06:24:55 +0000 UTC" firstStartedPulling="2025-09-05 06:25:22.961335437 +0000 UTC m=+44.376328742" lastFinishedPulling="2025-09-05 
06:25:28.711946796 +0000 UTC m=+50.126940111" observedRunningTime="2025-09-05 06:25:29.809822728 +0000 UTC m=+51.224816044" watchObservedRunningTime="2025-09-05 06:25:29.882762917 +0000 UTC m=+51.297756232" Sep 5 06:25:32.077052 systemd[1]: Started sshd@12-10.0.0.4:22-10.0.0.1:32946.service - OpenSSH per-connection server daemon (10.0.0.1:32946). Sep 5 06:25:32.146718 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 32946 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M Sep 5 06:25:32.148621 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:25:32.153185 systemd-logind[1573]: New session 13 of user core. Sep 5 06:25:32.160643 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 06:25:32.287415 containerd[1590]: time="2025-09-05T06:25:32.287109967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:32.287988 containerd[1590]: time="2025-09-05T06:25:32.287959766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 5 06:25:32.288925 containerd[1590]: time="2025-09-05T06:25:32.288906766Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:32.292551 containerd[1590]: time="2025-09-05T06:25:32.291601697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:32.292551 containerd[1590]: time="2025-09-05T06:25:32.292081520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.57991403s" Sep 5 06:25:32.292551 containerd[1590]: time="2025-09-05T06:25:32.292107731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 06:25:32.294097 containerd[1590]: time="2025-09-05T06:25:32.294070526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 06:25:32.295321 containerd[1590]: time="2025-09-05T06:25:32.295279931Z" level=info msg="CreateContainer within sandbox \"40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 06:25:32.306570 containerd[1590]: time="2025-09-05T06:25:32.306511257Z" level=info msg="Container e41c3e507d9b3b82dd58be9ea043f30b2cec185336b6fa4601e31e16143aabfc: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:25:32.318676 containerd[1590]: time="2025-09-05T06:25:32.318527695Z" level=info msg="CreateContainer within sandbox \"40ececb9fb8c20c9cb531100a2aae1111db881a3ab5500c01f29448fd7b25e10\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e41c3e507d9b3b82dd58be9ea043f30b2cec185336b6fa4601e31e16143aabfc\"" Sep 5 06:25:32.319732 containerd[1590]: time="2025-09-05T06:25:32.319703795Z" level=info msg="StartContainer for \"e41c3e507d9b3b82dd58be9ea043f30b2cec185336b6fa4601e31e16143aabfc\"" Sep 5 06:25:32.320729 containerd[1590]: time="2025-09-05T06:25:32.320698920Z" level=info msg="connecting to shim e41c3e507d9b3b82dd58be9ea043f30b2cec185336b6fa4601e31e16143aabfc" address="unix:///run/containerd/s/7fc36ba20de483c9ee91d5c17144d5e2c982efacaa9ebd292456cae8a21f8714" protocol=ttrpc version=3 Sep 5 06:25:32.349667 systemd[1]: Started 
cri-containerd-e41c3e507d9b3b82dd58be9ea043f30b2cec185336b6fa4601e31e16143aabfc.scope - libcontainer container e41c3e507d9b3b82dd58be9ea043f30b2cec185336b6fa4601e31e16143aabfc. Sep 5 06:25:32.350767 sshd[5145]: Connection closed by 10.0.0.1 port 32946 Sep 5 06:25:32.352518 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Sep 5 06:25:32.356024 systemd[1]: sshd@12-10.0.0.4:22-10.0.0.1:32946.service: Deactivated successfully. Sep 5 06:25:32.356245 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. Sep 5 06:25:32.358742 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 06:25:32.361850 systemd-logind[1573]: Removed session 13. Sep 5 06:25:32.396726 containerd[1590]: time="2025-09-05T06:25:32.396686144Z" level=info msg="StartContainer for \"e41c3e507d9b3b82dd58be9ea043f30b2cec185336b6fa4601e31e16143aabfc\" returns successfully" Sep 5 06:25:32.720141 containerd[1590]: time="2025-09-05T06:25:32.720092915Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:32.721299 containerd[1590]: time="2025-09-05T06:25:32.721278885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 06:25:32.723049 containerd[1590]: time="2025-09-05T06:25:32.723009312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 428.910161ms" Sep 5 06:25:32.723049 containerd[1590]: time="2025-09-05T06:25:32.723044281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference 
\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 06:25:32.723813 containerd[1590]: time="2025-09-05T06:25:32.723791098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 06:25:32.725159 containerd[1590]: time="2025-09-05T06:25:32.725131090Z" level=info msg="CreateContainer within sandbox \"fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 06:25:32.733807 containerd[1590]: time="2025-09-05T06:25:32.733765225Z" level=info msg="Container 1856fdc71c963966689de2e606fc1039cbf35d40f7e1f10bfacecf8d8e8e7f78: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:25:32.744996 containerd[1590]: time="2025-09-05T06:25:32.744952775Z" level=info msg="CreateContainer within sandbox \"fc7d8b267d4cb2735e696b7566d4626b78e0247d9ff01c120ef7ebbf5ba8d9cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1856fdc71c963966689de2e606fc1039cbf35d40f7e1f10bfacecf8d8e8e7f78\"" Sep 5 06:25:32.745599 containerd[1590]: time="2025-09-05T06:25:32.745575218Z" level=info msg="StartContainer for \"1856fdc71c963966689de2e606fc1039cbf35d40f7e1f10bfacecf8d8e8e7f78\"" Sep 5 06:25:32.746809 containerd[1590]: time="2025-09-05T06:25:32.746776827Z" level=info msg="connecting to shim 1856fdc71c963966689de2e606fc1039cbf35d40f7e1f10bfacecf8d8e8e7f78" address="unix:///run/containerd/s/804f05ff80a33bd7f59a35e7b51cc653efa4dfd0e251427a533295a7777848e5" protocol=ttrpc version=3 Sep 5 06:25:32.770729 systemd[1]: Started cri-containerd-1856fdc71c963966689de2e606fc1039cbf35d40f7e1f10bfacecf8d8e8e7f78.scope - libcontainer container 1856fdc71c963966689de2e606fc1039cbf35d40f7e1f10bfacecf8d8e8e7f78. 
Sep 5 06:25:32.820466 kubelet[2720]: I0905 06:25:32.819722 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9c99b967c-5jd7t" podStartSLOduration=32.688974238 podStartE2EDuration="40.819700937s" podCreationTimestamp="2025-09-05 06:24:52 +0000 UTC" firstStartedPulling="2025-09-05 06:25:24.162400194 +0000 UTC m=+45.577393499" lastFinishedPulling="2025-09-05 06:25:32.293126883 +0000 UTC m=+53.708120198" observedRunningTime="2025-09-05 06:25:32.819624587 +0000 UTC m=+54.234617902" watchObservedRunningTime="2025-09-05 06:25:32.819700937 +0000 UTC m=+54.234694252" Sep 5 06:25:32.839747 containerd[1590]: time="2025-09-05T06:25:32.839634863Z" level=info msg="StartContainer for \"1856fdc71c963966689de2e606fc1039cbf35d40f7e1f10bfacecf8d8e8e7f78\" returns successfully" Sep 5 06:25:33.816486 kubelet[2720]: I0905 06:25:33.816354 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:25:34.818718 kubelet[2720]: I0905 06:25:34.818683 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:25:35.601338 containerd[1590]: time="2025-09-05T06:25:35.601286139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:35.602034 containerd[1590]: time="2025-09-05T06:25:35.602005019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 5 06:25:35.603225 containerd[1590]: time="2025-09-05T06:25:35.603171124Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:35.626488 containerd[1590]: time="2025-09-05T06:25:35.626461160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:35.626934 containerd[1590]: time="2025-09-05T06:25:35.626899669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.903078503s" Sep 5 06:25:35.626934 containerd[1590]: time="2025-09-05T06:25:35.626926882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 5 06:25:35.627978 containerd[1590]: time="2025-09-05T06:25:35.627926712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 06:25:35.629585 containerd[1590]: time="2025-09-05T06:25:35.629527550Z" level=info msg="CreateContainer within sandbox \"d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 06:25:35.660008 containerd[1590]: time="2025-09-05T06:25:35.659958701Z" level=info msg="Container 0e717705d8d4c42b721479ba977a9aec574d97d74535c7ba94ae5889ade35d4d: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:25:35.676644 containerd[1590]: time="2025-09-05T06:25:35.676611119Z" level=info msg="CreateContainer within sandbox \"d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0e717705d8d4c42b721479ba977a9aec574d97d74535c7ba94ae5889ade35d4d\"" Sep 5 06:25:35.680190 containerd[1590]: time="2025-09-05T06:25:35.680165684Z" level=info msg="StartContainer for \"0e717705d8d4c42b721479ba977a9aec574d97d74535c7ba94ae5889ade35d4d\"" Sep 5 06:25:35.681508 containerd[1590]: time="2025-09-05T06:25:35.681483598Z" level=info msg="connecting to shim 
0e717705d8d4c42b721479ba977a9aec574d97d74535c7ba94ae5889ade35d4d" address="unix:///run/containerd/s/e37519f4ec3445f219a35d74c796d1524b066f39da1086cb33135101d66cd2d4" protocol=ttrpc version=3 Sep 5 06:25:35.706735 systemd[1]: Started cri-containerd-0e717705d8d4c42b721479ba977a9aec574d97d74535c7ba94ae5889ade35d4d.scope - libcontainer container 0e717705d8d4c42b721479ba977a9aec574d97d74535c7ba94ae5889ade35d4d. Sep 5 06:25:35.750201 containerd[1590]: time="2025-09-05T06:25:35.750160087Z" level=info msg="StartContainer for \"0e717705d8d4c42b721479ba977a9aec574d97d74535c7ba94ae5889ade35d4d\" returns successfully" Sep 5 06:25:37.365849 systemd[1]: Started sshd@13-10.0.0.4:22-10.0.0.1:32958.service - OpenSSH per-connection server daemon (10.0.0.1:32958). Sep 5 06:25:37.418554 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 32958 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M Sep 5 06:25:37.419869 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:25:37.423722 systemd-logind[1573]: New session 14 of user core. Sep 5 06:25:37.434644 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 06:25:37.687670 sshd[5283]: Connection closed by 10.0.0.1 port 32958 Sep 5 06:25:37.688083 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Sep 5 06:25:37.692451 systemd[1]: sshd@13-10.0.0.4:22-10.0.0.1:32958.service: Deactivated successfully. Sep 5 06:25:37.694454 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 06:25:37.695195 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit. Sep 5 06:25:37.696614 systemd-logind[1573]: Removed session 14. 
Sep 5 06:25:39.341522 kubelet[2720]: I0905 06:25:39.341484 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:25:39.403261 kubelet[2720]: I0905 06:25:39.402571 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9c99b967c-9pckk" podStartSLOduration=38.942038186 podStartE2EDuration="47.402555345s" podCreationTimestamp="2025-09-05 06:24:52 +0000 UTC" firstStartedPulling="2025-09-05 06:25:24.263162019 +0000 UTC m=+45.678155334" lastFinishedPulling="2025-09-05 06:25:32.723679188 +0000 UTC m=+54.138672493" observedRunningTime="2025-09-05 06:25:34.147660434 +0000 UTC m=+55.562653749" watchObservedRunningTime="2025-09-05 06:25:39.402555345 +0000 UTC m=+60.817548660" Sep 5 06:25:40.342723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2835896891.mount: Deactivated successfully. Sep 5 06:25:41.944857 containerd[1590]: time="2025-09-05T06:25:41.944816506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:41.946581 containerd[1590]: time="2025-09-05T06:25:41.946519825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 5 06:25:41.950799 containerd[1590]: time="2025-09-05T06:25:41.950766192Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:41.955095 containerd[1590]: time="2025-09-05T06:25:41.955051826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:25:41.955637 containerd[1590]: time="2025-09-05T06:25:41.955596420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with 
image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.327642915s" Sep 5 06:25:41.955678 containerd[1590]: time="2025-09-05T06:25:41.955634835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 5 06:25:41.967571 containerd[1590]: time="2025-09-05T06:25:41.967521291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 06:25:41.975484 containerd[1590]: time="2025-09-05T06:25:41.975460545Z" level=info msg="CreateContainer within sandbox \"c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 06:25:41.990322 containerd[1590]: time="2025-09-05T06:25:41.990267537Z" level=info msg="Container 2b6ebeb95b11557fd20216c7e5af2ac9d6d80707e5ef28e4dbfa54ca2149c831: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:25:42.003158 containerd[1590]: time="2025-09-05T06:25:42.003113296Z" level=info msg="CreateContainer within sandbox \"c71dd0b59f4cb88f42ce805bac0d06b0b22071259325f03833bf16b376289ed5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2b6ebeb95b11557fd20216c7e5af2ac9d6d80707e5ef28e4dbfa54ca2149c831\"" Sep 5 06:25:42.003752 containerd[1590]: time="2025-09-05T06:25:42.003698430Z" level=info msg="StartContainer for \"2b6ebeb95b11557fd20216c7e5af2ac9d6d80707e5ef28e4dbfa54ca2149c831\"" Sep 5 06:25:42.004860 containerd[1590]: time="2025-09-05T06:25:42.004834399Z" level=info msg="connecting to shim 2b6ebeb95b11557fd20216c7e5af2ac9d6d80707e5ef28e4dbfa54ca2149c831" address="unix:///run/containerd/s/e7a3457bd9834acb31f769ecc550dcbf6fc7203cb839c27f018fa350a565f372" protocol=ttrpc version=3 Sep 
5 06:25:42.033692 systemd[1]: Started cri-containerd-2b6ebeb95b11557fd20216c7e5af2ac9d6d80707e5ef28e4dbfa54ca2149c831.scope - libcontainer container 2b6ebeb95b11557fd20216c7e5af2ac9d6d80707e5ef28e4dbfa54ca2149c831. Sep 5 06:25:42.095873 containerd[1590]: time="2025-09-05T06:25:42.095815585Z" level=info msg="StartContainer for \"2b6ebeb95b11557fd20216c7e5af2ac9d6d80707e5ef28e4dbfa54ca2149c831\" returns successfully" Sep 5 06:25:42.415007 containerd[1590]: time="2025-09-05T06:25:42.414973980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b6ebeb95b11557fd20216c7e5af2ac9d6d80707e5ef28e4dbfa54ca2149c831\" id:\"aff9823107d7526dee2d16a6c70d30797ce83218946cb15329eb96f7369be43e\" pid:5369 exit_status:1 exited_at:{seconds:1757053542 nanos:414568238}" Sep 5 06:25:42.704250 systemd[1]: Started sshd@14-10.0.0.4:22-10.0.0.1:51784.service - OpenSSH per-connection server daemon (10.0.0.1:51784). Sep 5 06:25:42.774169 sshd[5384]: Accepted publickey for core from 10.0.0.1 port 51784 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M Sep 5 06:25:42.775893 sshd-session[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:25:42.780184 systemd-logind[1573]: New session 15 of user core. Sep 5 06:25:42.790687 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 06:25:42.909753 sshd[5387]: Connection closed by 10.0.0.1 port 51784 Sep 5 06:25:42.910080 sshd-session[5384]: pam_unix(sshd:session): session closed for user core Sep 5 06:25:42.914761 systemd[1]: sshd@14-10.0.0.4:22-10.0.0.1:51784.service: Deactivated successfully. Sep 5 06:25:42.916873 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 06:25:42.917785 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit. Sep 5 06:25:42.918887 systemd-logind[1573]: Removed session 15. 
Sep 5 06:25:43.421518 containerd[1590]: time="2025-09-05T06:25:43.421464741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b6ebeb95b11557fd20216c7e5af2ac9d6d80707e5ef28e4dbfa54ca2149c831\" id:\"f0ebfa6ac456bf13cc90cb31f07e178b9aee88ef6e7f6d839bf6aed1f4120f3e\" pid:5412 exit_status:1 exited_at:{seconds:1757053543 nanos:421198011}"
Sep 5 06:25:44.618340 containerd[1590]: time="2025-09-05T06:25:44.618295979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:25:44.619168 containerd[1590]: time="2025-09-05T06:25:44.619139876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 5 06:25:44.620403 containerd[1590]: time="2025-09-05T06:25:44.620371890Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:25:44.622371 containerd[1590]: time="2025-09-05T06:25:44.622345252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:25:44.622947 containerd[1590]: time="2025-09-05T06:25:44.622904874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.655184294s"
Sep 5 06:25:44.622947 containerd[1590]: time="2025-09-05T06:25:44.622943449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 5 06:25:44.631727 containerd[1590]: time="2025-09-05T06:25:44.631701354Z" level=info msg="CreateContainer within sandbox \"d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 5 06:25:44.639071 containerd[1590]: time="2025-09-05T06:25:44.639015298Z" level=info msg="Container 6a7a34d40afbfb3ab5c29e1fae5b92c866371c0a2b99b9ee54461f24954af659: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:25:44.648493 containerd[1590]: time="2025-09-05T06:25:44.648466907Z" level=info msg="CreateContainer within sandbox \"d46782d973addd80595ac391415927b86aa86ea49f3b71bae8f8adc89fe4a60f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6a7a34d40afbfb3ab5c29e1fae5b92c866371c0a2b99b9ee54461f24954af659\""
Sep 5 06:25:44.648936 containerd[1590]: time="2025-09-05T06:25:44.648911334Z" level=info msg="StartContainer for \"6a7a34d40afbfb3ab5c29e1fae5b92c866371c0a2b99b9ee54461f24954af659\""
Sep 5 06:25:44.650237 containerd[1590]: time="2025-09-05T06:25:44.650212193Z" level=info msg="connecting to shim 6a7a34d40afbfb3ab5c29e1fae5b92c866371c0a2b99b9ee54461f24954af659" address="unix:///run/containerd/s/e37519f4ec3445f219a35d74c796d1524b066f39da1086cb33135101d66cd2d4" protocol=ttrpc version=3
Sep 5 06:25:44.678715 systemd[1]: Started cri-containerd-6a7a34d40afbfb3ab5c29e1fae5b92c866371c0a2b99b9ee54461f24954af659.scope - libcontainer container 6a7a34d40afbfb3ab5c29e1fae5b92c866371c0a2b99b9ee54461f24954af659.
Sep 5 06:25:44.768939 containerd[1590]: time="2025-09-05T06:25:44.768887753Z" level=info msg="StartContainer for \"6a7a34d40afbfb3ab5c29e1fae5b92c866371c0a2b99b9ee54461f24954af659\" returns successfully"
Sep 5 06:25:45.321222 kubelet[2720]: I0905 06:25:45.321178 2720 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 5 06:25:45.321222 kubelet[2720]: I0905 06:25:45.321223 2720 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 5 06:25:45.435267 kubelet[2720]: I0905 06:25:45.435210 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wm52w" podStartSLOduration=31.356103067 podStartE2EDuration="50.435195488s" podCreationTimestamp="2025-09-05 06:24:55 +0000 UTC" firstStartedPulling="2025-09-05 06:25:25.544406953 +0000 UTC m=+46.959400268" lastFinishedPulling="2025-09-05 06:25:44.623499374 +0000 UTC m=+66.038492689" observedRunningTime="2025-09-05 06:25:45.434808271 +0000 UTC m=+66.849801596" watchObservedRunningTime="2025-09-05 06:25:45.435195488 +0000 UTC m=+66.850188793"
Sep 5 06:25:45.435572 kubelet[2720]: I0905 06:25:45.435376 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-p7g92" podStartSLOduration=35.521398484 podStartE2EDuration="51.435372002s" podCreationTimestamp="2025-09-05 06:24:54 +0000 UTC" firstStartedPulling="2025-09-05 06:25:26.053441907 +0000 UTC m=+47.468435222" lastFinishedPulling="2025-09-05 06:25:41.967415425 +0000 UTC m=+63.382408740" observedRunningTime="2025-09-05 06:25:42.464439948 +0000 UTC m=+63.879433263" watchObservedRunningTime="2025-09-05 06:25:45.435372002 +0000 UTC m=+66.850365307"
Sep 5 06:25:47.922464 systemd[1]: Started sshd@15-10.0.0.4:22-10.0.0.1:51810.service - OpenSSH per-connection server daemon (10.0.0.1:51810).
Sep 5 06:25:47.979404 sshd[5464]: Accepted publickey for core from 10.0.0.1 port 51810 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:25:47.980787 sshd-session[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:25:47.984791 systemd-logind[1573]: New session 16 of user core.
Sep 5 06:25:47.998699 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 06:25:48.131957 sshd[5467]: Connection closed by 10.0.0.1 port 51810
Sep 5 06:25:48.132270 sshd-session[5464]: pam_unix(sshd:session): session closed for user core
Sep 5 06:25:48.136762 systemd[1]: sshd@15-10.0.0.4:22-10.0.0.1:51810.service: Deactivated successfully.
Sep 5 06:25:48.138786 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 06:25:48.139454 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit.
Sep 5 06:25:48.140456 systemd-logind[1573]: Removed session 16.
Sep 5 06:25:50.858219 containerd[1590]: time="2025-09-05T06:25:50.858178571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9d6a03aeb0b16587c2864219684264a9761dd8e7731c0117481d6048db33fe9\" id:\"9fc4d5e8ab9c413a0501aed7b326d668c61700cae55540cb1f971915d3dd970f\" pid:5493 exited_at:{seconds:1757053550 nanos:857892908}"
Sep 5 06:25:53.145547 systemd[1]: Started sshd@16-10.0.0.4:22-10.0.0.1:44896.service - OpenSSH per-connection server daemon (10.0.0.1:44896).
Sep 5 06:25:53.206834 sshd[5506]: Accepted publickey for core from 10.0.0.1 port 44896 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:25:53.208783 sshd-session[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:25:53.213010 systemd-logind[1573]: New session 17 of user core.
Sep 5 06:25:53.227678 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 06:25:53.372313 systemd[1]: Started sshd@17-10.0.0.4:22-10.0.0.1:44910.service - OpenSSH per-connection server daemon (10.0.0.1:44910).
Sep 5 06:25:53.418276 sshd[5520]: Accepted publickey for core from 10.0.0.1 port 44910 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:25:53.419611 sshd-session[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:25:53.423891 systemd-logind[1573]: New session 18 of user core.
Sep 5 06:25:53.433651 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 06:25:53.568705 sshd[5509]: Connection closed by 10.0.0.1 port 44896
Sep 5 06:25:53.569071 sshd-session[5506]: pam_unix(sshd:session): session closed for user core
Sep 5 06:25:53.574971 systemd[1]: sshd@16-10.0.0.4:22-10.0.0.1:44896.service: Deactivated successfully.
Sep 5 06:25:53.577503 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 06:25:53.578345 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit.
Sep 5 06:25:53.580140 systemd-logind[1573]: Removed session 17.
Sep 5 06:25:53.636610 systemd[1]: Started sshd@18-10.0.0.4:22-10.0.0.1:44924.service - OpenSSH per-connection server daemon (10.0.0.1:44924).
Sep 5 06:25:53.701279 sshd[5535]: Accepted publickey for core from 10.0.0.1 port 44924 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:25:53.702884 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:25:53.707125 systemd-logind[1573]: New session 19 of user core.
Sep 5 06:25:53.714712 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 06:25:53.832655 sshd[5523]: Connection closed by 10.0.0.1 port 44910
Sep 5 06:25:53.834525 sshd-session[5520]: pam_unix(sshd:session): session closed for user core
Sep 5 06:25:53.838984 systemd[1]: sshd@17-10.0.0.4:22-10.0.0.1:44910.service: Deactivated successfully.
Sep 5 06:25:53.840983 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 06:25:53.842329 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit.
Sep 5 06:25:53.844873 systemd-logind[1573]: Removed session 18.
Sep 5 06:25:54.367219 sshd[5538]: Connection closed by 10.0.0.1 port 44924
Sep 5 06:25:54.369621 sshd-session[5535]: pam_unix(sshd:session): session closed for user core
Sep 5 06:25:54.380803 systemd[1]: sshd@18-10.0.0.4:22-10.0.0.1:44924.service: Deactivated successfully.
Sep 5 06:25:54.383233 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 06:25:54.386278 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit.
Sep 5 06:25:54.389592 systemd[1]: Started sshd@19-10.0.0.4:22-10.0.0.1:44956.service - OpenSSH per-connection server daemon (10.0.0.1:44956).
Sep 5 06:25:54.390267 systemd-logind[1573]: Removed session 19.
Sep 5 06:25:54.438761 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 44956 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:25:54.440019 sshd-session[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:25:54.444107 systemd-logind[1573]: New session 20 of user core.
Sep 5 06:25:54.452660 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 06:25:54.704440 sshd[5563]: Connection closed by 10.0.0.1 port 44956
Sep 5 06:25:54.705747 sshd-session[5560]: pam_unix(sshd:session): session closed for user core
Sep 5 06:25:54.714904 systemd[1]: sshd@19-10.0.0.4:22-10.0.0.1:44956.service: Deactivated successfully.
Sep 5 06:25:54.716917 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 06:25:54.717925 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit.
Sep 5 06:25:54.721607 systemd[1]: Started sshd@20-10.0.0.4:22-10.0.0.1:44972.service - OpenSSH per-connection server daemon (10.0.0.1:44972).
Sep 5 06:25:54.722351 systemd-logind[1573]: Removed session 20.
Sep 5 06:25:54.772044 sshd[5575]: Accepted publickey for core from 10.0.0.1 port 44972 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:25:54.773872 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:25:54.778857 systemd-logind[1573]: New session 21 of user core.
Sep 5 06:25:54.787966 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 5 06:25:54.947258 sshd[5578]: Connection closed by 10.0.0.1 port 44972
Sep 5 06:25:54.947591 sshd-session[5575]: pam_unix(sshd:session): session closed for user core
Sep 5 06:25:54.952903 systemd[1]: sshd@20-10.0.0.4:22-10.0.0.1:44972.service: Deactivated successfully.
Sep 5 06:25:54.954915 systemd[1]: session-21.scope: Deactivated successfully.
Sep 5 06:25:54.955901 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit.
Sep 5 06:25:54.957073 systemd-logind[1573]: Removed session 21.
Sep 5 06:25:59.841926 containerd[1590]: time="2025-09-05T06:25:59.841879312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efed62e9d6e5c832ce08610fbe3d56fe1a5817f2ba8257a7713df5044b81b859\" id:\"d127b57ac22e5e250cfcb36c83deddcb1d98e327537a6d21751eb22ea91815c9\" pid:5605 exited_at:{seconds:1757053559 nanos:841554842}"
Sep 5 06:25:59.959772 systemd[1]: Started sshd@21-10.0.0.4:22-10.0.0.1:47900.service - OpenSSH per-connection server daemon (10.0.0.1:47900).
Sep 5 06:26:00.038435 sshd[5616]: Accepted publickey for core from 10.0.0.1 port 47900 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:26:00.040551 sshd-session[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:26:00.045127 systemd-logind[1573]: New session 22 of user core.
Sep 5 06:26:00.056703 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 5 06:26:00.234435 sshd[5621]: Connection closed by 10.0.0.1 port 47900
Sep 5 06:26:00.234760 sshd-session[5616]: pam_unix(sshd:session): session closed for user core
Sep 5 06:26:00.239134 systemd[1]: sshd@21-10.0.0.4:22-10.0.0.1:47900.service: Deactivated successfully.
Sep 5 06:26:00.241478 systemd[1]: session-22.scope: Deactivated successfully.
Sep 5 06:26:00.242241 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit.
Sep 5 06:26:00.243622 systemd-logind[1573]: Removed session 22.
Sep 5 06:26:00.377551 kubelet[2720]: I0905 06:26:00.377478 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 06:26:01.665106 kubelet[2720]: E0905 06:26:01.665063 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:26:05.251865 systemd[1]: Started sshd@22-10.0.0.4:22-10.0.0.1:47930.service - OpenSSH per-connection server daemon (10.0.0.1:47930).
Sep 5 06:26:05.303494 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 47930 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:26:05.304951 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:26:05.308964 systemd-logind[1573]: New session 23 of user core.
Sep 5 06:26:05.317661 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 5 06:26:05.433820 sshd[5645]: Connection closed by 10.0.0.1 port 47930
Sep 5 06:26:05.434185 sshd-session[5642]: pam_unix(sshd:session): session closed for user core
Sep 5 06:26:05.439606 systemd[1]: sshd@22-10.0.0.4:22-10.0.0.1:47930.service: Deactivated successfully.
Sep 5 06:26:05.441997 systemd[1]: session-23.scope: Deactivated successfully.
Sep 5 06:26:05.443097 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit.
Sep 5 06:26:05.445069 systemd-logind[1573]: Removed session 23.
Sep 5 06:26:05.665562 kubelet[2720]: E0905 06:26:05.665500 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:26:06.966919 containerd[1590]: time="2025-09-05T06:26:06.966855019Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b6ebeb95b11557fd20216c7e5af2ac9d6d80707e5ef28e4dbfa54ca2149c831\" id:\"c308eba16167313d43684337fb35a9d2f6e48bba1011994691124503ff56ef09\" pid:5671 exited_at:{seconds:1757053566 nanos:966587904}"
Sep 5 06:26:10.450562 systemd[1]: Started sshd@23-10.0.0.4:22-10.0.0.1:34798.service - OpenSSH per-connection server daemon (10.0.0.1:34798).
Sep 5 06:26:10.513512 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 34798 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:26:10.515041 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:26:10.529087 systemd-logind[1573]: New session 24 of user core.
Sep 5 06:26:10.536598 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 5 06:26:10.708948 sshd[5686]: Connection closed by 10.0.0.1 port 34798
Sep 5 06:26:10.709205 sshd-session[5683]: pam_unix(sshd:session): session closed for user core
Sep 5 06:26:10.714341 systemd[1]: sshd@23-10.0.0.4:22-10.0.0.1:34798.service: Deactivated successfully.
Sep 5 06:26:10.716455 systemd[1]: session-24.scope: Deactivated successfully.
Sep 5 06:26:10.717255 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit.
Sep 5 06:26:10.718361 systemd-logind[1573]: Removed session 24.
Sep 5 06:26:13.453053 containerd[1590]: time="2025-09-05T06:26:13.453011260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b6ebeb95b11557fd20216c7e5af2ac9d6d80707e5ef28e4dbfa54ca2149c831\" id:\"b3e4c327be6fb729b40fe5899423af3c4a02617b98990213c76416171c7400e5\" pid:5710 exited_at:{seconds:1757053573 nanos:452596652}"
Sep 5 06:26:14.666610 kubelet[2720]: E0905 06:26:14.665617 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:26:14.666610 kubelet[2720]: E0905 06:26:14.665983 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:26:15.664882 kubelet[2720]: E0905 06:26:15.664829 2720 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 06:26:15.725246 systemd[1]: Started sshd@24-10.0.0.4:22-10.0.0.1:34874.service - OpenSSH per-connection server daemon (10.0.0.1:34874).
Sep 5 06:26:15.780256 sshd[5726]: Accepted publickey for core from 10.0.0.1 port 34874 ssh2: RSA SHA256:uqxRgYMnvXfKMaixGQcRJDbet9lnYJBMZN+CHFcul4M
Sep 5 06:26:15.782075 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:26:15.786601 systemd-logind[1573]: New session 25 of user core.
Sep 5 06:26:15.799667 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 5 06:26:15.923481 sshd[5729]: Connection closed by 10.0.0.1 port 34874
Sep 5 06:26:15.923794 sshd-session[5726]: pam_unix(sshd:session): session closed for user core
Sep 5 06:26:15.929949 systemd[1]: sshd@24-10.0.0.4:22-10.0.0.1:34874.service: Deactivated successfully.
Sep 5 06:26:15.937592 systemd[1]: session-25.scope: Deactivated successfully.
Sep 5 06:26:15.940235 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit.
Sep 5 06:26:15.942284 systemd-logind[1573]: Removed session 25.