Jul 11 00:22:07.985843 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 22:18:23 -00 2025
Jul 11 00:22:07.985876 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bb76c73bf3935f7fa0665d7beff518d75bfa5b173769c8a2e5d3c0cf9e54372
Jul 11 00:22:07.985885 kernel: BIOS-provided physical RAM map:
Jul 11 00:22:07.985892 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 11 00:22:07.985899 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 11 00:22:07.985906 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 11 00:22:07.985914 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 11 00:22:07.985923 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 11 00:22:07.985932 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 11 00:22:07.985939 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 11 00:22:07.985946 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 11 00:22:07.985953 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 11 00:22:07.985959 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 11 00:22:07.985966 kernel: NX (Execute Disable) protection: active
Jul 11 00:22:07.985977 kernel: APIC: Static calls initialized
Jul 11 00:22:07.985984 kernel: SMBIOS 2.8 present.
Jul 11 00:22:07.985994 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 11 00:22:07.986002 kernel: DMI: Memory slots populated: 1/1
Jul 11 00:22:07.986009 kernel: Hypervisor detected: KVM
Jul 11 00:22:07.986016 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 11 00:22:07.986024 kernel: kvm-clock: using sched offset of 5096362969 cycles
Jul 11 00:22:07.986032 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 11 00:22:07.986040 kernel: tsc: Detected 2794.748 MHz processor
Jul 11 00:22:07.986049 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 11 00:22:07.986057 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 11 00:22:07.986070 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 11 00:22:07.986078 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 11 00:22:07.986089 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 11 00:22:07.986107 kernel: Using GB pages for direct mapping
Jul 11 00:22:07.986122 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:22:07.986136 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 11 00:22:07.986144 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:07.986154 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:07.986161 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:07.986169 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 11 00:22:07.986176 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:07.986184 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:07.986191 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:07.986199 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:07.986206 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 11 00:22:07.986219 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 11 00:22:07.986227 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 11 00:22:07.986234 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 11 00:22:07.986242 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 11 00:22:07.986250 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 11 00:22:07.986257 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 11 00:22:07.986267 kernel: No NUMA configuration found
Jul 11 00:22:07.986275 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 11 00:22:07.986282 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 11 00:22:07.986290 kernel: Zone ranges:
Jul 11 00:22:07.986298 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 11 00:22:07.986305 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 11 00:22:07.986313 kernel: Normal empty
Jul 11 00:22:07.986321 kernel: Device empty
Jul 11 00:22:07.986342 kernel: Movable zone start for each node
Jul 11 00:22:07.986350 kernel: Early memory node ranges
Jul 11 00:22:07.986360 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 11 00:22:07.986368 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 11 00:22:07.986375 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 11 00:22:07.986383 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 00:22:07.986391 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 11 00:22:07.986398 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 11 00:22:07.986406 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 11 00:22:07.986416 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 11 00:22:07.986424 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 11 00:22:07.986436 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 11 00:22:07.986447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 11 00:22:07.986471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 11 00:22:07.986493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 11 00:22:07.986502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 11 00:22:07.986510 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 11 00:22:07.986517 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 11 00:22:07.986525 kernel: TSC deadline timer available
Jul 11 00:22:07.986533 kernel: CPU topo: Max. logical packages: 1
Jul 11 00:22:07.986545 kernel: CPU topo: Max. logical dies: 1
Jul 11 00:22:07.986552 kernel: CPU topo: Max. dies per package: 1
Jul 11 00:22:07.986560 kernel: CPU topo: Max. threads per core: 1
Jul 11 00:22:07.986567 kernel: CPU topo: Num. cores per package: 4
Jul 11 00:22:07.986575 kernel: CPU topo: Num. threads per package: 4
Jul 11 00:22:07.986583 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 11 00:22:07.986590 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 11 00:22:07.986598 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 11 00:22:07.986606 kernel: kvm-guest: setup PV sched yield
Jul 11 00:22:07.986613 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 11 00:22:07.986623 kernel: Booting paravirtualized kernel on KVM
Jul 11 00:22:07.986631 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 11 00:22:07.986639 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 11 00:22:07.986647 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 11 00:22:07.986663 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 11 00:22:07.986670 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 11 00:22:07.986678 kernel: kvm-guest: PV spinlocks enabled
Jul 11 00:22:07.986686 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 11 00:22:07.986695 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bb76c73bf3935f7fa0665d7beff518d75bfa5b173769c8a2e5d3c0cf9e54372
Jul 11 00:22:07.986706 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:22:07.986714 kernel: random: crng init done
Jul 11 00:22:07.986722 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:22:07.986729 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:22:07.986737 kernel: Fallback order for Node 0: 0
Jul 11 00:22:07.986745 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 11 00:22:07.986752 kernel: Policy zone: DMA32
Jul 11 00:22:07.986760 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:22:07.986770 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:22:07.986778 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 11 00:22:07.986785 kernel: ftrace: allocated 157 pages with 5 groups
Jul 11 00:22:07.986793 kernel: Dynamic Preempt: voluntary
Jul 11 00:22:07.986801 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:22:07.986809 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:22:07.986817 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:22:07.986825 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:22:07.986835 kernel: Rude variant of Tasks RCU enabled.
Jul 11 00:22:07.986845 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:22:07.986853 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:22:07.986861 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:22:07.986868 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:22:07.986876 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:22:07.986885 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:22:07.986895 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 11 00:22:07.986905 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:22:07.986925 kernel: Console: colour VGA+ 80x25
Jul 11 00:22:07.986935 kernel: printk: legacy console [ttyS0] enabled
Jul 11 00:22:07.986944 kernel: ACPI: Core revision 20240827
Jul 11 00:22:07.986952 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 11 00:22:07.986962 kernel: APIC: Switch to symmetric I/O mode setup
Jul 11 00:22:07.986970 kernel: x2apic enabled
Jul 11 00:22:07.986981 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 11 00:22:07.986989 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 11 00:22:07.986998 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 11 00:22:07.987008 kernel: kvm-guest: setup PV IPIs
Jul 11 00:22:07.987016 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 11 00:22:07.987024 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 11 00:22:07.987032 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 11 00:22:07.987040 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 11 00:22:07.987048 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 11 00:22:07.987056 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 11 00:22:07.987065 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 11 00:22:07.987074 kernel: Spectre V2 : Mitigation: Retpolines
Jul 11 00:22:07.987083 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 11 00:22:07.987091 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 11 00:22:07.987099 kernel: RETBleed: Mitigation: untrained return thunk
Jul 11 00:22:07.987107 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 11 00:22:07.987115 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 11 00:22:07.987123 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 11 00:22:07.987133 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 11 00:22:07.987141 kernel: x86/bugs: return thunk changed
Jul 11 00:22:07.987151 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 11 00:22:07.987159 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 11 00:22:07.987167 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 11 00:22:07.987175 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 11 00:22:07.987183 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 11 00:22:07.987191 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 11 00:22:07.987200 kernel: Freeing SMP alternatives memory: 32K
Jul 11 00:22:07.987211 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:22:07.987224 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 11 00:22:07.987236 kernel: landlock: Up and running.
Jul 11 00:22:07.987248 kernel: SELinux: Initializing.
Jul 11 00:22:07.987258 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:22:07.987269 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:22:07.987278 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 11 00:22:07.987286 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 11 00:22:07.987294 kernel: ... version: 0
Jul 11 00:22:07.987302 kernel: ... bit width: 48
Jul 11 00:22:07.987313 kernel: ... generic registers: 6
Jul 11 00:22:07.987321 kernel: ... value mask: 0000ffffffffffff
Jul 11 00:22:07.987348 kernel: ... max period: 00007fffffffffff
Jul 11 00:22:07.987356 kernel: ... fixed-purpose events: 0
Jul 11 00:22:07.987365 kernel: ... event mask: 000000000000003f
Jul 11 00:22:07.987372 kernel: signal: max sigframe size: 1776
Jul 11 00:22:07.987380 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:22:07.987388 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:22:07.987397 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 11 00:22:07.987405 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:22:07.987415 kernel: smpboot: x86: Booting SMP configuration:
Jul 11 00:22:07.987423 kernel: .... node #0, CPUs: #1 #2 #3
Jul 11 00:22:07.987433 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:22:07.987444 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 11 00:22:07.987455 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 136904K reserved, 0K cma-reserved)
Jul 11 00:22:07.987466 kernel: devtmpfs: initialized
Jul 11 00:22:07.987474 kernel: x86/mm: Memory block size: 128MB
Jul 11 00:22:07.987483 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:22:07.987491 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:22:07.987502 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:22:07.987510 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:22:07.987517 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:22:07.987526 kernel: audit: type=2000 audit(1752193323.769:1): state=initialized audit_enabled=0 res=1
Jul 11 00:22:07.987533 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:22:07.987541 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 11 00:22:07.987549 kernel: cpuidle: using governor menu
Jul 11 00:22:07.987557 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:22:07.987565 kernel: dca service started, version 1.12.1
Jul 11 00:22:07.987575 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 11 00:22:07.987583 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 11 00:22:07.987591 kernel: PCI: Using configuration type 1 for base access
Jul 11 00:22:07.987599 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 11 00:22:07.987607 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:22:07.987615 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:22:07.987623 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:22:07.987631 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:22:07.987642 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:22:07.987650 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:22:07.987666 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:22:07.987675 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:22:07.987682 kernel: ACPI: Interpreter enabled
Jul 11 00:22:07.987691 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 11 00:22:07.987702 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 11 00:22:07.987713 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 11 00:22:07.987723 kernel: PCI: Using E820 reservations for host bridge windows
Jul 11 00:22:07.987734 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 11 00:22:07.987749 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:22:07.988067 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:22:07.988229 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 11 00:22:07.988387 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 11 00:22:07.988403 kernel: PCI host bridge to bus 0000:00
Jul 11 00:22:07.988567 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 11 00:22:07.988702 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 00:22:07.988820 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 11 00:22:07.989049 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 11 00:22:07.989170 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 11 00:22:07.989284 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 11 00:22:07.989416 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:22:07.989597 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 11 00:22:07.989793 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 11 00:22:07.989951 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 11 00:22:07.990105 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 11 00:22:07.990235 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 11 00:22:07.990419 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 11 00:22:07.990603 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 11 00:22:07.990782 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 11 00:22:07.990947 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 11 00:22:07.991086 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 11 00:22:07.991231 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 11 00:22:07.991390 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 11 00:22:07.991536 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 11 00:22:07.991682 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 11 00:22:07.991873 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 11 00:22:07.992039 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 11 00:22:07.992225 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 11 00:22:07.992429 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 11 00:22:07.992624 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 11 00:22:07.992795 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 11 00:22:07.994977 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 11 00:22:07.995203 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 11 00:22:07.995382 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 11 00:22:07.995532 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 11 00:22:07.995712 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 11 00:22:07.995866 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 11 00:22:07.995881 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 11 00:22:07.995893 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 11 00:22:07.995910 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 11 00:22:07.995921 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 11 00:22:07.995932 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 11 00:22:07.995944 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 11 00:22:07.995955 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 11 00:22:07.995966 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 11 00:22:07.995977 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 11 00:22:07.995988 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 11 00:22:07.995999 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 11 00:22:07.996013 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 11 00:22:07.996024 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 11 00:22:07.996035 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 11 00:22:07.996046 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 11 00:22:07.996057 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 11 00:22:07.996068 kernel: iommu: Default domain type: Translated
Jul 11 00:22:07.996079 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 11 00:22:07.996090 kernel: PCI: Using ACPI for IRQ routing
Jul 11 00:22:07.996102 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 11 00:22:07.996116 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 11 00:22:07.996127 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 11 00:22:07.996277 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 11 00:22:07.996453 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 11 00:22:07.996601 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 11 00:22:07.996616 kernel: vgaarb: loaded
Jul 11 00:22:07.996627 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 11 00:22:07.996638 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 11 00:22:07.996666 kernel: clocksource: Switched to clocksource kvm-clock
Jul 11 00:22:07.996677 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:22:07.996689 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:22:07.996701 kernel: pnp: PnP ACPI init
Jul 11 00:22:07.996884 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 11 00:22:07.996900 kernel: pnp: PnP ACPI: found 6 devices
Jul 11 00:22:07.996912 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 11 00:22:07.996923 kernel: NET: Registered PF_INET protocol family
Jul 11 00:22:07.996938 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:22:07.996949 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:22:07.996961 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:22:07.996972 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:22:07.996983 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:22:07.996994 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:22:07.997006 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:22:07.997017 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:22:07.997028 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:22:07.997042 kernel: NET: Registered PF_XDP protocol family
Jul 11 00:22:07.997188 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 11 00:22:07.997323 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 11 00:22:07.997479 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 11 00:22:07.997613 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 11 00:22:07.997761 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 11 00:22:07.997896 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 11 00:22:07.997911 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:22:07.997928 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 11 00:22:07.997939 kernel: Initialise system trusted keyrings
Jul 11 00:22:07.997950 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:22:07.997961 kernel: Key type asymmetric registered
Jul 11 00:22:07.997972 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:22:07.997983 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 00:22:07.997994 kernel: io scheduler mq-deadline registered
Jul 11 00:22:07.998005 kernel: io scheduler kyber registered
Jul 11 00:22:07.998016 kernel: io scheduler bfq registered
Jul 11 00:22:07.998027 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 11 00:22:07.998042 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 11 00:22:07.998054 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 11 00:22:07.998065 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 11 00:22:07.998076 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:22:07.998087 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 00:22:07.998098 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 11 00:22:07.998109 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 11 00:22:07.998120 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 11 00:22:07.998314 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 11 00:22:07.998349 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 11 00:22:07.998512 kernel: rtc_cmos 00:04: registered as rtc0
Jul 11 00:22:07.998644 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T00:22:07 UTC (1752193327)
Jul 11 00:22:07.998791 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 11 00:22:07.998804 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 11 00:22:07.998813 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:22:07.998821 kernel: Segment Routing with IPv6
Jul 11 00:22:07.998835 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:22:07.998843 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:22:07.998852 kernel: Key type dns_resolver registered
Jul 11 00:22:07.998860 kernel: IPI shorthand broadcast: enabled
Jul 11 00:22:07.998868 kernel: sched_clock: Marking stable (3696003786, 121108148)->(3844883198, -27771264)
Jul 11 00:22:07.998877 kernel: registered taskstats version 1
Jul 11 00:22:07.998885 kernel: Loading compiled-in X.509 certificates
Jul 11 00:22:07.998893 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: e2778f992738e32ced6c6a485d2ed31f29141742'
Jul 11 00:22:07.998902 kernel: Demotion targets for Node 0: null
Jul 11 00:22:07.998912 kernel: Key type .fscrypt registered
Jul 11 00:22:07.998920 kernel: Key type fscrypt-provisioning registered
Jul 11 00:22:07.998929 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:22:07.998937 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:22:07.998945 kernel: ima: No architecture policies found
Jul 11 00:22:07.998953 kernel: clk: Disabling unused clocks
Jul 11 00:22:07.998962 kernel: Warning: unable to open an initial console.
Jul 11 00:22:07.998971 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 11 00:22:07.998979 kernel: Write protecting the kernel read-only data: 24576k
Jul 11 00:22:07.998989 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 11 00:22:07.998997 kernel: Run /init as init process
Jul 11 00:22:07.999006 kernel: with arguments:
Jul 11 00:22:07.999014 kernel: /init
Jul 11 00:22:07.999022 kernel: with environment:
Jul 11 00:22:07.999030 kernel: HOME=/
Jul 11 00:22:07.999038 kernel: TERM=linux
Jul 11 00:22:07.999046 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:22:07.999060 systemd[1]: Successfully made /usr/ read-only.
Jul 11 00:22:07.999076 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 00:22:07.999098 systemd[1]: Detected virtualization kvm.
Jul 11 00:22:07.999107 systemd[1]: Detected architecture x86-64.
Jul 11 00:22:07.999116 systemd[1]: Running in initrd.
Jul 11 00:22:07.999124 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:22:07.999136 systemd[1]: Hostname set to .
Jul 11 00:22:07.999144 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:22:07.999153 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:22:07.999163 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:22:07.999172 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:22:07.999181 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:22:07.999191 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:22:07.999200 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:22:07.999212 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:22:07.999222 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:22:07.999231 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:22:07.999240 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:22:07.999249 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:22:07.999258 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:22:07.999266 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:22:07.999280 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:22:07.999291 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:22:07.999304 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:22:07.999317 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:22:07.999383 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:22:07.999394 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 11 00:22:07.999403 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:22:07.999412 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:22:07.999422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:22:07.999434 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:22:07.999443 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:22:07.999452 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:22:07.999461 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:22:07.999472 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 11 00:22:07.999485 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:22:07.999494 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:22:07.999503 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:22:07.999512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:22:07.999521 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:22:07.999531 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:22:07.999542 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:22:07.999551 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:22:07.999592 systemd-journald[221]: Collecting audit messages is disabled.
Jul 11 00:22:07.999618 systemd-journald[221]: Journal started
Jul 11 00:22:07.999640 systemd-journald[221]: Runtime Journal (/run/log/journal/932297faf5dc4469a51a347378b7dc0e) is 6M, max 48.6M, 42.5M free.
Jul 11 00:22:07.986623 systemd-modules-load[222]: Inserted module 'overlay'
Jul 11 00:22:08.001111 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:22:08.001439 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:22:08.004927 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:22:08.007448 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:22:08.036370 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:22:08.038906 systemd-modules-load[222]: Inserted module 'br_netfilter'
Jul 11 00:22:08.065176 kernel: Bridge firewalling registered
Jul 11 00:22:08.040808 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:22:08.045229 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 11 00:22:08.065676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:22:08.070197 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:22:08.073357 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:22:08.079178 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:22:08.082830 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:22:08.106680 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:22:08.109004 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:22:08.127592 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:22:08.130507 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:22:08.171773 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bb76c73bf3935f7fa0665d7beff518d75bfa5b173769c8a2e5d3c0cf9e54372
Jul 11 00:22:08.193716 systemd-resolved[258]: Positive Trust Anchors:
Jul 11 00:22:08.193744 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:22:08.193779 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:22:08.197045 systemd-resolved[258]: Defaulting to hostname 'linux'.
Jul 11 00:22:08.204462 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:22:08.206015 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:22:08.315373 kernel: SCSI subsystem initialized
Jul 11 00:22:08.326381 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:22:08.340463 kernel: iscsi: registered transport (tcp)
Jul 11 00:22:08.374432 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:22:08.374539 kernel: QLogic iSCSI HBA Driver
Jul 11 00:22:08.399531 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 00:22:08.435843 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 00:22:08.437249 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 00:22:08.517055 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:22:08.519441 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:22:08.588399 kernel: raid6: avx2x4 gen() 23084 MB/s
Jul 11 00:22:08.605400 kernel: raid6: avx2x2 gen() 23671 MB/s
Jul 11 00:22:08.622753 kernel: raid6: avx2x1 gen() 15840 MB/s
Jul 11 00:22:08.622853 kernel: raid6: using algorithm avx2x2 gen() 23671 MB/s
Jul 11 00:22:08.640775 kernel: raid6: .... xor() 17318 MB/s, rmw enabled
Jul 11 00:22:08.640884 kernel: raid6: using avx2x2 recovery algorithm
Jul 11 00:22:08.666407 kernel: xor: automatically using best checksumming function avx
Jul 11 00:22:08.897376 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:22:08.906763 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:22:08.909191 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:22:08.941093 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 11 00:22:08.947882 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:22:08.949125 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 00:22:08.979218 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Jul 11 00:22:09.013421 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:22:09.021240 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:22:09.113592 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:22:09.131798 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:22:09.168415 kernel: cryptd: max_cpu_qlen set to 1000
Jul 11 00:22:09.170407 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 11 00:22:09.174864 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:22:09.179668 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:22:09.179715 kernel: GPT:9289727 != 19775487
Jul 11 00:22:09.179730 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:22:09.179745 kernel: GPT:9289727 != 19775487
Jul 11 00:22:09.180842 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:22:09.180873 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:22:09.186358 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 11 00:22:09.189357 kernel: AES CTR mode by8 optimization enabled
Jul 11 00:22:09.204380 kernel: libata version 3.00 loaded.
Jul 11 00:22:09.209153 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:22:09.209286 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:22:09.216505 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:22:09.221611 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:22:09.227610 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 00:22:09.230404 kernel: ahci 0000:00:1f.2: version 3.0
Jul 11 00:22:09.233364 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 11 00:22:09.236521 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 11 00:22:09.236747 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 11 00:22:09.236931 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 11 00:22:09.244362 kernel: scsi host0: ahci
Jul 11 00:22:09.258549 kernel: scsi host1: ahci
Jul 11 00:22:09.258859 kernel: scsi host2: ahci
Jul 11 00:22:09.260192 kernel: scsi host3: ahci
Jul 11 00:22:09.275972 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:22:09.282701 kernel: scsi host4: ahci
Jul 11 00:22:09.282948 kernel: scsi host5: ahci
Jul 11 00:22:09.283107 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jul 11 00:22:09.296528 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jul 11 00:22:09.297381 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jul 11 00:22:09.297477 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jul 11 00:22:09.297492 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jul 11 00:22:09.297506 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jul 11 00:22:09.299904 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 00:22:09.344405 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:22:09.344595 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:22:09.349779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:22:09.364491 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:22:09.366223 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:22:09.402152 disk-uuid[632]: Primary Header is updated.
Jul 11 00:22:09.402152 disk-uuid[632]: Secondary Entries is updated.
Jul 11 00:22:09.402152 disk-uuid[632]: Secondary Header is updated.
Jul 11 00:22:09.407357 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:22:09.412362 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:22:09.607495 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 11 00:22:09.607597 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 11 00:22:09.607628 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 11 00:22:09.609372 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 11 00:22:09.610381 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 11 00:22:09.611397 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 11 00:22:09.611501 kernel: ata3.00: applying bridge limits
Jul 11 00:22:09.612367 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 11 00:22:09.613377 kernel: ata3.00: configured for UDMA/100
Jul 11 00:22:09.614502 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 11 00:22:09.678664 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 11 00:22:09.679095 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 11 00:22:09.700598 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 11 00:22:10.113748 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:22:10.116782 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:22:10.119439 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:22:10.121923 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:22:10.125655 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:22:10.156780 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:22:10.414379 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:22:10.414480 disk-uuid[633]: The operation has completed successfully.
Jul 11 00:22:10.452959 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:22:10.453143 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:22:10.502562 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:22:10.529750 sh[662]: Success
Jul 11 00:22:10.551386 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:22:10.551467 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:22:10.553353 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 11 00:22:10.565376 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 11 00:22:10.624548 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:22:10.628684 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:22:10.643562 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:22:10.654358 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 11 00:22:10.654435 kernel: BTRFS: device fsid 3f9b7830-c6a3-4ecb-9c03-fbe92ab5c328 devid 1 transid 42 /dev/mapper/usr (253:0) scanned by mount (674)
Jul 11 00:22:10.656845 kernel: BTRFS info (device dm-0): first mount of filesystem 3f9b7830-c6a3-4ecb-9c03-fbe92ab5c328
Jul 11 00:22:10.656879 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:22:10.656895 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 11 00:22:10.664917 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:22:10.666525 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 00:22:10.668028 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:22:10.669024 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:22:10.670943 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:22:10.712063 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (707)
Jul 11 00:22:10.712122 kernel: BTRFS info (device vda6): first mount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0
Jul 11 00:22:10.712135 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:22:10.712971 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 00:22:10.722369 kernel: BTRFS info (device vda6): last unmount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0
Jul 11 00:22:10.723275 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:22:10.725934 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 00:22:10.928191 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:22:10.939190 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:22:11.024435 ignition[750]: Ignition 2.21.0
Jul 11 00:22:11.024453 ignition[750]: Stage: fetch-offline
Jul 11 00:22:11.024515 ignition[750]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:22:11.024529 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:22:11.024671 ignition[750]: parsed url from cmdline: ""
Jul 11 00:22:11.024677 ignition[750]: no config URL provided
Jul 11 00:22:11.024684 ignition[750]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:22:11.024697 ignition[750]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:22:11.024736 ignition[750]: op(1): [started] loading QEMU firmware config module
Jul 11 00:22:11.024743 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:22:11.042526 ignition[750]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:22:11.044037 systemd-networkd[849]: lo: Link UP
Jul 11 00:22:11.044042 systemd-networkd[849]: lo: Gained carrier
Jul 11 00:22:11.045794 systemd-networkd[849]: Enumeration completed
Jul 11 00:22:11.045943 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:22:11.046269 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:22:11.046273 systemd-networkd[849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:22:11.047632 systemd-networkd[849]: eth0: Link UP
Jul 11 00:22:11.047637 systemd-networkd[849]: eth0: Gained carrier
Jul 11 00:22:11.047647 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:22:11.049395 systemd[1]: Reached target network.target - Network.
Jul 11 00:22:11.074488 systemd-networkd[849]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:22:11.106205 ignition[750]: parsing config with SHA512: 9dfc536f7d7441a6a5c9c1071a79defc3e47423fe388cb7401edd1b973e1f11d62cfb54b2068118a9675300f8bef31beb4c2fdcdab09f358cfdf6453a7c8d617
Jul 11 00:22:11.113983 unknown[750]: fetched base config from "system"
Jul 11 00:22:11.114258 unknown[750]: fetched user config from "qemu"
Jul 11 00:22:11.114698 ignition[750]: fetch-offline: fetch-offline passed
Jul 11 00:22:11.114795 ignition[750]: Ignition finished successfully
Jul 11 00:22:11.120713 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:22:11.122379 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:22:11.125688 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:22:11.203768 ignition[856]: Ignition 2.21.0
Jul 11 00:22:11.203787 ignition[856]: Stage: kargs
Jul 11 00:22:11.203962 ignition[856]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:22:11.203975 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:22:11.204926 ignition[856]: kargs: kargs passed
Jul 11 00:22:11.205012 ignition[856]: Ignition finished successfully
Jul 11 00:22:11.211185 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 00:22:11.215003 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:22:11.277253 ignition[864]: Ignition 2.21.0
Jul 11 00:22:11.277272 ignition[864]: Stage: disks
Jul 11 00:22:11.277726 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:22:11.277761 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:22:11.284489 ignition[864]: disks: disks passed
Jul 11 00:22:11.284562 ignition[864]: Ignition finished successfully
Jul 11 00:22:11.289608 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:22:11.289962 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:22:11.293238 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:22:11.295838 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:22:11.298349 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:22:11.298628 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:22:11.303965 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:22:11.345518 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 11 00:22:11.356895 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:22:11.360893 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:22:11.492376 kernel: EXT4-fs (vda9): mounted filesystem b9a26173-6c72-4a5b-b1cb-ad71b806f75e r/w with ordered data mode. Quota mode: none.
Jul 11 00:22:11.492920 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:22:11.493654 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:22:11.497901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:22:11.498843 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:22:11.501131 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:22:11.501184 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:22:11.501212 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:22:11.525912 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:22:11.527743 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:22:11.533364 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (882)
Jul 11 00:22:11.535888 kernel: BTRFS info (device vda6): first mount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0
Jul 11 00:22:11.535919 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:22:11.535934 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 00:22:11.541606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:22:11.578119 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:22:11.584947 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:22:11.589583 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:22:11.630413 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:22:11.756327 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:22:11.760314 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:22:11.763866 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:22:11.803592 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:22:11.805024 kernel: BTRFS info (device vda6): last unmount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0
Jul 11 00:22:11.825626 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:22:11.878829 ignition[996]: INFO : Ignition 2.21.0
Jul 11 00:22:11.878829 ignition[996]: INFO : Stage: mount
Jul 11 00:22:11.881827 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:22:11.881827 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:22:11.884282 ignition[996]: INFO : mount: mount passed
Jul 11 00:22:11.884282 ignition[996]: INFO : Ignition finished successfully
Jul 11 00:22:11.886465 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 00:22:11.888956 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 00:22:11.925783 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:22:11.954439 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009)
Jul 11 00:22:11.956970 kernel: BTRFS info (device vda6): first mount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0
Jul 11 00:22:11.957003 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:22:11.957018 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 00:22:11.964909 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:22:12.008083 ignition[1026]: INFO : Ignition 2.21.0
Jul 11 00:22:12.008083 ignition[1026]: INFO : Stage: files
Jul 11 00:22:12.010282 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:22:12.010282 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:22:12.062628 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 00:22:12.064612 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 00:22:12.064612 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 00:22:12.068989 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 00:22:12.070815 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 00:22:12.070815 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 00:22:12.069797 unknown[1026]: wrote ssh authorized keys file for user: core
Jul 11 00:22:12.075162 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 11 00:22:12.075162 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 11 00:22:12.227562 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 00:22:12.453281 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 11 00:22:12.453281 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 00:22:12.457923 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 00:22:12.457923 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:22:12.457923 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:22:12.457923 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:22:12.481360 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:22:12.481360 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:22:12.481360 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:22:12.578464 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:22:12.581087 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:22:12.581087 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 11 00:22:12.690220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 11 00:22:12.690220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 11 00:22:12.696528 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 11 00:22:12.967647 systemd-networkd[849]: eth0: Gained IPv6LL
Jul 11 00:22:13.580478 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 11 00:22:14.396298 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 11 00:22:14.396298 ignition[1026]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 11 00:22:14.401171 ignition[1026]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:22:14.403319 ignition[1026]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:22:14.403319 ignition[1026]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 11 00:22:14.403319 ignition[1026]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 11 00:22:14.403319 ignition[1026]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:22:14.403319 ignition[1026]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:22:14.403319 ignition[1026]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 11 00:22:14.403319 ignition[1026]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:22:14.429846 ignition[1026]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:22:14.435192 ignition[1026]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:22:14.437129 ignition[1026]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:22:14.437129 ignition[1026]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 00:22:14.437129 ignition[1026]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 00:22:14.437129 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:22:14.437129 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:22:14.437129 ignition[1026]: INFO : files: files passed
Jul 11 00:22:14.437129 ignition[1026]: INFO : Ignition finished successfully
Jul 11 00:22:14.448364 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 00:22:14.454868 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 00:22:14.458180 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 00:22:14.483810 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 00:22:14.483932 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 00:22:14.488447 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:22:14.490606 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:22:14.492416 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:22:14.494224 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:22:14.498548 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:22:14.500482 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 11 00:22:14.503808 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 11 00:22:14.563441 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 00:22:14.563612 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 11 00:22:14.564920 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 11 00:22:14.567450 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 11 00:22:14.567826 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 11 00:22:14.573047 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 11 00:22:14.605447 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:22:14.608580 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 11 00:22:14.634008 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:22:14.635509 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:22:14.638064 systemd[1]: Stopped target timers.target - Timer Units. Jul 11 00:22:14.639237 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 11 00:22:14.639427 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:22:14.642634 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 11 00:22:14.644528 systemd[1]: Stopped target basic.target - Basic System. Jul 11 00:22:14.646600 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 11 00:22:14.648654 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:22:14.650717 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jul 11 00:22:14.652927 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 11 00:22:14.655112 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 11 00:22:14.678976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:22:14.679366 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 11 00:22:14.679850 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 11 00:22:14.680160 systemd[1]: Stopped target swap.target - Swaps. Jul 11 00:22:14.680617 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 00:22:14.680749 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:22:14.690357 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:22:14.703392 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:22:14.703818 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 11 00:22:14.703953 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:22:14.708145 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:22:14.708276 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 00:22:14.712478 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:22:14.712637 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:22:14.713675 systemd[1]: Stopped target paths.target - Path Units. Jul 11 00:22:14.715618 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:22:14.720512 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:22:14.723633 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 00:22:14.724919 systemd[1]: Stopped target sockets.target - Socket Units. 
Jul 11 00:22:14.727038 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:22:14.727190 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:22:14.729024 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:22:14.729127 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:22:14.729937 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:22:14.730091 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:22:14.731778 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:22:14.731899 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 00:22:14.738087 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 00:22:14.739035 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:22:14.739226 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:22:14.743205 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 00:22:14.745669 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:22:14.745845 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:22:14.750729 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:22:14.750852 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:22:14.758696 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:22:14.758817 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 11 00:22:14.776036 ignition[1082]: INFO : Ignition 2.21.0 Jul 11 00:22:14.776036 ignition[1082]: INFO : Stage: umount Jul 11 00:22:14.778246 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:22:14.778246 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:22:14.778246 ignition[1082]: INFO : umount: umount passed Jul 11 00:22:14.778246 ignition[1082]: INFO : Ignition finished successfully Jul 11 00:22:14.784291 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:22:14.784472 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 11 00:22:14.789363 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:22:14.789991 systemd[1]: Stopped target network.target - Network. Jul 11 00:22:14.790507 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:22:14.790575 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 11 00:22:14.790860 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:22:14.790919 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 00:22:14.791201 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:22:14.791269 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 00:22:14.791601 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 00:22:14.791659 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 00:22:14.792140 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 00:22:14.801844 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 00:22:14.810944 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:22:14.812136 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 00:22:14.814763 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jul 11 00:22:14.816031 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 00:22:14.821650 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 11 00:22:14.823375 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:22:14.823521 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 00:22:14.827896 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 11 00:22:14.830513 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 11 00:22:14.830645 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:22:14.830688 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:22:14.831027 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:22:14.831076 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 00:22:14.832540 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 00:22:14.837619 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:22:14.837676 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:22:14.838245 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:22:14.838292 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:22:14.844573 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:22:14.844623 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 00:22:14.845889 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 11 00:22:14.845952 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:22:14.854034 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 11 00:22:14.859639 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 11 00:22:14.859732 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 11 00:22:14.881445 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:22:14.884572 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:22:14.886556 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:22:14.886623 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 11 00:22:14.888627 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:22:14.888681 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:22:14.889855 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:22:14.889928 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:22:14.890771 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:22:14.890834 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 00:22:14.896511 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:22:14.896572 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:22:14.901031 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 00:22:14.901113 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 11 00:22:14.901181 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:22:14.905604 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:22:14.905673 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:22:14.909006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 11 00:22:14.909080 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:22:14.915674 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 11 00:22:14.915755 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 11 00:22:14.915824 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 11 00:22:14.916241 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:22:14.917652 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 00:22:14.931314 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:22:14.931544 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 00:22:14.932903 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 00:22:14.937272 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 00:22:14.969172 systemd[1]: Switching root. Jul 11 00:22:15.007770 systemd-journald[221]: Journal stopped Jul 11 00:22:16.304904 systemd-journald[221]: Received SIGTERM from PID 1 (systemd). 
Jul 11 00:22:16.304968 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:22:16.304988 kernel: SELinux: policy capability open_perms=1 Jul 11 00:22:16.304999 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:22:16.305015 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:22:16.305026 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:22:16.305038 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:22:16.305053 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:22:16.305064 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:22:16.305076 kernel: SELinux: policy capability userspace_initial_context=0 Jul 11 00:22:16.305088 kernel: audit: type=1403 audit(1752193335.381:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:22:16.305100 systemd[1]: Successfully loaded SELinux policy in 55.223ms. Jul 11 00:22:16.305120 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.798ms. Jul 11 00:22:16.305133 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 11 00:22:16.305146 systemd[1]: Detected virtualization kvm. Jul 11 00:22:16.305157 systemd[1]: Detected architecture x86-64. Jul 11 00:22:16.305172 systemd[1]: Detected first boot. Jul 11 00:22:16.305185 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:22:16.305197 zram_generator::config[1127]: No configuration found. 
Jul 11 00:22:16.305210 kernel: Guest personality initialized and is inactive Jul 11 00:22:16.305227 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 11 00:22:16.305238 kernel: Initialized host personality Jul 11 00:22:16.305250 kernel: NET: Registered PF_VSOCK protocol family Jul 11 00:22:16.305261 systemd[1]: Populated /etc with preset unit settings. Jul 11 00:22:16.305274 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 11 00:22:16.305289 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 11 00:22:16.305301 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 11 00:22:16.305313 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 11 00:22:16.305339 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 11 00:22:16.305352 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 11 00:22:16.305364 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 11 00:22:16.305376 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 11 00:22:16.305389 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 11 00:22:16.305405 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 11 00:22:16.305422 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 11 00:22:16.305434 systemd[1]: Created slice user.slice - User and Session Slice. Jul 11 00:22:16.305454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:22:16.305467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:22:16.305479 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jul 11 00:22:16.305491 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 11 00:22:16.305504 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 11 00:22:16.305520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:22:16.305532 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 11 00:22:16.305544 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:22:16.305556 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:22:16.305568 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 11 00:22:16.305580 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 11 00:22:16.305595 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 11 00:22:16.305608 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 11 00:22:16.305621 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:22:16.305636 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:22:16.305648 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:22:16.305660 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:22:16.305673 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 11 00:22:16.305685 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 11 00:22:16.305697 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 11 00:22:16.305709 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:22:16.305722 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:22:16.305735 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 11 00:22:16.305749 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 11 00:22:16.305762 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 11 00:22:16.305774 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 11 00:22:16.305786 systemd[1]: Mounting media.mount - External Media Directory... Jul 11 00:22:16.305798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:22:16.305811 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 11 00:22:16.305823 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 11 00:22:16.305836 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 11 00:22:16.305849 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:22:16.305863 systemd[1]: Reached target machines.target - Containers. Jul 11 00:22:16.305876 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 11 00:22:16.305917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:22:16.305930 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:22:16.305942 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 11 00:22:16.305954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:22:16.305966 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:22:16.305978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:22:16.305994 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jul 11 00:22:16.306006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:22:16.306018 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:22:16.306031 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 11 00:22:16.306047 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 11 00:22:16.306060 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 11 00:22:16.306071 systemd[1]: Stopped systemd-fsck-usr.service. Jul 11 00:22:16.306085 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 11 00:22:16.306100 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:22:16.306112 kernel: loop: module loaded Jul 11 00:22:16.306123 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:22:16.306135 kernel: fuse: init (API version 7.41) Jul 11 00:22:16.306147 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:22:16.306160 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 11 00:22:16.306172 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 11 00:22:16.306192 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:22:16.306208 systemd[1]: verity-setup.service: Deactivated successfully. Jul 11 00:22:16.306220 systemd[1]: Stopped verity-setup.service. Jul 11 00:22:16.306232 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 11 00:22:16.306245 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 11 00:22:16.306257 kernel: ACPI: bus type drm_connector registered Jul 11 00:22:16.306268 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 11 00:22:16.306307 systemd-journald[1197]: Collecting audit messages is disabled. Jul 11 00:22:16.306353 systemd[1]: Mounted media.mount - External Media Directory. Jul 11 00:22:16.306366 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 11 00:22:16.306378 systemd-journald[1197]: Journal started Jul 11 00:22:16.306404 systemd-journald[1197]: Runtime Journal (/run/log/journal/932297faf5dc4469a51a347378b7dc0e) is 6M, max 48.6M, 42.5M free. Jul 11 00:22:15.984782 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:22:16.011517 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 11 00:22:16.012024 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 11 00:22:16.308378 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:22:16.310303 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 11 00:22:16.312078 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 11 00:22:16.313818 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 11 00:22:16.315857 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:22:16.317905 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:22:16.318198 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 11 00:22:16.320158 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:22:16.320476 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:22:16.322412 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 11 00:22:16.322714 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:22:16.324633 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:22:16.324849 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:22:16.326736 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:22:16.326953 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 11 00:22:16.328676 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:22:16.328882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:22:16.331501 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:22:16.333265 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:22:16.335208 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 11 00:22:16.337137 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 11 00:22:16.355577 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:22:16.358586 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 11 00:22:16.361480 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 11 00:22:16.369742 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:22:16.369785 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:22:16.372507 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 11 00:22:16.380461 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jul 11 00:22:16.400197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:22:16.402556 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 11 00:22:16.405077 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 11 00:22:16.406500 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:22:16.416094 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 11 00:22:16.417469 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:22:16.419982 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:22:16.429057 systemd-journald[1197]: Time spent on flushing to /var/log/journal/932297faf5dc4469a51a347378b7dc0e is 23.330ms for 975 entries. Jul 11 00:22:16.429057 systemd-journald[1197]: System Journal (/var/log/journal/932297faf5dc4469a51a347378b7dc0e) is 8M, max 195.6M, 187.6M free. Jul 11 00:22:16.563720 systemd-journald[1197]: Received client request to flush runtime journal. Jul 11 00:22:16.563779 kernel: loop0: detected capacity change from 0 to 229808 Jul 11 00:22:16.563795 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:22:16.563808 kernel: loop1: detected capacity change from 0 to 113872 Jul 11 00:22:16.563821 kernel: loop2: detected capacity change from 0 to 146240 Jul 11 00:22:16.423949 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 11 00:22:16.426541 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 11 00:22:16.429773 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 11 00:22:16.433100 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 11 00:22:16.434388 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 11 00:22:16.452897 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:22:16.541417 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 11 00:22:16.543063 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 11 00:22:16.550131 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 11 00:22:16.552190 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 11 00:22:16.558392 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:22:16.578686 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 11 00:22:16.599371 kernel: loop3: detected capacity change from 0 to 229808 Jul 11 00:22:16.640448 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jul 11 00:22:16.640467 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jul 11 00:22:16.643374 kernel: loop4: detected capacity change from 0 to 113872 Jul 11 00:22:16.647759 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:22:16.661367 kernel: loop5: detected capacity change from 0 to 146240 Jul 11 00:22:16.673571 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 11 00:22:16.674212 (sd-merge)[1267]: Merged extensions into '/usr'. Jul 11 00:22:16.699616 systemd[1]: Reload requested from client PID 1246 ('systemd-sysext') (unit systemd-sysext.service)... Jul 11 00:22:16.699793 systemd[1]: Reloading... Jul 11 00:22:16.777744 zram_generator::config[1295]: No configuration found. 
Jul 11 00:22:16.910378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:22:16.914663 ldconfig[1241]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:22:17.001021 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:22:17.001590 systemd[1]: Reloading finished in 301 ms. Jul 11 00:22:17.034406 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 11 00:22:17.074214 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 11 00:22:17.075928 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 11 00:22:17.102467 systemd[1]: Starting ensure-sysext.service... Jul 11 00:22:17.104653 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:22:17.201073 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)... Jul 11 00:22:17.201094 systemd[1]: Reloading... Jul 11 00:22:17.205051 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 11 00:22:17.205493 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 11 00:22:17.205907 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:22:17.206243 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 11 00:22:17.207269 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:22:17.207676 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. 
Jul 11 00:22:17.207835 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Jul 11 00:22:17.214737 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:22:17.214821 systemd-tmpfiles[1334]: Skipping /boot Jul 11 00:22:17.230760 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:22:17.230897 systemd-tmpfiles[1334]: Skipping /boot Jul 11 00:22:17.266361 zram_generator::config[1364]: No configuration found. Jul 11 00:22:17.474617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:22:17.580292 systemd[1]: Reloading finished in 378 ms. Jul 11 00:22:17.625298 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:22:17.640167 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 00:22:17.643739 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 11 00:22:17.662715 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 11 00:22:17.667858 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:22:17.671954 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 11 00:22:17.677751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:22:17.677997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:22:17.682679 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:22:17.691749 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 11 00:22:17.697813 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:22:17.699599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:22:17.699753 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 00:22:17.699897 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:17.701976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:22:17.703403 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:22:17.714866 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:17.715307 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:22:17.718030 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:22:17.719486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:22:17.719635 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 00:22:17.722456 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 00:22:17.723760 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:17.724985 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 00:22:17.733672 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 00:22:17.736059 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 00:22:17.738427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:22:17.738727 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:22:17.740825 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:22:17.741051 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:22:17.743184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:22:17.743487 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:22:17.756921 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 00:22:17.759861 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:17.760135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:22:17.761719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:22:17.763572 augenrules[1437]: No rules
Jul 11 00:22:17.765953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:22:17.780111 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:22:17.783695 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:22:17.785718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:22:17.786032 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 00:22:17.789322 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:22:17.791993 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 00:22:17.793213 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 00:22:17.793427 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:17.795114 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 00:22:17.797605 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 00:22:17.798551 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 00:22:17.803175 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:22:17.803460 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:22:17.805163 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:22:17.805783 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:22:17.807605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:22:17.808038 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:22:17.810056 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:22:17.810363 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:22:17.812347 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 00:22:17.818594 systemd[1]: Finished ensure-sysext.service.
Jul 11 00:22:17.824544 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:22:17.824716 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:22:17.827157 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 00:22:17.831893 systemd-udevd[1449]: Using default interface naming scheme 'v255'.
Jul 11 00:22:17.858099 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:22:17.865544 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:22:17.892916 systemd-resolved[1403]: Positive Trust Anchors:
Jul 11 00:22:17.893321 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:22:17.893391 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:22:17.898762 systemd-resolved[1403]: Defaulting to hostname 'linux'.
Jul 11 00:22:17.901860 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:22:17.904225 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:22:17.985186 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 00:22:17.986655 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:22:17.988194 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 11 00:22:17.989495 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 00:22:17.990775 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 11 00:22:17.991967 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 00:22:17.993298 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 11 00:22:17.993348 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:22:18.007424 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 00:22:18.010650 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 00:22:18.012520 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 00:22:18.013886 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:22:18.015622 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 11 00:22:18.018660 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 11 00:22:18.025322 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 11 00:22:18.027235 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 11 00:22:18.028652 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 11 00:22:18.041732 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 11 00:22:18.043484 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 11 00:22:18.045616 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 11 00:22:18.056984 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 11 00:22:18.062161 systemd-networkd[1466]: lo: Link UP
Jul 11 00:22:18.062179 systemd-networkd[1466]: lo: Gained carrier
Jul 11 00:22:18.063987 systemd-networkd[1466]: Enumeration completed
Jul 11 00:22:18.064450 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:22:18.064463 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:22:18.065073 systemd-networkd[1466]: eth0: Link UP
Jul 11 00:22:18.065223 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:22:18.065293 systemd-networkd[1466]: eth0: Gained carrier
Jul 11 00:22:18.065311 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:22:18.066973 systemd[1]: Reached target network.target - Network.
Jul 11 00:22:18.068353 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:22:18.069392 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:22:18.070391 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:22:18.070436 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:22:18.071911 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 11 00:22:18.074162 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 11 00:22:18.076530 systemd-networkd[1466]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:22:18.078272 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection.
Jul 11 00:22:18.079301 systemd-timesyncd[1459]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 11 00:22:18.079383 systemd-timesyncd[1459]: Initial clock synchronization to Fri 2025-07-11 00:22:18.060280 UTC.
Jul 11 00:22:18.085297 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 00:22:18.086489 kernel: mousedev: PS/2 mouse device common for all mice
Jul 11 00:22:18.089491 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 11 00:22:18.122367 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 11 00:22:18.130576 jq[1505]: false
Jul 11 00:22:18.132381 kernel: ACPI: button: Power Button [PWRF]
Jul 11 00:22:18.136032 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 11 00:22:18.137344 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 11 00:22:18.138803 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 11 00:22:18.142537 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 11 00:22:18.145995 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 11 00:22:18.146268 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 11 00:22:18.146836 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 11 00:22:18.151464 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 11 00:22:18.160767 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 11 00:22:18.174673 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 11 00:22:18.178369 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 11 00:22:18.181440 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 00:22:18.183546 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 11 00:22:18.184027 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 11 00:22:18.186593 systemd[1]: Starting update-engine.service - Update Engine...
Jul 11 00:22:18.188920 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 11 00:22:18.192002 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Refreshing passwd entry cache
Jul 11 00:22:18.192012 oslogin_cache_refresh[1516]: Refreshing passwd entry cache
Jul 11 00:22:18.194974 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 00:22:18.218445 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Failure getting users, quitting
Jul 11 00:22:18.218445 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 00:22:18.218445 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Refreshing group entry cache
Jul 11 00:22:18.218445 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Failure getting groups, quitting
Jul 11 00:22:18.218445 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 00:22:18.201190 oslogin_cache_refresh[1516]: Failure getting users, quitting
Jul 11 00:22:18.201208 oslogin_cache_refresh[1516]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 00:22:18.201263 oslogin_cache_refresh[1516]: Refreshing group entry cache
Jul 11 00:22:18.207628 oslogin_cache_refresh[1516]: Failure getting groups, quitting
Jul 11 00:22:18.207640 oslogin_cache_refresh[1516]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 00:22:18.283263 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 11 00:22:18.303750 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 11 00:22:18.304200 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 11 00:22:18.304490 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 11 00:22:18.305526 extend-filesystems[1515]: Found /dev/vda6
Jul 11 00:22:18.308823 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 11 00:22:18.309117 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 11 00:22:18.313671 jq[1533]: true
Jul 11 00:22:18.314548 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:22:18.317036 extend-filesystems[1515]: Found /dev/vda9
Jul 11 00:22:18.320929 extend-filesystems[1515]: Checking size of /dev/vda9
Jul 11 00:22:18.326385 update_engine[1531]: I20250711 00:22:18.325743 1531 main.cc:92] Flatcar Update Engine starting
Jul 11 00:22:18.348943 jq[1543]: true
Jul 11 00:22:18.349194 extend-filesystems[1515]: Resized partition /dev/vda9
Jul 11 00:22:18.355528 extend-filesystems[1562]: resize2fs 1.47.2 (1-Jan-2025)
Jul 11 00:22:18.360664 tar[1540]: linux-amd64/LICENSE
Jul 11 00:22:18.360998 tar[1540]: linux-amd64/helm
Jul 11 00:22:18.363361 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 11 00:22:18.363913 (ntainerd)[1550]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 11 00:22:18.367216 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 00:22:18.374670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:22:18.404600 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 11 00:22:18.486051 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 11 00:22:18.490067 extend-filesystems[1562]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 11 00:22:18.490067 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 11 00:22:18.490067 extend-filesystems[1562]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 11 00:22:18.489475 systemd[1]: motdgen.service: Deactivated successfully.
Jul 11 00:22:18.505612 extend-filesystems[1515]: Resized filesystem in /dev/vda9
Jul 11 00:22:18.504860 dbus-daemon[1502]: [system] SELinux support is enabled
Jul 11 00:22:18.490216 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 11 00:22:18.494166 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 11 00:22:18.494885 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 11 00:22:18.508251 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 11 00:22:18.511437 update_engine[1531]: I20250711 00:22:18.511263 1531 update_check_scheduler.cc:74] Next update check in 2m55s
Jul 11 00:22:18.528709 systemd[1]: Started update-engine.service - Update Engine.
Jul 11 00:22:18.540223 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 11 00:22:18.540268 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 11 00:22:18.543503 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 11 00:22:18.543524 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 11 00:22:18.548488 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 11 00:22:18.611721 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 00:22:18.650385 kernel: kvm_amd: TSC scaling supported
Jul 11 00:22:18.650496 kernel: kvm_amd: Nested Virtualization enabled
Jul 11 00:22:18.650510 kernel: kvm_amd: Nested Paging enabled
Jul 11 00:22:18.650549 kernel: kvm_amd: LBR virtualization supported
Jul 11 00:22:18.650562 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 11 00:22:18.650574 kernel: kvm_amd: Virtual GIF supported
Jul 11 00:22:18.668014 systemd-logind[1525]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 11 00:22:18.668047 systemd-logind[1525]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 11 00:22:18.705325 systemd-logind[1525]: New seat seat0.
Jul 11 00:22:18.726469 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 11 00:22:18.733733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:22:18.808381 kernel: EDAC MC: Ver: 3.0.0
Jul 11 00:22:18.813809 locksmithd[1588]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 11 00:22:18.900054 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 11 00:22:18.938169 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 11 00:22:18.942651 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 11 00:22:18.977997 systemd[1]: issuegen.service: Deactivated successfully.
Jul 11 00:22:18.978341 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 11 00:22:18.981983 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 11 00:22:19.021856 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 11 00:22:19.050448 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 11 00:22:19.054575 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 11 00:22:19.055904 systemd[1]: Reached target getty.target - Login Prompts.
Jul 11 00:22:19.183525 bash[1582]: Updated "/home/core/.ssh/authorized_keys"
Jul 11 00:22:19.186301 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 11 00:22:19.188711 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 11 00:22:19.196467 containerd[1550]: time="2025-07-11T00:22:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 11 00:22:19.200938 containerd[1550]: time="2025-07-11T00:22:19.200827641Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 11 00:22:19.220219 containerd[1550]: time="2025-07-11T00:22:19.220133938Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.482µs"
Jul 11 00:22:19.220219 containerd[1550]: time="2025-07-11T00:22:19.220199943Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 11 00:22:19.220219 containerd[1550]: time="2025-07-11T00:22:19.220227849Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 11 00:22:19.220698 containerd[1550]: time="2025-07-11T00:22:19.220592537Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 11 00:22:19.220698 containerd[1550]: time="2025-07-11T00:22:19.220625379Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 11 00:22:19.220698 containerd[1550]: time="2025-07-11T00:22:19.220661806Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 11 00:22:19.220779 containerd[1550]: time="2025-07-11T00:22:19.220763316Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 11 00:22:19.220816 containerd[1550]: time="2025-07-11T00:22:19.220782692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 11 00:22:19.221203 containerd[1550]: time="2025-07-11T00:22:19.221166584Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 11 00:22:19.221203 containerd[1550]: time="2025-07-11T00:22:19.221188032Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 11 00:22:19.221203 containerd[1550]: time="2025-07-11T00:22:19.221200178Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 11 00:22:19.221203 containerd[1550]: time="2025-07-11T00:22:19.221210351Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 11 00:22:19.221389 containerd[1550]: time="2025-07-11T00:22:19.221358361Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 11 00:22:19.221839 containerd[1550]: time="2025-07-11T00:22:19.221702803Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 11 00:22:19.221839 containerd[1550]: time="2025-07-11T00:22:19.221743676Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 11 00:22:19.221839 containerd[1550]: time="2025-07-11T00:22:19.221753739Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 11 00:22:19.221839 containerd[1550]: time="2025-07-11T00:22:19.221790686Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 11 00:22:19.222024 containerd[1550]: time="2025-07-11T00:22:19.222004081Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 11 00:22:19.222104 containerd[1550]: time="2025-07-11T00:22:19.222081430Z" level=info msg="metadata content store policy set" policy=shared
Jul 11 00:22:19.237827 containerd[1550]: time="2025-07-11T00:22:19.237745079Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 11 00:22:19.237827 containerd[1550]: time="2025-07-11T00:22:19.237843785Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 11 00:22:19.238017 containerd[1550]: time="2025-07-11T00:22:19.237860767Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 11 00:22:19.238017 containerd[1550]: time="2025-07-11T00:22:19.237874565Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 11 00:22:19.238017 containerd[1550]: time="2025-07-11T00:22:19.237892908Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 11 00:22:19.238017 containerd[1550]: time="2025-07-11T00:22:19.237906095Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 11 00:22:19.238017 containerd[1550]: time="2025-07-11T00:22:19.237954568Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 11 00:22:19.238017 containerd[1550]: time="2025-07-11T00:22:19.237967434Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 11 00:22:19.238017 containerd[1550]: time="2025-07-11T00:22:19.237977948Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 11 00:22:19.238017 containerd[1550]: time="2025-07-11T00:22:19.237987540Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 11 00:22:19.238017 containerd[1550]: time="2025-07-11T00:22:19.237997102Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 11 00:22:19.238017 containerd[1550]: time="2025-07-11T00:22:19.238010459Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 11 00:22:19.238224 containerd[1550]: time="2025-07-11T00:22:19.238213189Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 11 00:22:19.238258 containerd[1550]: time="2025-07-11T00:22:19.238243779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 11 00:22:19.238284 containerd[1550]: time="2025-07-11T00:22:19.238260430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 11 00:22:19.238284 containerd[1550]: time="2025-07-11T00:22:19.238277203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 11 00:22:19.238359 containerd[1550]: time="2025-07-11T00:22:19.238288657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 11 00:22:19.238359 containerd[1550]: time="2025-07-11T00:22:19.238300122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 11 00:22:19.238359 containerd[1550]: time="2025-07-11T00:22:19.238311386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 11 00:22:19.238359 containerd[1550]: time="2025-07-11T00:22:19.238321699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 11 00:22:19.238453 containerd[1550]: time="2025-07-11T00:22:19.238364975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 11 00:22:19.238453 containerd[1550]: time="2025-07-11T00:22:19.238377992Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 11 00:22:19.238453 containerd[1550]: time="2025-07-11T00:22:19.238388955Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 11 00:22:19.238510 containerd[1550]: time="2025-07-11T00:22:19.238489224Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 11 00:22:19.238510 containerd[1550]: time="2025-07-11T00:22:19.238506557Z" level=info msg="Start snapshots syncer"
Jul 11 00:22:19.238559 containerd[1550]: time="2025-07-11T00:22:19.238533792Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 11 00:22:19.238968 containerd[1550]: time="2025-07-11T00:22:19.238919918Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 11 00:22:19.239182 containerd[1550]: time="2025-07-11T00:22:19.238978412Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 11 00:22:19.239182 containerd[1550]: time="2025-07-11T00:22:19.239079873Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 11 00:22:19.239246 containerd[1550]: time="2025-07-11T00:22:19.239195001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 11 00:22:19.239246 containerd[1550]: time="2025-07-11T00:22:19.239215297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 11 00:22:19.239246 containerd[1550]: time="2025-07-11T00:22:19.239225390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 11 00:22:19.239319 containerd[1550]: time="2025-07-11T00:22:19.239247899Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 11 00:22:19.239319 containerd[1550]: time="2025-07-11T00:22:19.239260384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 11 00:22:19.239319 containerd[1550]: time="2025-07-11T00:22:19.239272620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 11 00:22:19.239319 containerd[1550]: time="2025-07-11T00:22:19.239283755Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 11 00:22:19.239413 containerd[1550]: time="2025-07-11T00:22:19.239386287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 11 00:22:19.239533 systemd-networkd[1466]: eth0: Gained IPv6LL
Jul 11 00:22:19.239869 containerd[1550]: time="2025-07-11T00:22:19.239781364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 11 00:22:19.239869 containerd[1550]: time="2025-07-11T00:22:19.239818702Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 11 00:22:19.239869 containerd[1550]: time="2025-07-11T00:22:19.239854868Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 11 00:22:19.239950 containerd[1550]: time="2025-07-11T00:22:19.239869808Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 11 00:22:19.239950 containerd[1550]: time="2025-07-11T00:22:19.239879210Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 11 00:22:19.239950 containerd[1550]: time="2025-07-11T00:22:19.239889924Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 11 00:22:19.239950 containerd[1550]: time="2025-07-11T00:22:19.239897593Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 11 00:22:19.239950 containerd[1550]: time="2025-07-11T00:22:19.239907716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 11 00:22:19.239950 containerd[1550]: time="2025-07-11T00:22:19.239918009Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 11 00:22:19.239950 containerd[1550]: time="2025-07-11T00:22:19.239938756Z" level=info msg="runtime interface created"
Jul 11 00:22:19.239950 containerd[1550]: time="2025-07-11T00:22:19.239944264Z" level=info msg="created NRI interface"
Jul 11 00:22:19.239950 containerd[1550]: time="2025-07-11T00:22:19.239954397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 11 00:22:19.240131 containerd[1550]: time="2025-07-11T00:22:19.239967263Z" level=info msg="Connect containerd service"
Jul 11 00:22:19.240131 containerd[1550]: time="2025-07-11T00:22:19.239992595Z" level=info msg="using experimental NRI integration - 
disable nri plugin to prevent this" Jul 11 00:22:19.241016 containerd[1550]: time="2025-07-11T00:22:19.240971813Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:22:19.334302 tar[1540]: linux-amd64/README.md Jul 11 00:22:19.334493 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:22:19.339022 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:22:19.342163 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:22:19.361197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:22:19.364370 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:22:19.366798 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:22:19.396757 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:22:19.422002 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:22:19.422325 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:22:19.424413 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:22:19.530543 containerd[1550]: time="2025-07-11T00:22:19.530201520Z" level=info msg="Start subscribing containerd event" Jul 11 00:22:19.530543 containerd[1550]: time="2025-07-11T00:22:19.530410819Z" level=info msg="Start recovering state" Jul 11 00:22:19.530685 containerd[1550]: time="2025-07-11T00:22:19.530653020Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 11 00:22:19.530729 containerd[1550]: time="2025-07-11T00:22:19.530690188Z" level=info msg="Start event monitor" Jul 11 00:22:19.530729 containerd[1550]: time="2025-07-11T00:22:19.530723691Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:22:19.531168 containerd[1550]: time="2025-07-11T00:22:19.530745318Z" level=info msg="Start streaming server" Jul 11 00:22:19.531168 containerd[1550]: time="2025-07-11T00:22:19.530763702Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 11 00:22:19.531168 containerd[1550]: time="2025-07-11T00:22:19.530781405Z" level=info msg="runtime interface starting up..." Jul 11 00:22:19.531168 containerd[1550]: time="2025-07-11T00:22:19.530792329Z" level=info msg="starting plugins..." Jul 11 00:22:19.531168 containerd[1550]: time="2025-07-11T00:22:19.530825873Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 11 00:22:19.531168 containerd[1550]: time="2025-07-11T00:22:19.530874875Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:22:19.531168 containerd[1550]: time="2025-07-11T00:22:19.531150029Z" level=info msg="containerd successfully booted in 0.335916s" Jul 11 00:22:19.531445 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:22:20.848706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:22:20.864126 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:22:20.864499 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:22:20.866129 systemd[1]: Startup finished in 3.882s (kernel) + 7.628s (initrd) + 5.538s (userspace) = 17.049s. Jul 11 00:22:21.763819 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
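The `failed to load cni during init` error above is expected at this point in the boot: the CRI plugin's config (visible in the `starting cri plugin` entry) points at `confDir: /etc/cni/net.d` and `binDir: /opt/cni/bin`, and no network plugin has written a config there yet — that normally happens later, when the cluster's CNI add-on is installed. As a hedged illustration only, a minimal bridge network config of the shape containerd accepts in that directory could look like this (the name and subnet are invented; a real add-on such as Flannel or Calico writes its own file):

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

The `Start cni network conf syncer for default` entry shows containerd will pick such a file up on its own once it appears, without a restart.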
Jul 11 00:22:21.765762 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:35474.service - OpenSSH per-connection server daemon (10.0.0.1:35474).
Jul 11 00:22:21.975982 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 35474 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:22:22.018891 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:22:22.026091 kubelet[1665]: E0711 00:22:22.025965 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:22:22.028293 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 11 00:22:22.030135 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 11 00:22:22.030539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:22:22.030872 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:22:22.031293 systemd[1]: kubelet.service: Consumed 2.439s CPU time, 267.9M memory peak.
Jul 11 00:22:22.042208 systemd-logind[1525]: New session 1 of user core.
Jul 11 00:22:22.069975 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 11 00:22:22.074325 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 11 00:22:22.105761 (systemd)[1682]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 11 00:22:22.110926 systemd-logind[1525]: New session c1 of user core.
Jul 11 00:22:22.309863 systemd[1682]: Queued start job for default target default.target.
Jul 11 00:22:22.330606 systemd[1682]: Created slice app.slice - User Application Slice.
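The kubelet exit above (`/var/lib/kubelet/config.yaml: no such file or directory`) is the normal state of a node that has not yet joined a cluster: on kubeadm-managed nodes that file is generated by `kubeadm init` or `kubeadm join`, and systemd simply keeps restarting the unit until it exists. As an illustrative sketch only (all values invented, not taken from this log), the file kubeadm writes there is a `KubeletConfiguration` along these lines:

```yaml
# /var/lib/kubelet/config.yaml — illustrative, normally generated by kubeadm
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
```

Until that file appears, each restart attempt will fail with the same error, as seen again further down in this log.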
Jul 11 00:22:22.330645 systemd[1682]: Reached target paths.target - Paths.
Jul 11 00:22:22.330699 systemd[1682]: Reached target timers.target - Timers.
Jul 11 00:22:22.332617 systemd[1682]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 11 00:22:22.356364 systemd[1682]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 11 00:22:22.356543 systemd[1682]: Reached target sockets.target - Sockets.
Jul 11 00:22:22.356596 systemd[1682]: Reached target basic.target - Basic System.
Jul 11 00:22:22.356649 systemd[1682]: Reached target default.target - Main User Target.
Jul 11 00:22:22.356692 systemd[1682]: Startup finished in 235ms.
Jul 11 00:22:22.357171 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 11 00:22:22.364665 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 11 00:22:22.432309 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:35484.service - OpenSSH per-connection server daemon (10.0.0.1:35484).
Jul 11 00:22:22.490568 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 35484 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:22:22.492445 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:22:22.497272 systemd-logind[1525]: New session 2 of user core.
Jul 11 00:22:22.517664 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 11 00:22:22.577278 sshd[1695]: Connection closed by 10.0.0.1 port 35484
Jul 11 00:22:22.578109 sshd-session[1693]: pam_unix(sshd:session): session closed for user core
Jul 11 00:22:22.588662 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:35484.service: Deactivated successfully.
Jul 11 00:22:22.590510 systemd[1]: session-2.scope: Deactivated successfully.
Jul 11 00:22:22.591347 systemd-logind[1525]: Session 2 logged out. Waiting for processes to exit.
Jul 11 00:22:22.594401 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:35500.service - OpenSSH per-connection server daemon (10.0.0.1:35500).
Jul 11 00:22:22.595435 systemd-logind[1525]: Removed session 2.
Jul 11 00:22:22.645127 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 35500 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:22:22.646550 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:22:22.651610 systemd-logind[1525]: New session 3 of user core.
Jul 11 00:22:22.665551 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 11 00:22:22.716350 sshd[1703]: Connection closed by 10.0.0.1 port 35500
Jul 11 00:22:22.716682 sshd-session[1701]: pam_unix(sshd:session): session closed for user core
Jul 11 00:22:22.729468 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:35500.service: Deactivated successfully.
Jul 11 00:22:22.731610 systemd[1]: session-3.scope: Deactivated successfully.
Jul 11 00:22:22.732480 systemd-logind[1525]: Session 3 logged out. Waiting for processes to exit.
Jul 11 00:22:22.735817 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:35502.service - OpenSSH per-connection server daemon (10.0.0.1:35502).
Jul 11 00:22:22.736766 systemd-logind[1525]: Removed session 3.
Jul 11 00:22:22.790486 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 35502 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:22:22.791831 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:22:22.796453 systemd-logind[1525]: New session 4 of user core.
Jul 11 00:22:22.803467 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 11 00:22:22.858129 sshd[1711]: Connection closed by 10.0.0.1 port 35502
Jul 11 00:22:22.859482 sshd-session[1709]: pam_unix(sshd:session): session closed for user core
Jul 11 00:22:22.868403 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:35502.service: Deactivated successfully.
Jul 11 00:22:22.870390 systemd[1]: session-4.scope: Deactivated successfully.
Jul 11 00:22:22.871140 systemd-logind[1525]: Session 4 logged out. Waiting for processes to exit.
Jul 11 00:22:22.874123 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:35518.service - OpenSSH per-connection server daemon (10.0.0.1:35518).
Jul 11 00:22:22.874982 systemd-logind[1525]: Removed session 4.
Jul 11 00:22:22.950774 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 35518 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:22:22.952761 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:22:22.958462 systemd-logind[1525]: New session 5 of user core.
Jul 11 00:22:22.967525 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 11 00:22:23.032908 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 11 00:22:23.033306 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 11 00:22:23.052379 sudo[1720]: pam_unix(sudo:session): session closed for user root
Jul 11 00:22:23.054241 sshd[1719]: Connection closed by 10.0.0.1 port 35518
Jul 11 00:22:23.054771 sshd-session[1717]: pam_unix(sshd:session): session closed for user core
Jul 11 00:22:23.064496 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:35518.service: Deactivated successfully.
Jul 11 00:22:23.066810 systemd[1]: session-5.scope: Deactivated successfully.
Jul 11 00:22:23.067656 systemd-logind[1525]: Session 5 logged out. Waiting for processes to exit.
Jul 11 00:22:23.070882 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:35530.service - OpenSSH per-connection server daemon (10.0.0.1:35530).
Jul 11 00:22:23.071756 systemd-logind[1525]: Removed session 5.
Jul 11 00:22:23.137597 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 35530 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:22:23.139245 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:22:23.144268 systemd-logind[1525]: New session 6 of user core.
Jul 11 00:22:23.154473 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 11 00:22:23.210324 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 11 00:22:23.210672 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 11 00:22:23.261692 sudo[1730]: pam_unix(sudo:session): session closed for user root
Jul 11 00:22:23.270472 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 11 00:22:23.270870 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 11 00:22:23.283391 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 11 00:22:23.331176 augenrules[1752]: No rules
Jul 11 00:22:23.333387 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 00:22:23.333770 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 00:22:23.335068 sudo[1729]: pam_unix(sudo:session): session closed for user root
Jul 11 00:22:23.336822 sshd[1728]: Connection closed by 10.0.0.1 port 35530
Jul 11 00:22:23.337205 sshd-session[1726]: pam_unix(sshd:session): session closed for user core
Jul 11 00:22:23.350115 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:35530.service: Deactivated successfully.
Jul 11 00:22:23.352068 systemd[1]: session-6.scope: Deactivated successfully.
Jul 11 00:22:23.353052 systemd-logind[1525]: Session 6 logged out. Waiting for processes to exit.
Jul 11 00:22:23.356109 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:35540.service - OpenSSH per-connection server daemon (10.0.0.1:35540).
Jul 11 00:22:23.356904 systemd-logind[1525]: Removed session 6.
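The `augenrules[1752]: No rules` result follows directly from the `rm -rf` in session 6 above: `augenrules` assembles the active rule set from the files under `/etc/audit/rules.d/`, and both rule files (`80-selinux.rules`, `99-default.rules`) had just been deleted before `audit-rules` was restarted. Purely as an illustration of the format that directory expects (the watch rule and key name below are invented, not from this system), a drop-in would look like:

```
# /etc/audit/rules.d/10-example.rules — illustrative only
-D
-b 8192
-w /etc/passwd -p wa -k identity
```

Any such file would be merged into the active rules on the next `audit-rules` restart.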
Jul 11 00:22:23.413182 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 35540 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY
Jul 11 00:22:23.414963 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:22:23.420486 systemd-logind[1525]: New session 7 of user core.
Jul 11 00:22:23.431511 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 11 00:22:23.488542 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 11 00:22:23.489015 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 11 00:22:24.452117 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 11 00:22:24.464828 (dockerd)[1785]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 11 00:22:25.209535 dockerd[1785]: time="2025-07-11T00:22:25.209418847Z" level=info msg="Starting up"
Jul 11 00:22:25.211607 dockerd[1785]: time="2025-07-11T00:22:25.211565594Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 11 00:22:25.849162 dockerd[1785]: time="2025-07-11T00:22:25.849058394Z" level=info msg="Loading containers: start."
Jul 11 00:22:25.860382 kernel: Initializing XFRM netlink socket
Jul 11 00:22:26.148252 systemd-networkd[1466]: docker0: Link UP
Jul 11 00:22:26.155741 dockerd[1785]: time="2025-07-11T00:22:26.155693163Z" level=info msg="Loading containers: done."
Jul 11 00:22:26.213242 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2314721889-merged.mount: Deactivated successfully.
Jul 11 00:22:26.215497 dockerd[1785]: time="2025-07-11T00:22:26.215444018Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 11 00:22:26.215850 dockerd[1785]: time="2025-07-11T00:22:26.215564637Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 11 00:22:26.215850 dockerd[1785]: time="2025-07-11T00:22:26.215717823Z" level=info msg="Initializing buildkit"
Jul 11 00:22:26.250150 dockerd[1785]: time="2025-07-11T00:22:26.250094587Z" level=info msg="Completed buildkit initialization"
Jul 11 00:22:26.256981 dockerd[1785]: time="2025-07-11T00:22:26.256929782Z" level=info msg="Daemon has completed initialization"
Jul 11 00:22:26.257146 dockerd[1785]: time="2025-07-11T00:22:26.257063199Z" level=info msg="API listen on /run/docker.sock"
Jul 11 00:22:26.257255 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 11 00:22:27.331358 containerd[1550]: time="2025-07-11T00:22:27.331255114Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 11 00:22:28.460380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176633677.mount: Deactivated successfully.
Jul 11 00:22:32.281414 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 11 00:22:32.286588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:22:32.626543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
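The `Referenced but unset environment variable` notice from docker.service above is harmless: the unit file expands `$DOCKER_OPTS`, `$DOCKER_OPT_BIP`, `$DOCKER_OPT_IPMASQ`, `$DOCKER_OPT_MTU`, and `$DOCKER_CGROUPS`, and nothing on this image defines them, so they expand to empty strings. If values were wanted, a systemd drop-in of this general shape would supply them (the flag values below are illustrative, not recommendations):

```ini
# /etc/systemd/system/docker.service.d/10-env.conf — illustrative only
[Service]
Environment="DOCKER_OPTS=--log-level=warn"
Environment="DOCKER_OPT_MTU=--mtu=1500"
```

After `systemctl daemon-reload` and a docker.service restart, the warnings for the variables set this way would disappear.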
Jul 11 00:22:32.640906 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 11 00:22:32.792868 kubelet[2060]: E0711 00:22:32.792795 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:22:32.815930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:22:32.816297 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:22:32.816966 systemd[1]: kubelet.service: Consumed 467ms CPU time, 111.1M memory peak.
Jul 11 00:22:33.355064 containerd[1550]: time="2025-07-11T00:22:33.354945358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:33.355780 containerd[1550]: time="2025-07-11T00:22:33.355679227Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099"
Jul 11 00:22:33.359464 containerd[1550]: time="2025-07-11T00:22:33.359402609Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:33.363471 containerd[1550]: time="2025-07-11T00:22:33.363417477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:33.364619 containerd[1550]: time="2025-07-11T00:22:33.364567459Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 6.033229396s"
Jul 11 00:22:33.364690 containerd[1550]: time="2025-07-11T00:22:33.364624223Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 11 00:22:33.365535 containerd[1550]: time="2025-07-11T00:22:33.365314556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 11 00:22:35.052260 containerd[1550]: time="2025-07-11T00:22:35.052179696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:35.053131 containerd[1550]: time="2025-07-11T00:22:35.053088948Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946"
Jul 11 00:22:35.054411 containerd[1550]: time="2025-07-11T00:22:35.054373030Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:35.057205 containerd[1550]: time="2025-07-11T00:22:35.057138233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:35.058474 containerd[1550]: time="2025-07-11T00:22:35.058404758Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.692927447s"
Jul 11 00:22:35.058474 containerd[1550]: time="2025-07-11T00:22:35.058467914Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 11 00:22:35.059111 containerd[1550]: time="2025-07-11T00:22:35.059070079Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 11 00:22:36.381948 containerd[1550]: time="2025-07-11T00:22:36.381854670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:36.382909 containerd[1550]: time="2025-07-11T00:22:36.382874481Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055"
Jul 11 00:22:36.384453 containerd[1550]: time="2025-07-11T00:22:36.384417844Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:36.387516 containerd[1550]: time="2025-07-11T00:22:36.387448304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:36.388302 containerd[1550]: time="2025-07-11T00:22:36.388249100Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.329145019s"
Jul 11 00:22:36.388302 containerd[1550]: time="2025-07-11T00:22:36.388286427Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 11 00:22:36.389084 containerd[1550]: time="2025-07-11T00:22:36.388866445Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 11 00:22:38.388729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount729786587.mount: Deactivated successfully.
Jul 11 00:22:39.040204 containerd[1550]: time="2025-07-11T00:22:39.040124958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:39.041369 containerd[1550]: time="2025-07-11T00:22:39.041318813Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746"
Jul 11 00:22:39.042763 containerd[1550]: time="2025-07-11T00:22:39.042706060Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:39.045455 containerd[1550]: time="2025-07-11T00:22:39.045410726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:39.046207 containerd[1550]: time="2025-07-11T00:22:39.046151063Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.657245227s"
Jul 11 00:22:39.046207 containerd[1550]: time="2025-07-11T00:22:39.046195041Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 11 00:22:39.046863 containerd[1550]: time="2025-07-11T00:22:39.046832567Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 11 00:22:39.894652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2999843585.mount: Deactivated successfully.
Jul 11 00:22:41.901220 containerd[1550]: time="2025-07-11T00:22:41.901128242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:41.902428 containerd[1550]: time="2025-07-11T00:22:41.902396707Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jul 11 00:22:41.904353 containerd[1550]: time="2025-07-11T00:22:41.904265533Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:41.907524 containerd[1550]: time="2025-07-11T00:22:41.907465043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:41.908465 containerd[1550]: time="2025-07-11T00:22:41.908428577Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.861564159s"
Jul 11 00:22:41.908465 containerd[1550]: time="2025-07-11T00:22:41.908464374Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 11 00:22:41.908946 containerd[1550]: time="2025-07-11T00:22:41.908921707Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 11 00:22:42.456050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630089530.mount: Deactivated successfully.
Jul 11 00:22:42.465666 containerd[1550]: time="2025-07-11T00:22:42.465589159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:22:42.466531 containerd[1550]: time="2025-07-11T00:22:42.466444440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 11 00:22:42.468140 containerd[1550]: time="2025-07-11T00:22:42.468057116Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:22:42.470555 containerd[1550]: time="2025-07-11T00:22:42.470496315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:22:42.471297 containerd[1550]: time="2025-07-11T00:22:42.471230203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 562.279329ms"
Jul 11 00:22:42.471297 containerd[1550]: time="2025-07-11T00:22:42.471284800Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 11 00:22:42.472109 containerd[1550]: time="2025-07-11T00:22:42.472063699Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 11 00:22:42.928797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 11 00:22:42.930518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:22:43.166485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:22:43.170772 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 11 00:22:43.227116 kubelet[2148]: E0711 00:22:43.226948 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:22:43.231208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:22:43.231430 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:22:43.231815 systemd[1]: kubelet.service: Consumed 253ms CPU time, 110.8M memory peak.
Jul 11 00:22:43.677325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302873731.mount: Deactivated successfully.
Jul 11 00:22:46.368820 containerd[1550]: time="2025-07-11T00:22:46.368702796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:46.369732 containerd[1550]: time="2025-07-11T00:22:46.369648171Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175"
Jul 11 00:22:46.371087 containerd[1550]: time="2025-07-11T00:22:46.371027653Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:46.374043 containerd[1550]: time="2025-07-11T00:22:46.374002917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:22:46.375025 containerd[1550]: time="2025-07-11T00:22:46.374972672Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.902874319s"
Jul 11 00:22:46.375025 containerd[1550]: time="2025-07-11T00:22:46.375016483Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 11 00:22:50.372164 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:22:50.372426 systemd[1]: kubelet.service: Consumed 253ms CPU time, 110.8M memory peak.
Jul 11 00:22:50.375287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:22:50.407303 systemd[1]: Reload requested from client PID 2242 ('systemctl') (unit session-7.scope)...
Jul 11 00:22:50.407324 systemd[1]: Reloading... Jul 11 00:22:50.489392 zram_generator::config[2285]: No configuration found. Jul 11 00:22:50.803366 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:22:50.927278 systemd[1]: Reloading finished in 519 ms. Jul 11 00:22:50.994190 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 00:22:50.994290 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 00:22:50.994628 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:22:50.994672 systemd[1]: kubelet.service: Consumed 171ms CPU time, 98.2M memory peak. Jul 11 00:22:50.996415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:22:51.183318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:22:51.198777 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:22:52.524273 kubelet[2333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:22:52.524273 kubelet[2333]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:22:52.524273 kubelet[2333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 11 00:22:52.524825 kubelet[2333]: I0711 00:22:52.524322 2333 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:22:53.119128 kubelet[2333]: I0711 00:22:53.119061 2333 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 11 00:22:53.119128 kubelet[2333]: I0711 00:22:53.119100 2333 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:22:53.119430 kubelet[2333]: I0711 00:22:53.119400 2333 server.go:956] "Client rotation is on, will bootstrap in background" Jul 11 00:22:53.168321 kubelet[2333]: E0711 00:22:53.168251 2333 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 11 00:22:53.168321 kubelet[2333]: I0711 00:22:53.168245 2333 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:22:53.178235 kubelet[2333]: I0711 00:22:53.178200 2333 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 00:22:53.185916 kubelet[2333]: I0711 00:22:53.185858 2333 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:22:53.186275 kubelet[2333]: I0711 00:22:53.186211 2333 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:22:53.186566 kubelet[2333]: I0711 00:22:53.186261 2333 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:22:53.186566 kubelet[2333]: I0711 00:22:53.186567 2333 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:22:53.187056 
kubelet[2333]: I0711 00:22:53.186581 2333 container_manager_linux.go:303] "Creating device plugin manager" Jul 11 00:22:53.187056 kubelet[2333]: I0711 00:22:53.186775 2333 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:22:53.191036 kubelet[2333]: I0711 00:22:53.190989 2333 kubelet.go:480] "Attempting to sync node with API server" Jul 11 00:22:53.191079 kubelet[2333]: I0711 00:22:53.191054 2333 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:22:53.191114 kubelet[2333]: I0711 00:22:53.191100 2333 kubelet.go:386] "Adding apiserver pod source" Jul 11 00:22:53.191139 kubelet[2333]: I0711 00:22:53.191125 2333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:22:53.244303 kubelet[2333]: E0711 00:22:53.244216 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 11 00:22:53.244303 kubelet[2333]: E0711 00:22:53.244220 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 11 00:22:53.244658 kubelet[2333]: I0711 00:22:53.244635 2333 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 11 00:22:53.246088 kubelet[2333]: I0711 00:22:53.246054 2333 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 11 00:22:53.247378 kubelet[2333]: W0711 
00:22:53.247351 2333 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:22:53.252399 kubelet[2333]: I0711 00:22:53.252358 2333 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:22:53.252553 kubelet[2333]: I0711 00:22:53.252443 2333 server.go:1289] "Started kubelet" Jul 11 00:22:53.252667 kubelet[2333]: I0711 00:22:53.252587 2333 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:22:53.254759 kubelet[2333]: I0711 00:22:53.253680 2333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:22:53.254759 kubelet[2333]: I0711 00:22:53.254164 2333 server.go:317] "Adding debug handlers to kubelet server" Jul 11 00:22:53.254759 kubelet[2333]: I0711 00:22:53.254591 2333 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:22:53.257915 kubelet[2333]: I0711 00:22:53.257034 2333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:22:53.260586 kubelet[2333]: I0711 00:22:53.257752 2333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:22:53.260936 kubelet[2333]: I0711 00:22:53.260907 2333 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:22:53.261453 kubelet[2333]: I0711 00:22:53.261422 2333 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:22:53.261638 kubelet[2333]: E0711 00:22:53.259997 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a96e680a28c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:22:53.25239566 +0000 UTC m=+2.049226751,LastTimestamp:2025-07-11 00:22:53.25239566 +0000 UTC m=+2.049226751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:22:53.261842 kubelet[2333]: I0711 00:22:53.261825 2333 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:22:53.262226 kubelet[2333]: E0711 00:22:53.262165 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:53.262499 kubelet[2333]: E0711 00:22:53.262457 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 11 00:22:53.262607 kubelet[2333]: E0711 00:22:53.262497 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms" Jul 11 00:22:53.262893 kubelet[2333]: E0711 00:22:53.262861 2333 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:22:53.263702 kubelet[2333]: I0711 00:22:53.263677 2333 factory.go:223] Registration of the containerd container factory successfully Jul 11 00:22:53.263826 kubelet[2333]: I0711 00:22:53.263784 2333 factory.go:223] Registration of the systemd container factory successfully Jul 11 00:22:53.263917 kubelet[2333]: I0711 00:22:53.263887 2333 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:22:53.295148 kubelet[2333]: I0711 00:22:53.295109 2333 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:22:53.295319 kubelet[2333]: I0711 00:22:53.295306 2333 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:22:53.295420 kubelet[2333]: I0711 00:22:53.295408 2333 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:22:53.296466 kubelet[2333]: I0711 00:22:53.296274 2333 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 11 00:22:53.298069 kubelet[2333]: I0711 00:22:53.297888 2333 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 11 00:22:53.298069 kubelet[2333]: I0711 00:22:53.297928 2333 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 11 00:22:53.298069 kubelet[2333]: I0711 00:22:53.297976 2333 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 11 00:22:53.298069 kubelet[2333]: I0711 00:22:53.297995 2333 kubelet.go:2436] "Starting kubelet main sync loop" Jul 11 00:22:53.298069 kubelet[2333]: E0711 00:22:53.298051 2333 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:22:53.308695 kubelet[2333]: E0711 00:22:53.299368 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 11 00:22:53.363396 kubelet[2333]: E0711 00:22:53.363308 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:53.398749 kubelet[2333]: E0711 00:22:53.398547 2333 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:22:53.433586 kubelet[2333]: I0711 00:22:53.433533 2333 policy_none.go:49] "None policy: Start" Jul 11 00:22:53.433586 kubelet[2333]: I0711 00:22:53.433599 2333 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:22:53.433586 kubelet[2333]: I0711 00:22:53.433619 2333 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:22:53.463571 kubelet[2333]: E0711 00:22:53.463488 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:53.463571 kubelet[2333]: E0711 00:22:53.463519 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" Jul 11 00:22:53.519394 systemd[1]: Created slice kubepods.slice 
- libcontainer container kubepods.slice. Jul 11 00:22:53.539061 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 11 00:22:53.551549 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 00:22:53.552942 kubelet[2333]: E0711 00:22:53.552892 2333 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 11 00:22:53.553287 kubelet[2333]: I0711 00:22:53.553165 2333 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:22:53.553287 kubelet[2333]: I0711 00:22:53.553179 2333 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:22:53.553668 kubelet[2333]: I0711 00:22:53.553457 2333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:22:53.554199 kubelet[2333]: E0711 00:22:53.554174 2333 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 00:22:53.554248 kubelet[2333]: E0711 00:22:53.554224 2333 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:22:53.636273 systemd[1]: Created slice kubepods-burstable-pod4d3ae416c2c00f8b9bcd1c3af857345c.slice - libcontainer container kubepods-burstable-pod4d3ae416c2c00f8b9bcd1c3af857345c.slice. 
Jul 11 00:22:53.651737 kubelet[2333]: E0711 00:22:53.651633 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:22:53.654939 kubelet[2333]: I0711 00:22:53.654899 2333 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:22:53.655310 kubelet[2333]: E0711 00:22:53.655271 2333 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jul 11 00:22:53.664095 kubelet[2333]: I0711 00:22:53.664054 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d3ae416c2c00f8b9bcd1c3af857345c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d3ae416c2c00f8b9bcd1c3af857345c\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:53.664145 kubelet[2333]: I0711 00:22:53.664096 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d3ae416c2c00f8b9bcd1c3af857345c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4d3ae416c2c00f8b9bcd1c3af857345c\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:53.664145 kubelet[2333]: I0711 00:22:53.664126 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:53.664197 kubelet[2333]: I0711 00:22:53.664151 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:53.664197 kubelet[2333]: I0711 00:22:53.664172 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:53.664388 kubelet[2333]: I0711 00:22:53.664289 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:53.664448 kubelet[2333]: I0711 00:22:53.664423 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:53.664484 kubelet[2333]: I0711 00:22:53.664457 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d3ae416c2c00f8b9bcd1c3af857345c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d3ae416c2c00f8b9bcd1c3af857345c\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:53.736284 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container 
kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 11 00:22:53.738284 kubelet[2333]: E0711 00:22:53.738244 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:22:53.765041 kubelet[2333]: I0711 00:22:53.764999 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:22:53.857636 kubelet[2333]: I0711 00:22:53.857601 2333 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:22:53.858134 kubelet[2333]: E0711 00:22:53.858081 2333 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jul 11 00:22:53.864685 kubelet[2333]: E0711 00:22:53.864644 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" Jul 11 00:22:53.937572 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jul 11 00:22:53.940000 kubelet[2333]: E0711 00:22:53.939963 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:22:53.940393 kubelet[2333]: E0711 00:22:53.940356 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:53.941117 containerd[1550]: time="2025-07-11T00:22:53.941049527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:53.952407 kubelet[2333]: E0711 00:22:53.952360 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:53.953054 containerd[1550]: time="2025-07-11T00:22:53.952993629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4d3ae416c2c00f8b9bcd1c3af857345c,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:54.038860 kubelet[2333]: E0711 00:22:54.038782 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:54.039521 containerd[1550]: time="2025-07-11T00:22:54.039474645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:54.049386 kubelet[2333]: E0711 00:22:54.049310 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 11 00:22:54.058087 kubelet[2333]: E0711 00:22:54.058026 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 11 00:22:54.127552 kubelet[2333]: E0711 00:22:54.127466 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 11 00:22:54.148460 containerd[1550]: time="2025-07-11T00:22:54.148314065Z" level=info msg="connecting to shim 9af7e25299d2f99e112b67624e5a62c9d7c8b44431edfcc5406e88633e5e9c04" address="unix:///run/containerd/s/968940d68b20ce7a99cfe669895d5a067fd76e8de2ac1269d08942ff024d97c7" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:22:54.149345 containerd[1550]: time="2025-07-11T00:22:54.149282624Z" level=info msg="connecting to shim 861e166575111371f112c27ff5792f9c605a907cdfda23b461738c398d19b17d" address="unix:///run/containerd/s/130c171973b11736a1d621d9122254a1cf98a1396305c6f2438a3642dd58b74f" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:22:54.159693 containerd[1550]: time="2025-07-11T00:22:54.159636488Z" level=info msg="connecting to shim 457496cac6b211f5fe4002bbe1088c81871d7eb57073106a690db8f8135ac399" address="unix:///run/containerd/s/abd175afe1f4480509c4b8600a79d4897cb4c94ae6334af6344bc3dc2609191e" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:22:54.185161 systemd[1]: Started cri-containerd-861e166575111371f112c27ff5792f9c605a907cdfda23b461738c398d19b17d.scope - libcontainer container 
861e166575111371f112c27ff5792f9c605a907cdfda23b461738c398d19b17d.
Jul 11 00:22:54.255488 systemd[1]: Started cri-containerd-457496cac6b211f5fe4002bbe1088c81871d7eb57073106a690db8f8135ac399.scope - libcontainer container 457496cac6b211f5fe4002bbe1088c81871d7eb57073106a690db8f8135ac399.
Jul 11 00:22:54.259573 systemd[1]: Started cri-containerd-9af7e25299d2f99e112b67624e5a62c9d7c8b44431edfcc5406e88633e5e9c04.scope - libcontainer container 9af7e25299d2f99e112b67624e5a62c9d7c8b44431edfcc5406e88633e5e9c04.
Jul 11 00:22:54.261406 kubelet[2333]: I0711 00:22:54.260899 2333 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:22:54.261406 kubelet[2333]: E0711 00:22:54.261374 2333 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost"
Jul 11 00:22:54.306992 containerd[1550]: time="2025-07-11T00:22:54.306942518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"861e166575111371f112c27ff5792f9c605a907cdfda23b461738c398d19b17d\""
Jul 11 00:22:54.308654 kubelet[2333]: E0711 00:22:54.308623 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:22:54.317166 containerd[1550]: time="2025-07-11T00:22:54.317100473Z" level=info msg="CreateContainer within sandbox \"861e166575111371f112c27ff5792f9c605a907cdfda23b461738c398d19b17d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 11 00:22:54.498566 kubelet[2333]: E0711 00:22:54.498490 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 11 00:22:54.499020 kubelet[2333]: E0711 00:22:54.498859 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a96e680a28c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:22:53.25239566 +0000 UTC m=+2.049226751,LastTimestamp:2025-07-11 00:22:53.25239566 +0000 UTC m=+2.049226751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 11 00:22:54.526883 containerd[1550]: time="2025-07-11T00:22:54.526385064Z" level=info msg="Container 3c7d49af68e40a7f8f3f813d01ab35868f9deaf70acfac592ea66f4f9e3aa63d: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:22:54.665853 kubelet[2333]: E0711 00:22:54.665781 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="1.6s"
Jul 11 00:22:54.752920 containerd[1550]: time="2025-07-11T00:22:54.752851327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"457496cac6b211f5fe4002bbe1088c81871d7eb57073106a690db8f8135ac399\""
Jul 11 00:22:54.753780 kubelet[2333]: E0711 00:22:54.753702 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:22:54.782818 containerd[1550]: time="2025-07-11T00:22:54.782677373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4d3ae416c2c00f8b9bcd1c3af857345c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9af7e25299d2f99e112b67624e5a62c9d7c8b44431edfcc5406e88633e5e9c04\""
Jul 11 00:22:54.782818 containerd[1550]: time="2025-07-11T00:22:54.782685026Z" level=info msg="CreateContainer within sandbox \"457496cac6b211f5fe4002bbe1088c81871d7eb57073106a690db8f8135ac399\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 11 00:22:54.783873 kubelet[2333]: E0711 00:22:54.783817 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:22:54.789803 containerd[1550]: time="2025-07-11T00:22:54.789759437Z" level=info msg="CreateContainer within sandbox \"861e166575111371f112c27ff5792f9c605a907cdfda23b461738c398d19b17d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3c7d49af68e40a7f8f3f813d01ab35868f9deaf70acfac592ea66f4f9e3aa63d\""
Jul 11 00:22:54.790416 containerd[1550]: time="2025-07-11T00:22:54.790375573Z" level=info msg="StartContainer for \"3c7d49af68e40a7f8f3f813d01ab35868f9deaf70acfac592ea66f4f9e3aa63d\""
Jul 11 00:22:54.790693 containerd[1550]: time="2025-07-11T00:22:54.790620635Z" level=info msg="CreateContainer within sandbox \"9af7e25299d2f99e112b67624e5a62c9d7c8b44431edfcc5406e88633e5e9c04\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 11 00:22:54.791887 containerd[1550]: time="2025-07-11T00:22:54.791841757Z" level=info msg="connecting to shim 3c7d49af68e40a7f8f3f813d01ab35868f9deaf70acfac592ea66f4f9e3aa63d" address="unix:///run/containerd/s/130c171973b11736a1d621d9122254a1cf98a1396305c6f2438a3642dd58b74f" protocol=ttrpc version=3
Jul 11 00:22:54.799673 containerd[1550]: time="2025-07-11T00:22:54.799576048Z" level=info msg="Container 3348e30d6f59f0901679c256a560669017be502f7275685ec30c4850651c4d99: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:22:54.812782 containerd[1550]: time="2025-07-11T00:22:54.812718089Z" level=info msg="Container 7ed7b6d7234772d99b543f384ba2aab5456aa33b8916785db1d11e4676741052: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:22:54.816545 systemd[1]: Started cri-containerd-3c7d49af68e40a7f8f3f813d01ab35868f9deaf70acfac592ea66f4f9e3aa63d.scope - libcontainer container 3c7d49af68e40a7f8f3f813d01ab35868f9deaf70acfac592ea66f4f9e3aa63d.
Jul 11 00:22:54.817161 containerd[1550]: time="2025-07-11T00:22:54.817114459Z" level=info msg="CreateContainer within sandbox \"457496cac6b211f5fe4002bbe1088c81871d7eb57073106a690db8f8135ac399\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3348e30d6f59f0901679c256a560669017be502f7275685ec30c4850651c4d99\""
Jul 11 00:22:54.817834 containerd[1550]: time="2025-07-11T00:22:54.817795493Z" level=info msg="StartContainer for \"3348e30d6f59f0901679c256a560669017be502f7275685ec30c4850651c4d99\""
Jul 11 00:22:54.820163 containerd[1550]: time="2025-07-11T00:22:54.820128956Z" level=info msg="connecting to shim 3348e30d6f59f0901679c256a560669017be502f7275685ec30c4850651c4d99" address="unix:///run/containerd/s/abd175afe1f4480509c4b8600a79d4897cb4c94ae6334af6344bc3dc2609191e" protocol=ttrpc version=3
Jul 11 00:22:54.822184 containerd[1550]: time="2025-07-11T00:22:54.822122116Z" level=info msg="CreateContainer within sandbox \"9af7e25299d2f99e112b67624e5a62c9d7c8b44431edfcc5406e88633e5e9c04\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7ed7b6d7234772d99b543f384ba2aab5456aa33b8916785db1d11e4676741052\""
Jul 11 00:22:54.822974 containerd[1550]: time="2025-07-11T00:22:54.822933951Z" level=info msg="StartContainer for \"7ed7b6d7234772d99b543f384ba2aab5456aa33b8916785db1d11e4676741052\""
Jul 11 00:22:54.824253 containerd[1550]: time="2025-07-11T00:22:54.824223489Z" level=info msg="connecting to shim 7ed7b6d7234772d99b543f384ba2aab5456aa33b8916785db1d11e4676741052" address="unix:///run/containerd/s/968940d68b20ce7a99cfe669895d5a067fd76e8de2ac1269d08942ff024d97c7" protocol=ttrpc version=3
Jul 11 00:22:54.850526 systemd[1]: Started cri-containerd-3348e30d6f59f0901679c256a560669017be502f7275685ec30c4850651c4d99.scope - libcontainer container 3348e30d6f59f0901679c256a560669017be502f7275685ec30c4850651c4d99.
Jul 11 00:22:54.856913 systemd[1]: Started cri-containerd-7ed7b6d7234772d99b543f384ba2aab5456aa33b8916785db1d11e4676741052.scope - libcontainer container 7ed7b6d7234772d99b543f384ba2aab5456aa33b8916785db1d11e4676741052.
Jul 11 00:22:54.886647 containerd[1550]: time="2025-07-11T00:22:54.886525153Z" level=info msg="StartContainer for \"3c7d49af68e40a7f8f3f813d01ab35868f9deaf70acfac592ea66f4f9e3aa63d\" returns successfully"
Jul 11 00:22:54.941209 containerd[1550]: time="2025-07-11T00:22:54.941081758Z" level=info msg="StartContainer for \"7ed7b6d7234772d99b543f384ba2aab5456aa33b8916785db1d11e4676741052\" returns successfully"
Jul 11 00:22:54.967170 containerd[1550]: time="2025-07-11T00:22:54.967113026Z" level=info msg="StartContainer for \"3348e30d6f59f0901679c256a560669017be502f7275685ec30c4850651c4d99\" returns successfully"
Jul 11 00:22:55.065411 kubelet[2333]: I0711 00:22:55.065230 2333 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:22:55.308760 kubelet[2333]: E0711 00:22:55.308519 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:22:55.308760 kubelet[2333]: E0711 00:22:55.308671 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:22:55.310935 kubelet[2333]: E0711 00:22:55.310795 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:22:55.310935 kubelet[2333]: E0711 00:22:55.310884 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:22:55.314933 kubelet[2333]: E0711 00:22:55.314604 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:22:55.314933 kubelet[2333]: E0711 00:22:55.314876 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:22:56.463638 kubelet[2333]: E0711 00:22:56.463585 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:22:56.464290 kubelet[2333]: E0711 00:22:56.463748 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:22:56.464717 kubelet[2333]: E0711 00:22:56.464687 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:22:56.464837 kubelet[2333]: E0711 00:22:56.464811 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:22:56.970654 kubelet[2333]: E0711 00:22:56.970604 2333 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 11 00:22:57.050609 kubelet[2333]: I0711 00:22:57.050505 2333 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 11 00:22:57.064189 kubelet[2333]: I0711 00:22:57.064109 2333 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:22:57.244016 kubelet[2333]: I0711 00:22:57.243848 2333 apiserver.go:52] "Watching apiserver"
Jul 11 00:22:57.262731 kubelet[2333]: I0711 00:22:57.262670 2333 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 11 00:22:57.285358 kubelet[2333]: E0711 00:22:57.285277 2333 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:22:57.285358 kubelet[2333]: I0711 00:22:57.285348 2333 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:22:57.287288 kubelet[2333]: E0711 00:22:57.287261 2333 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:22:57.287288 kubelet[2333]: I0711 00:22:57.287283 2333 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:22:57.289252 kubelet[2333]: E0711 00:22:57.289220 2333 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:22:57.462920 kubelet[2333]: I0711 00:22:57.462871 2333 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:22:57.462920 kubelet[2333]: I0711 00:22:57.462935 2333 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:22:57.465162 kubelet[2333]: E0711 00:22:57.465115 2333 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:22:57.465779 kubelet[2333]: E0711 00:22:57.465219 2333 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:22:57.465779 kubelet[2333]: E0711 00:22:57.465287 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:22:57.465779 kubelet[2333]: E0711 00:22:57.465362 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:22:59.147200 systemd[1]: Reload requested from client PID 2619 ('systemctl') (unit session-7.scope)...
Jul 11 00:22:59.147220 systemd[1]: Reloading...
Jul 11 00:22:59.252510 zram_generator::config[2665]: No configuration found.
Jul 11 00:22:59.376778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:22:59.528667 systemd[1]: Reloading finished in 380 ms.
Jul 11 00:22:59.558414 kubelet[2333]: I0711 00:22:59.558342 2333 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:22:59.558565 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:22:59.583168 systemd[1]: kubelet.service: Deactivated successfully.
Jul 11 00:22:59.583643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:22:59.583739 systemd[1]: kubelet.service: Consumed 1.718s CPU time, 133.7M memory peak.
Jul 11 00:22:59.588652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:22:59.815127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:22:59.829166 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 11 00:22:59.866779 kubelet[2707]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:22:59.866779 kubelet[2707]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 11 00:22:59.866779 kubelet[2707]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:22:59.867262 kubelet[2707]: I0711 00:22:59.866795 2707 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 00:22:59.873216 kubelet[2707]: I0711 00:22:59.873169 2707 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 11 00:22:59.873216 kubelet[2707]: I0711 00:22:59.873202 2707 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 00:22:59.873462 kubelet[2707]: I0711 00:22:59.873435 2707 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 11 00:22:59.874637 kubelet[2707]: I0711 00:22:59.874599 2707 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 11 00:22:59.876918 kubelet[2707]: I0711 00:22:59.876871 2707 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:22:59.882197 kubelet[2707]: I0711 00:22:59.882161 2707 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 11 00:22:59.887111 kubelet[2707]: I0711 00:22:59.887084 2707 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 00:22:59.887355 kubelet[2707]: I0711 00:22:59.887296 2707 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 11 00:22:59.887528 kubelet[2707]: I0711 00:22:59.887345 2707 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 11 00:22:59.887657 kubelet[2707]: I0711 00:22:59.887530 2707 topology_manager.go:138] "Creating topology manager with none policy"
Jul 11 00:22:59.887657 kubelet[2707]: I0711 00:22:59.887541 2707 container_manager_linux.go:303] "Creating device plugin manager"
Jul 11 00:22:59.887657 kubelet[2707]: I0711 00:22:59.887595 2707 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:22:59.887795 kubelet[2707]: I0711 00:22:59.887776 2707 kubelet.go:480] "Attempting to sync node with API server"
Jul 11 00:22:59.887843 kubelet[2707]: I0711 00:22:59.887810 2707 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 11 00:22:59.887843 kubelet[2707]: I0711 00:22:59.887835 2707 kubelet.go:386] "Adding apiserver pod source"
Jul 11 00:22:59.887909 kubelet[2707]: I0711 00:22:59.887850 2707 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 11 00:22:59.888793 kubelet[2707]: I0711 00:22:59.888621 2707 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 11 00:22:59.889106 kubelet[2707]: I0711 00:22:59.889077 2707 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 11 00:22:59.892672 kubelet[2707]: I0711 00:22:59.892642 2707 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 11 00:22:59.895436 kubelet[2707]: I0711 00:22:59.895400 2707 server.go:1289] "Started kubelet"
Jul 11 00:22:59.896932 kubelet[2707]: I0711 00:22:59.896904 2707 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 00:22:59.900009 kubelet[2707]: I0711 00:22:59.899954 2707 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 00:22:59.901065 kubelet[2707]: I0711 00:22:59.901039 2707 server.go:317] "Adding debug handlers to kubelet server"
Jul 11 00:22:59.903384 kubelet[2707]: I0711 00:22:59.902797 2707 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 00:22:59.903384 kubelet[2707]: I0711 00:22:59.903243 2707 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 00:22:59.903384 kubelet[2707]: I0711 00:22:59.903308 2707 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 00:22:59.905529 kubelet[2707]: E0711 00:22:59.905500 2707 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 11 00:22:59.906460 kubelet[2707]: I0711 00:22:59.906432 2707 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 11 00:22:59.907742 kubelet[2707]: I0711 00:22:59.907710 2707 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 11 00:22:59.908156 kubelet[2707]: I0711 00:22:59.908118 2707 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 00:22:59.909379 kubelet[2707]: I0711 00:22:59.909356 2707 factory.go:223] Registration of the systemd container factory successfully
Jul 11 00:22:59.909506 kubelet[2707]: I0711 00:22:59.909470 2707 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 00:22:59.912578 kubelet[2707]: I0711 00:22:59.912520 2707 factory.go:223] Registration of the containerd container factory successfully
Jul 11 00:22:59.922214 kubelet[2707]: I0711 00:22:59.922153 2707 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 11 00:22:59.923778 kubelet[2707]: I0711 00:22:59.923749 2707 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 11 00:22:59.923778 kubelet[2707]: I0711 00:22:59.923770 2707 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 11 00:22:59.923857 kubelet[2707]: I0711 00:22:59.923806 2707 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 11 00:22:59.923857 kubelet[2707]: I0711 00:22:59.923815 2707 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 11 00:22:59.923907 kubelet[2707]: E0711 00:22:59.923863 2707 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 00:22:59.951614 kubelet[2707]: I0711 00:22:59.951559 2707 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 11 00:22:59.951614 kubelet[2707]: I0711 00:22:59.951606 2707 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 11 00:22:59.951767 kubelet[2707]: I0711 00:22:59.951636 2707 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:22:59.951811 kubelet[2707]: I0711 00:22:59.951794 2707 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 11 00:22:59.951847 kubelet[2707]: I0711 00:22:59.951810 2707 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 11 00:22:59.951847 kubelet[2707]: I0711 00:22:59.951833 2707 policy_none.go:49] "None policy: Start"
Jul 11 00:22:59.951892 kubelet[2707]: I0711 00:22:59.951860 2707 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 11 00:22:59.951892 kubelet[2707]: I0711 00:22:59.951875 2707 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 00:22:59.951994 kubelet[2707]: I0711 00:22:59.951980 2707 state_mem.go:75] "Updated machine memory state"
Jul 11 00:22:59.960446 kubelet[2707]: E0711 00:22:59.960348 2707 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 11 00:22:59.960584 kubelet[2707]: I0711 00:22:59.960562 2707 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 11 00:22:59.960881 kubelet[2707]: I0711 00:22:59.960641 2707 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 11 00:22:59.961133 kubelet[2707]: I0711 00:22:59.960959 2707 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 11 00:22:59.961862 kubelet[2707]: E0711 00:22:59.961826 2707 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 11 00:23:00.025451 kubelet[2707]: I0711 00:23:00.025379 2707 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:23:00.025774 kubelet[2707]: I0711 00:23:00.025746 2707 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:00.025928 kubelet[2707]: I0711 00:23:00.025881 2707 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:23:00.069406 kubelet[2707]: I0711 00:23:00.069237 2707 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:23:00.109209 kubelet[2707]: I0711 00:23:00.109137 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d3ae416c2c00f8b9bcd1c3af857345c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4d3ae416c2c00f8b9bcd1c3af857345c\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:23:00.109209 kubelet[2707]: I0711 00:23:00.109191 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:00.109209 kubelet[2707]: I0711 00:23:00.109216 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:00.109515 kubelet[2707]: I0711 00:23:00.109238 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:00.109515 kubelet[2707]: I0711 00:23:00.109258 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:00.109515 kubelet[2707]: I0711 00:23:00.109279 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 11 00:23:00.109515 kubelet[2707]: I0711 00:23:00.109295 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d3ae416c2c00f8b9bcd1c3af857345c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d3ae416c2c00f8b9bcd1c3af857345c\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:23:00.109515 kubelet[2707]: I0711 00:23:00.109311 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:00.109679 kubelet[2707]: I0711 00:23:00.109363 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d3ae416c2c00f8b9bcd1c3af857345c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d3ae416c2c00f8b9bcd1c3af857345c\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:23:00.398906 kubelet[2707]: E0711 00:23:00.398750 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:00.398906 kubelet[2707]: E0711 00:23:00.398841 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:00.399085 kubelet[2707]: E0711 00:23:00.398986 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:00.642118 kubelet[2707]: I0711 00:23:00.642049 2707 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 11 00:23:00.642302 kubelet[2707]: I0711 00:23:00.642185 2707 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 11 00:23:00.888796 kubelet[2707]: I0711 00:23:00.888717 2707 apiserver.go:52] "Watching apiserver"
Jul 11 00:23:00.908263 kubelet[2707]: I0711 00:23:00.908183 2707 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 11 00:23:00.936236 kubelet[2707]: I0711 00:23:00.936178 2707 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:23:00.936582 kubelet[2707]: I0711 00:23:00.936560 2707 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:00.936783 kubelet[2707]: I0711 00:23:00.936742 2707 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:23:01.010663 kubelet[2707]: E0711 00:23:01.010491 2707 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:23:01.010822 kubelet[2707]: E0711 00:23:01.010750 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:01.010897 kubelet[2707]: E0711 00:23:01.010873 2707 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:23:01.011018 kubelet[2707]: E0711 00:23:01.010989 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:01.011102 kubelet[2707]: E0711 00:23:01.011009 2707 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:23:01.011496 kubelet[2707]: E0711 00:23:01.011368 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:01.052814 kubelet[2707]: I0711 00:23:01.052477 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.052431427 podStartE2EDuration="1.052431427s" podCreationTimestamp="2025-07-11 00:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:23:01.035526176 +0000 UTC m=+1.200721868" watchObservedRunningTime="2025-07-11 00:23:01.052431427 +0000 UTC m=+1.217627120"
Jul 11 00:23:01.052814 kubelet[2707]: I0711 00:23:01.052648 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.052642631 podStartE2EDuration="1.052642631s" podCreationTimestamp="2025-07-11 00:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:23:01.051140407 +0000 UTC m=+1.216336099" watchObservedRunningTime="2025-07-11 00:23:01.052642631 +0000 UTC m=+1.217838323"
Jul 11 00:23:01.081751 kubelet[2707]: I0711 00:23:01.081283 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.081266024 podStartE2EDuration="1.081266024s" podCreationTimestamp="2025-07-11 00:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:23:01.065995854 +0000 UTC m=+1.231191546" watchObservedRunningTime="2025-07-11 00:23:01.081266024 +0000 UTC m=+1.246461716"
Jul 11 00:23:01.938237 kubelet[2707]: E0711 00:23:01.938189 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:01.938918 kubelet[2707]: E0711 00:23:01.938300 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:01.938918 kubelet[2707]: E0711 00:23:01.938695 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:02.939523 kubelet[2707]: E0711 00:23:02.939479 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:03.095160 kubelet[2707]: E0711 00:23:03.095096 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:03.818391 update_engine[1531]: I20250711 00:23:03.818166 1531 update_attempter.cc:509] Updating boot flags...
Jul 11 00:23:05.274692 kubelet[2707]: E0711 00:23:05.274624 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:05.940443 kubelet[2707]: I0711 00:23:05.940388 2707 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 11 00:23:05.941038 containerd[1550]: time="2025-07-11T00:23:05.940980341Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 11 00:23:05.941463 kubelet[2707]: I0711 00:23:05.941250 2707 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 11 00:23:05.943709 kubelet[2707]: E0711 00:23:05.943633 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:06.640203 systemd[1]: Created slice kubepods-besteffort-pod0474da0a_a2ff_4d6c_a627_6a448c71178e.slice - libcontainer container kubepods-besteffort-pod0474da0a_a2ff_4d6c_a627_6a448c71178e.slice.
Jul 11 00:23:06.651573 kubelet[2707]: I0711 00:23:06.651381 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0474da0a-a2ff-4d6c-a627-6a448c71178e-lib-modules\") pod \"kube-proxy-7nt8p\" (UID: \"0474da0a-a2ff-4d6c-a627-6a448c71178e\") " pod="kube-system/kube-proxy-7nt8p"
Jul 11 00:23:06.651573 kubelet[2707]: I0711 00:23:06.651489 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0474da0a-a2ff-4d6c-a627-6a448c71178e-kube-proxy\") pod \"kube-proxy-7nt8p\" (UID: \"0474da0a-a2ff-4d6c-a627-6a448c71178e\") " pod="kube-system/kube-proxy-7nt8p"
Jul 11 00:23:06.651573 kubelet[2707]: I0711 00:23:06.651540 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0474da0a-a2ff-4d6c-a627-6a448c71178e-xtables-lock\") pod \"kube-proxy-7nt8p\" (UID: \"0474da0a-a2ff-4d6c-a627-6a448c71178e\") " pod="kube-system/kube-proxy-7nt8p"
Jul 11 00:23:06.651573 kubelet[2707]: I0711 00:23:06.651579 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vdx9\" (UniqueName: \"kubernetes.io/projected/0474da0a-a2ff-4d6c-a627-6a448c71178e-kube-api-access-8vdx9\") pod \"kube-proxy-7nt8p\" (UID: \"0474da0a-a2ff-4d6c-a627-6a448c71178e\") " pod="kube-system/kube-proxy-7nt8p"
Jul 11 00:23:06.945691 kubelet[2707]: E0711 00:23:06.945462 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:07.250080 kubelet[2707]: E0711 00:23:07.249899 2707 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 11 00:23:07.250080 kubelet[2707]: E0711 00:23:07.249964 2707 projected.go:194] Error preparing data for projected volume kube-api-access-8vdx9 for pod kube-system/kube-proxy-7nt8p: configmap "kube-root-ca.crt" not found
Jul 11 00:23:07.250080 kubelet[2707]: E0711 00:23:07.250065 2707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0474da0a-a2ff-4d6c-a627-6a448c71178e-kube-api-access-8vdx9 podName:0474da0a-a2ff-4d6c-a627-6a448c71178e nodeName:}" failed. No retries permitted until 2025-07-11 00:23:07.750031236 +0000 UTC m=+7.915226918 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8vdx9" (UniqueName: "kubernetes.io/projected/0474da0a-a2ff-4d6c-a627-6a448c71178e-kube-api-access-8vdx9") pod "kube-proxy-7nt8p" (UID: "0474da0a-a2ff-4d6c-a627-6a448c71178e") : configmap "kube-root-ca.crt" not found
Jul 11 00:23:07.859552 kubelet[2707]: E0711 00:23:07.859466 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:07.860406 containerd[1550]: time="2025-07-11T00:23:07.860357312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7nt8p,Uid:0474da0a-a2ff-4d6c-a627-6a448c71178e,Namespace:kube-system,Attempt:0,}"
Jul 11 00:23:08.890576 containerd[1550]: time="2025-07-11T00:23:08.890515791Z" level=info msg="connecting to shim c6cd867f07c3f267179c899dd30bf25643c8dddf40e0a64cd0c20c4ee75942ca" address="unix:///run/containerd/s/0aa3830e1296d2b4ea7ce18ae6b6cbe87eb8a2ca32d1e07fd34444d69522ed34" namespace=k8s.io protocol=ttrpc version=3
Jul 11 00:23:08.924706 systemd[1]: Started cri-containerd-c6cd867f07c3f267179c899dd30bf25643c8dddf40e0a64cd0c20c4ee75942ca.scope - libcontainer container c6cd867f07c3f267179c899dd30bf25643c8dddf40e0a64cd0c20c4ee75942ca.
Jul 11 00:23:09.291388 containerd[1550]: time="2025-07-11T00:23:09.291275628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7nt8p,Uid:0474da0a-a2ff-4d6c-a627-6a448c71178e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6cd867f07c3f267179c899dd30bf25643c8dddf40e0a64cd0c20c4ee75942ca\""
Jul 11 00:23:09.293053 kubelet[2707]: E0711 00:23:09.292684 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:09.306082 systemd[1]: Created slice kubepods-besteffort-pod70f202c9_0844_4bb3_b6c5_cd1d9156ed8c.slice - libcontainer container kubepods-besteffort-pod70f202c9_0844_4bb3_b6c5_cd1d9156ed8c.slice.
Jul 11 00:23:09.317840 containerd[1550]: time="2025-07-11T00:23:09.317764675Z" level=info msg="CreateContainer within sandbox \"c6cd867f07c3f267179c899dd30bf25643c8dddf40e0a64cd0c20c4ee75942ca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 11 00:23:09.393683 kubelet[2707]: I0711 00:23:09.393582 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7xnf\" (UniqueName: \"kubernetes.io/projected/70f202c9-0844-4bb3-b6c5-cd1d9156ed8c-kube-api-access-x7xnf\") pod \"tigera-operator-747864d56d-tjz87\" (UID: \"70f202c9-0844-4bb3-b6c5-cd1d9156ed8c\") " pod="tigera-operator/tigera-operator-747864d56d-tjz87"
Jul 11 00:23:09.393683 kubelet[2707]: I0711 00:23:09.393671 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/70f202c9-0844-4bb3-b6c5-cd1d9156ed8c-var-lib-calico\") pod \"tigera-operator-747864d56d-tjz87\" (UID: \"70f202c9-0844-4bb3-b6c5-cd1d9156ed8c\") " pod="tigera-operator/tigera-operator-747864d56d-tjz87"
Jul 11 00:23:09.537427 kubelet[2707]: E0711 00:23:09.536974 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:09.554422 containerd[1550]: time="2025-07-11T00:23:09.554061282Z" level=info msg="Container 8271a7a0da4d84c458ce7ac82039e315104e0d754eb8beb6fe67442175a90b5e: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:23:09.611241 containerd[1550]: time="2025-07-11T00:23:09.611029578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-tjz87,Uid:70f202c9-0844-4bb3-b6c5-cd1d9156ed8c,Namespace:tigera-operator,Attempt:0,}"
Jul 11 00:23:09.795371 containerd[1550]: time="2025-07-11T00:23:09.792553410Z" level=info msg="CreateContainer within sandbox \"c6cd867f07c3f267179c899dd30bf25643c8dddf40e0a64cd0c20c4ee75942ca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8271a7a0da4d84c458ce7ac82039e315104e0d754eb8beb6fe67442175a90b5e\""
Jul 11 00:23:09.796747 containerd[1550]: time="2025-07-11T00:23:09.796699895Z" level=info msg="StartContainer for \"8271a7a0da4d84c458ce7ac82039e315104e0d754eb8beb6fe67442175a90b5e\""
Jul 11 00:23:09.798713 containerd[1550]: time="2025-07-11T00:23:09.798593047Z" level=info msg="connecting to shim 8271a7a0da4d84c458ce7ac82039e315104e0d754eb8beb6fe67442175a90b5e" address="unix:///run/containerd/s/0aa3830e1296d2b4ea7ce18ae6b6cbe87eb8a2ca32d1e07fd34444d69522ed34" protocol=ttrpc version=3
Jul 11 00:23:09.866756 systemd[1]: Started cri-containerd-8271a7a0da4d84c458ce7ac82039e315104e0d754eb8beb6fe67442175a90b5e.scope - libcontainer container 8271a7a0da4d84c458ce7ac82039e315104e0d754eb8beb6fe67442175a90b5e.
Jul 11 00:23:09.963177 containerd[1550]: time="2025-07-11T00:23:09.963112282Z" level=info msg="connecting to shim 99de9052a3f9beed9e0fe09a4e821c1956f375186c2834c0f083a0c6cfe0002a" address="unix:///run/containerd/s/fb5969810392fbd4c5e221b0ba462a4e33df544ce0bbb931508a6d2c8b0279de" namespace=k8s.io protocol=ttrpc version=3
Jul 11 00:23:10.001672 systemd[1]: Started cri-containerd-99de9052a3f9beed9e0fe09a4e821c1956f375186c2834c0f083a0c6cfe0002a.scope - libcontainer container 99de9052a3f9beed9e0fe09a4e821c1956f375186c2834c0f083a0c6cfe0002a.
Jul 11 00:23:10.062717 containerd[1550]: time="2025-07-11T00:23:10.062620080Z" level=info msg="StartContainer for \"8271a7a0da4d84c458ce7ac82039e315104e0d754eb8beb6fe67442175a90b5e\" returns successfully"
Jul 11 00:23:10.069466 kubelet[2707]: E0711 00:23:10.069391 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:10.132734 containerd[1550]: time="2025-07-11T00:23:10.132351003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-tjz87,Uid:70f202c9-0844-4bb3-b6c5-cd1d9156ed8c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"99de9052a3f9beed9e0fe09a4e821c1956f375186c2834c0f083a0c6cfe0002a\""
Jul 11 00:23:10.134973 containerd[1550]: time="2025-07-11T00:23:10.134637804Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 11 00:23:11.072022 kubelet[2707]: E0711 00:23:11.071982 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:11.072633 kubelet[2707]: E0711 00:23:11.072075 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:12.726923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1492741694.mount: Deactivated successfully.
Jul 11 00:23:13.101050 kubelet[2707]: E0711 00:23:13.100802 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:13.299168 kubelet[2707]: I0711 00:23:13.299101 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7nt8p" podStartSLOduration=7.299081349 podStartE2EDuration="7.299081349s" podCreationTimestamp="2025-07-11 00:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:23:11.147668246 +0000 UTC m=+11.312863948" watchObservedRunningTime="2025-07-11 00:23:13.299081349 +0000 UTC m=+13.464277041"
Jul 11 00:23:13.583833 containerd[1550]: time="2025-07-11T00:23:13.583681218Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:23:13.637302 containerd[1550]: time="2025-07-11T00:23:13.637184174Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 11 00:23:13.678445 containerd[1550]: time="2025-07-11T00:23:13.678199281Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:23:13.685681 containerd[1550]: time="2025-07-11T00:23:13.685581780Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:23:13.686546 containerd[1550]: time="2025-07-11T00:23:13.686480630Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.551791274s"
Jul 11 00:23:13.686546 containerd[1550]: time="2025-07-11T00:23:13.686541889Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 11 00:23:13.705782 containerd[1550]: time="2025-07-11T00:23:13.705703866Z" level=info msg="CreateContainer within sandbox \"99de9052a3f9beed9e0fe09a4e821c1956f375186c2834c0f083a0c6cfe0002a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 11 00:23:13.721262 containerd[1550]: time="2025-07-11T00:23:13.721107756Z" level=info msg="Container f0c7e57ca868e75422ebab431eab1d58396c3e07f24e5704f1bb99bf334d4230: CDI devices from CRI Config.CDIDevices: []"
Jul 11 00:23:13.733048 containerd[1550]: time="2025-07-11T00:23:13.732978635Z" level=info msg="CreateContainer within sandbox \"99de9052a3f9beed9e0fe09a4e821c1956f375186c2834c0f083a0c6cfe0002a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f0c7e57ca868e75422ebab431eab1d58396c3e07f24e5704f1bb99bf334d4230\""
Jul 11 00:23:13.733778 containerd[1550]: time="2025-07-11T00:23:13.733736476Z" level=info msg="StartContainer for \"f0c7e57ca868e75422ebab431eab1d58396c3e07f24e5704f1bb99bf334d4230\""
Jul 11 00:23:13.734994 containerd[1550]: time="2025-07-11T00:23:13.734946606Z" level=info msg="connecting to shim f0c7e57ca868e75422ebab431eab1d58396c3e07f24e5704f1bb99bf334d4230" address="unix:///run/containerd/s/fb5969810392fbd4c5e221b0ba462a4e33df544ce0bbb931508a6d2c8b0279de" protocol=ttrpc version=3
Jul 11 00:23:13.799662 systemd[1]: Started cri-containerd-f0c7e57ca868e75422ebab431eab1d58396c3e07f24e5704f1bb99bf334d4230.scope - libcontainer container f0c7e57ca868e75422ebab431eab1d58396c3e07f24e5704f1bb99bf334d4230.
Jul 11 00:23:13.837319 containerd[1550]: time="2025-07-11T00:23:13.837179367Z" level=info msg="StartContainer for \"f0c7e57ca868e75422ebab431eab1d58396c3e07f24e5704f1bb99bf334d4230\" returns successfully"
Jul 11 00:23:20.013829 sudo[1764]: pam_unix(sudo:session): session closed for user root
Jul 11 00:23:20.015879 sshd[1763]: Connection closed by 10.0.0.1 port 35540
Jul 11 00:23:20.016687 sshd-session[1761]: pam_unix(sshd:session): session closed for user core
Jul 11 00:23:20.050366 systemd-logind[1525]: Session 7 logged out. Waiting for processes to exit.
Jul 11 00:23:20.051361 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:35540.service: Deactivated successfully.
Jul 11 00:23:20.056246 systemd[1]: session-7.scope: Deactivated successfully.
Jul 11 00:23:20.057198 systemd[1]: session-7.scope: Consumed 7.657s CPU time, 228.9M memory peak.
Jul 11 00:23:20.062872 systemd-logind[1525]: Removed session 7.
Jul 11 00:23:23.030473 kubelet[2707]: I0711 00:23:23.029874 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-tjz87" podStartSLOduration=11.47648054 podStartE2EDuration="15.029842847s" podCreationTimestamp="2025-07-11 00:23:08 +0000 UTC" firstStartedPulling="2025-07-11 00:23:10.134102263 +0000 UTC m=+10.299297955" lastFinishedPulling="2025-07-11 00:23:13.68746457 +0000 UTC m=+13.852660262" observedRunningTime="2025-07-11 00:23:14.123377558 +0000 UTC m=+14.288573250" watchObservedRunningTime="2025-07-11 00:23:23.029842847 +0000 UTC m=+23.195038549"
Jul 11 00:23:23.091118 systemd[1]: Created slice kubepods-besteffort-podfe53a38b_12d9_4a2e_b2e3_e4eaf28f10bf.slice - libcontainer container kubepods-besteffort-podfe53a38b_12d9_4a2e_b2e3_e4eaf28f10bf.slice.
Jul 11 00:23:23.185237 kubelet[2707]: I0711 00:23:23.185148 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe53a38b-12d9-4a2e-b2e3-e4eaf28f10bf-tigera-ca-bundle\") pod \"calico-typha-866cd797c4-glmpd\" (UID: \"fe53a38b-12d9-4a2e-b2e3-e4eaf28f10bf\") " pod="calico-system/calico-typha-866cd797c4-glmpd"
Jul 11 00:23:23.185237 kubelet[2707]: I0711 00:23:23.185213 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nd8s\" (UniqueName: \"kubernetes.io/projected/fe53a38b-12d9-4a2e-b2e3-e4eaf28f10bf-kube-api-access-2nd8s\") pod \"calico-typha-866cd797c4-glmpd\" (UID: \"fe53a38b-12d9-4a2e-b2e3-e4eaf28f10bf\") " pod="calico-system/calico-typha-866cd797c4-glmpd"
Jul 11 00:23:23.185237 kubelet[2707]: I0711 00:23:23.185237 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fe53a38b-12d9-4a2e-b2e3-e4eaf28f10bf-typha-certs\") pod \"calico-typha-866cd797c4-glmpd\" (UID: \"fe53a38b-12d9-4a2e-b2e3-e4eaf28f10bf\") " pod="calico-system/calico-typha-866cd797c4-glmpd"
Jul 11 00:23:23.213543 systemd[1]: Created slice kubepods-besteffort-pod0985ade0_b1c7_42fe_8176_ff5804402265.slice - libcontainer container kubepods-besteffort-pod0985ade0_b1c7_42fe_8176_ff5804402265.slice.
Jul 11 00:23:23.286179 kubelet[2707]: I0711 00:23:23.285926 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0985ade0-b1c7-42fe-8176-ff5804402265-cni-log-dir\") pod \"calico-node-pt7tk\" (UID: \"0985ade0-b1c7-42fe-8176-ff5804402265\") " pod="calico-system/calico-node-pt7tk"
Jul 11 00:23:23.286179 kubelet[2707]: I0711 00:23:23.285997 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0985ade0-b1c7-42fe-8176-ff5804402265-flexvol-driver-host\") pod \"calico-node-pt7tk\" (UID: \"0985ade0-b1c7-42fe-8176-ff5804402265\") " pod="calico-system/calico-node-pt7tk"
Jul 11 00:23:23.286179 kubelet[2707]: I0711 00:23:23.286065 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0985ade0-b1c7-42fe-8176-ff5804402265-lib-modules\") pod \"calico-node-pt7tk\" (UID: \"0985ade0-b1c7-42fe-8176-ff5804402265\") " pod="calico-system/calico-node-pt7tk"
Jul 11 00:23:23.287052 kubelet[2707]: I0711 00:23:23.286556 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0985ade0-b1c7-42fe-8176-ff5804402265-var-lib-calico\") pod \"calico-node-pt7tk\" (UID: \"0985ade0-b1c7-42fe-8176-ff5804402265\") " pod="calico-system/calico-node-pt7tk"
Jul 11 00:23:23.287052 kubelet[2707]: I0711 00:23:23.286811 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0985ade0-b1c7-42fe-8176-ff5804402265-tigera-ca-bundle\") pod \"calico-node-pt7tk\" (UID: \"0985ade0-b1c7-42fe-8176-ff5804402265\") " pod="calico-system/calico-node-pt7tk"
Jul 11 00:23:23.287052 kubelet[2707]: I0711 00:23:23.286844 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0985ade0-b1c7-42fe-8176-ff5804402265-policysync\") pod \"calico-node-pt7tk\" (UID: \"0985ade0-b1c7-42fe-8176-ff5804402265\") " pod="calico-system/calico-node-pt7tk"
Jul 11 00:23:23.287052 kubelet[2707]: I0711 00:23:23.287012 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0985ade0-b1c7-42fe-8176-ff5804402265-xtables-lock\") pod \"calico-node-pt7tk\" (UID: \"0985ade0-b1c7-42fe-8176-ff5804402265\") " pod="calico-system/calico-node-pt7tk"
Jul 11 00:23:23.288769 kubelet[2707]: I0711 00:23:23.288728 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0985ade0-b1c7-42fe-8176-ff5804402265-var-run-calico\") pod \"calico-node-pt7tk\" (UID: \"0985ade0-b1c7-42fe-8176-ff5804402265\") " pod="calico-system/calico-node-pt7tk"
Jul 11 00:23:23.288849 kubelet[2707]: I0711 00:23:23.288778 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0985ade0-b1c7-42fe-8176-ff5804402265-cni-bin-dir\") pod \"calico-node-pt7tk\" (UID: \"0985ade0-b1c7-42fe-8176-ff5804402265\") " pod="calico-system/calico-node-pt7tk"
Jul 11 00:23:23.288849 kubelet[2707]: I0711 00:23:23.288798 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0985ade0-b1c7-42fe-8176-ff5804402265-cni-net-dir\") pod \"calico-node-pt7tk\" (UID: \"0985ade0-b1c7-42fe-8176-ff5804402265\") " pod="calico-system/calico-node-pt7tk"
Jul 11 00:23:23.288899 kubelet[2707]: I0711 00:23:23.288860 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2qp7\" (UniqueName: \"kubernetes.io/projected/0985ade0-b1c7-42fe-8176-ff5804402265-kube-api-access-c2qp7\") pod \"calico-node-pt7tk\" (UID: \"0985ade0-b1c7-42fe-8176-ff5804402265\") " pod="calico-system/calico-node-pt7tk"
Jul 11 00:23:23.288899 kubelet[2707]: I0711 00:23:23.288885 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0985ade0-b1c7-42fe-8176-ff5804402265-node-certs\") pod \"calico-node-pt7tk\" (UID: \"0985ade0-b1c7-42fe-8176-ff5804402265\") " pod="calico-system/calico-node-pt7tk"
Jul 11 00:23:23.398023 kubelet[2707]: E0711 00:23:23.395014 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:23.398164 kubelet[2707]: E0711 00:23:23.398087 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.398164 kubelet[2707]: W0711 00:23:23.398106 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.398269 kubelet[2707]: E0711 00:23:23.398239 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:23:23.400596 containerd[1550]: time="2025-07-11T00:23:23.400500525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-866cd797c4-glmpd,Uid:fe53a38b-12d9-4a2e-b2e3-e4eaf28f10bf,Namespace:calico-system,Attempt:0,}"
Jul 11 00:23:23.401424 kubelet[2707]: E0711 00:23:23.401091 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.401424 kubelet[2707]: W0711 00:23:23.401113 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.401424 kubelet[2707]: E0711 00:23:23.401128 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:23:23.405170 kubelet[2707]: E0711 00:23:23.405139 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.405170 kubelet[2707]: W0711 00:23:23.405161 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.405256 kubelet[2707]: E0711 00:23:23.405176 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:23:23.446825 containerd[1550]: time="2025-07-11T00:23:23.446761500Z" level=info msg="connecting to shim 6d4711a732a5e201db782cfb1abbb657344bcb1c2bc5781161ea32aec6f11a44" address="unix:///run/containerd/s/b9e0f6e39c1185ca74adcf7982df9531a97f137e77f952e82e8eb3792c6c3dec" namespace=k8s.io protocol=ttrpc version=3
Jul 11 00:23:23.483654 systemd[1]: Started cri-containerd-6d4711a732a5e201db782cfb1abbb657344bcb1c2bc5781161ea32aec6f11a44.scope - libcontainer container 6d4711a732a5e201db782cfb1abbb657344bcb1c2bc5781161ea32aec6f11a44.
Jul 11 00:23:23.518115 containerd[1550]: time="2025-07-11T00:23:23.518068405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pt7tk,Uid:0985ade0-b1c7-42fe-8176-ff5804402265,Namespace:calico-system,Attempt:0,}"
Jul 11 00:23:23.675491 containerd[1550]: time="2025-07-11T00:23:23.675180072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-866cd797c4-glmpd,Uid:fe53a38b-12d9-4a2e-b2e3-e4eaf28f10bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d4711a732a5e201db782cfb1abbb657344bcb1c2bc5781161ea32aec6f11a44\""
Jul 11 00:23:23.678370 kubelet[2707]: E0711 00:23:23.678306 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:23:23.681070 containerd[1550]: time="2025-07-11T00:23:23.681023367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 11 00:23:23.954733 kubelet[2707]: E0711 00:23:23.954413 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cwbxt" podUID="d158cd74-abdf-48f8-9025-ccae8e128169"
Jul 11 00:23:23.956818 containerd[1550]: time="2025-07-11T00:23:23.956750427Z" level=info msg="connecting to shim a6e8ba97b7ffa8db62fbdd8eb643464a442698bb6af1ce740c4c909245d0cf13" address="unix:///run/containerd/s/c6c73bd6594454cff446bfee80d34408fd282f5c0ba23ea9f80c1fbfbbdd3d95" namespace=k8s.io protocol=ttrpc version=3
Jul 11 00:23:23.987856 kubelet[2707]: E0711 00:23:23.987815 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.988321 kubelet[2707]: W0711 00:23:23.988052 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.988321 kubelet[2707]: E0711 00:23:23.988090 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:23:23.988802 kubelet[2707]: E0711 00:23:23.988772 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.988940 kubelet[2707]: W0711 00:23:23.988912 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.989073 kubelet[2707]: E0711 00:23:23.989056 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:23:23.991823 kubelet[2707]: E0711 00:23:23.991753 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.992154 kubelet[2707]: W0711 00:23:23.991954 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.992154 kubelet[2707]: E0711 00:23:23.991977 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:23:23.992804 kubelet[2707]: E0711 00:23:23.992788 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.993044 kubelet[2707]: W0711 00:23:23.992893 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.993044 kubelet[2707]: E0711 00:23:23.992913 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:23:23.993687 kubelet[2707]: E0711 00:23:23.993655 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.993806 kubelet[2707]: W0711 00:23:23.993789 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.993979 kubelet[2707]: E0711 00:23:23.993909 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:23:23.994607 kubelet[2707]: E0711 00:23:23.994497 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.994607 kubelet[2707]: W0711 00:23:23.994511 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.994607 kubelet[2707]: E0711 00:23:23.994524 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:23:23.995195 kubelet[2707]: E0711 00:23:23.995130 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.995507 kubelet[2707]: W0711 00:23:23.995145 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.995507 kubelet[2707]: E0711 00:23:23.995429 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:23:23.996166 kubelet[2707]: E0711 00:23:23.996150 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.996300 kubelet[2707]: W0711 00:23:23.996283 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.996436 kubelet[2707]: E0711 00:23:23.996420 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:23:23.996992 kubelet[2707]: E0711 00:23:23.996976 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.997260 kubelet[2707]: W0711 00:23:23.997181 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.997260 kubelet[2707]: E0711 00:23:23.997199 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:23:23.998574 kubelet[2707]: E0711 00:23:23.998071 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:23:23.998574 kubelet[2707]: W0711 00:23:23.998090 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:23:23.998574 kubelet[2707]: E0711 00:23:23.998102 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 11 00:23:24.001173 kubelet[2707]: E0711 00:23:24.000661 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.001173 kubelet[2707]: W0711 00:23:24.000688 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.001173 kubelet[2707]: E0711 00:23:24.000711 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.002498 kubelet[2707]: E0711 00:23:24.001470 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.002498 kubelet[2707]: W0711 00:23:24.001485 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.002498 kubelet[2707]: E0711 00:23:24.001498 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.002780 kubelet[2707]: E0711 00:23:24.002687 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.002780 kubelet[2707]: W0711 00:23:24.002703 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.002780 kubelet[2707]: E0711 00:23:24.002717 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.003183 kubelet[2707]: E0711 00:23:24.003168 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.003277 kubelet[2707]: W0711 00:23:24.003261 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.003395 kubelet[2707]: E0711 00:23:24.003378 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.004846 kubelet[2707]: E0711 00:23:24.004568 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.004947 kubelet[2707]: W0711 00:23:24.004911 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.004947 kubelet[2707]: E0711 00:23:24.004931 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.005444 kubelet[2707]: E0711 00:23:24.005428 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.005543 kubelet[2707]: W0711 00:23:24.005528 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.005662 kubelet[2707]: E0711 00:23:24.005630 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.006052 kubelet[2707]: I0711 00:23:24.005996 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d158cd74-abdf-48f8-9025-ccae8e128169-registration-dir\") pod \"csi-node-driver-cwbxt\" (UID: \"d158cd74-abdf-48f8-9025-ccae8e128169\") " pod="calico-system/csi-node-driver-cwbxt" Jul 11 00:23:24.006915 kubelet[2707]: E0711 00:23:24.006866 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.006915 kubelet[2707]: W0711 00:23:24.006883 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.006915 kubelet[2707]: E0711 00:23:24.006897 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.007757 kubelet[2707]: E0711 00:23:24.007716 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.007892 kubelet[2707]: W0711 00:23:24.007836 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.007892 kubelet[2707]: E0711 00:23:24.007851 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.008815 kubelet[2707]: E0711 00:23:24.008573 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.008815 kubelet[2707]: W0711 00:23:24.008591 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.008815 kubelet[2707]: E0711 00:23:24.008606 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.009021 kubelet[2707]: I0711 00:23:24.008988 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d158cd74-abdf-48f8-9025-ccae8e128169-socket-dir\") pod \"csi-node-driver-cwbxt\" (UID: \"d158cd74-abdf-48f8-9025-ccae8e128169\") " pod="calico-system/csi-node-driver-cwbxt" Jul 11 00:23:24.010593 kubelet[2707]: E0711 00:23:24.010543 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.010593 kubelet[2707]: W0711 00:23:24.010559 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.010593 kubelet[2707]: E0711 00:23:24.010571 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.010888 kubelet[2707]: E0711 00:23:24.010866 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.010888 kubelet[2707]: W0711 00:23:24.010881 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.010960 kubelet[2707]: E0711 00:23:24.010893 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.012200 kubelet[2707]: E0711 00:23:24.011158 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.012200 kubelet[2707]: W0711 00:23:24.011173 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.012200 kubelet[2707]: E0711 00:23:24.011182 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.012200 kubelet[2707]: I0711 00:23:24.011275 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d158cd74-abdf-48f8-9025-ccae8e128169-kubelet-dir\") pod \"csi-node-driver-cwbxt\" (UID: \"d158cd74-abdf-48f8-9025-ccae8e128169\") " pod="calico-system/csi-node-driver-cwbxt" Jul 11 00:23:24.012200 kubelet[2707]: E0711 00:23:24.011632 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.012200 kubelet[2707]: W0711 00:23:24.011676 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.012200 kubelet[2707]: E0711 00:23:24.011688 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.012200 kubelet[2707]: E0711 00:23:24.011932 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.012200 kubelet[2707]: W0711 00:23:24.011940 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.012552 kubelet[2707]: E0711 00:23:24.011950 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.012552 kubelet[2707]: E0711 00:23:24.012263 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.012552 kubelet[2707]: W0711 00:23:24.012271 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.012552 kubelet[2707]: E0711 00:23:24.012280 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.012766 kubelet[2707]: E0711 00:23:24.012568 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.012766 kubelet[2707]: W0711 00:23:24.012576 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.012766 kubelet[2707]: E0711 00:23:24.012584 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.014045 kubelet[2707]: E0711 00:23:24.012878 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.014045 kubelet[2707]: W0711 00:23:24.012890 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.014045 kubelet[2707]: E0711 00:23:24.012900 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.014045 kubelet[2707]: E0711 00:23:24.013143 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.014045 kubelet[2707]: W0711 00:23:24.013150 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.014045 kubelet[2707]: E0711 00:23:24.013158 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.014045 kubelet[2707]: E0711 00:23:24.013524 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.014045 kubelet[2707]: W0711 00:23:24.013532 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.014045 kubelet[2707]: E0711 00:23:24.013541 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.018792 systemd[1]: Started cri-containerd-a6e8ba97b7ffa8db62fbdd8eb643464a442698bb6af1ce740c4c909245d0cf13.scope - libcontainer container a6e8ba97b7ffa8db62fbdd8eb643464a442698bb6af1ce740c4c909245d0cf13. Jul 11 00:23:24.076885 containerd[1550]: time="2025-07-11T00:23:24.076825868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pt7tk,Uid:0985ade0-b1c7-42fe-8176-ff5804402265,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6e8ba97b7ffa8db62fbdd8eb643464a442698bb6af1ce740c4c909245d0cf13\"" Jul 11 00:23:24.114009 kubelet[2707]: E0711 00:23:24.113940 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.114009 kubelet[2707]: W0711 00:23:24.113976 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.114009 kubelet[2707]: E0711 00:23:24.114002 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.114805 kubelet[2707]: I0711 00:23:24.114038 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d158cd74-abdf-48f8-9025-ccae8e128169-varrun\") pod \"csi-node-driver-cwbxt\" (UID: \"d158cd74-abdf-48f8-9025-ccae8e128169\") " pod="calico-system/csi-node-driver-cwbxt" Jul 11 00:23:24.114805 kubelet[2707]: E0711 00:23:24.114478 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.114805 kubelet[2707]: W0711 00:23:24.114514 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.114805 kubelet[2707]: E0711 00:23:24.114544 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.114975 kubelet[2707]: E0711 00:23:24.114952 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.114975 kubelet[2707]: W0711 00:23:24.114967 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.115043 kubelet[2707]: E0711 00:23:24.114978 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.115430 kubelet[2707]: E0711 00:23:24.115390 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.115430 kubelet[2707]: W0711 00:23:24.115413 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.115430 kubelet[2707]: E0711 00:23:24.115425 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.115699 kubelet[2707]: E0711 00:23:24.115668 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.115699 kubelet[2707]: W0711 00:23:24.115683 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.115699 kubelet[2707]: E0711 00:23:24.115694 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.116022 kubelet[2707]: E0711 00:23:24.116002 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.116022 kubelet[2707]: W0711 00:23:24.116017 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.116101 kubelet[2707]: E0711 00:23:24.116030 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.116295 kubelet[2707]: E0711 00:23:24.116274 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.116295 kubelet[2707]: W0711 00:23:24.116288 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.116392 kubelet[2707]: E0711 00:23:24.116298 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.116547 kubelet[2707]: E0711 00:23:24.116529 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.116547 kubelet[2707]: W0711 00:23:24.116543 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.116594 kubelet[2707]: E0711 00:23:24.116553 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.116828 kubelet[2707]: E0711 00:23:24.116806 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.116828 kubelet[2707]: W0711 00:23:24.116820 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.116910 kubelet[2707]: E0711 00:23:24.116832 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.117108 kubelet[2707]: E0711 00:23:24.117087 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.117108 kubelet[2707]: W0711 00:23:24.117101 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.117165 kubelet[2707]: E0711 00:23:24.117114 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.117404 kubelet[2707]: E0711 00:23:24.117386 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.117404 kubelet[2707]: W0711 00:23:24.117400 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.117491 kubelet[2707]: E0711 00:23:24.117411 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.117656 kubelet[2707]: E0711 00:23:24.117639 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.117656 kubelet[2707]: W0711 00:23:24.117652 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.117699 kubelet[2707]: E0711 00:23:24.117663 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.117721 kubelet[2707]: I0711 00:23:24.117694 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbxk7\" (UniqueName: \"kubernetes.io/projected/d158cd74-abdf-48f8-9025-ccae8e128169-kube-api-access-kbxk7\") pod \"csi-node-driver-cwbxt\" (UID: \"d158cd74-abdf-48f8-9025-ccae8e128169\") " pod="calico-system/csi-node-driver-cwbxt" Jul 11 00:23:24.118055 kubelet[2707]: E0711 00:23:24.118012 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.118055 kubelet[2707]: W0711 00:23:24.118039 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.118134 kubelet[2707]: E0711 00:23:24.118060 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.118375 kubelet[2707]: E0711 00:23:24.118353 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.118375 kubelet[2707]: W0711 00:23:24.118372 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.118462 kubelet[2707]: E0711 00:23:24.118385 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.118629 kubelet[2707]: E0711 00:23:24.118596 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.118629 kubelet[2707]: W0711 00:23:24.118623 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.118629 kubelet[2707]: E0711 00:23:24.118632 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.118834 kubelet[2707]: E0711 00:23:24.118815 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.118834 kubelet[2707]: W0711 00:23:24.118827 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.118898 kubelet[2707]: E0711 00:23:24.118836 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.119034 kubelet[2707]: E0711 00:23:24.119017 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.119034 kubelet[2707]: W0711 00:23:24.119028 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.119034 kubelet[2707]: E0711 00:23:24.119038 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.119272 kubelet[2707]: E0711 00:23:24.119249 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.119272 kubelet[2707]: W0711 00:23:24.119265 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.119356 kubelet[2707]: E0711 00:23:24.119276 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.119502 kubelet[2707]: E0711 00:23:24.119482 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.119502 kubelet[2707]: W0711 00:23:24.119496 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.119571 kubelet[2707]: E0711 00:23:24.119506 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.232082 kubelet[2707]: E0711 00:23:24.232039 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.232082 kubelet[2707]: W0711 00:23:24.232068 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.232082 kubelet[2707]: E0711 00:23:24.232093 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:25.421077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485034826.mount: Deactivated successfully. Jul 11 00:23:25.859777 containerd[1550]: time="2025-07-11T00:23:25.859551321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:25.861069 containerd[1550]: time="2025-07-11T00:23:25.860978143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 11 00:23:25.863519 containerd[1550]: time="2025-07-11T00:23:25.863408914Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:25.866255 containerd[1550]: time="2025-07-11T00:23:25.866203252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:25.866751 containerd[1550]: time="2025-07-11T00:23:25.866711759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.185644463s" Jul 11 00:23:25.866751 containerd[1550]: time="2025-07-11T00:23:25.866746010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 11 00:23:25.868063 containerd[1550]: time="2025-07-11T00:23:25.867973353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 00:23:25.884776 containerd[1550]: time="2025-07-11T00:23:25.884683746Z" level=info msg="CreateContainer within sandbox \"6d4711a732a5e201db782cfb1abbb657344bcb1c2bc5781161ea32aec6f11a44\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 00:23:25.904359 containerd[1550]: time="2025-07-11T00:23:25.902587241Z" level=info msg="Container a624b8795067df32cfc01dd01015b91b40a844878e5700fcabd9af818e12da1e: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:23:25.924612 kubelet[2707]: E0711 00:23:25.924553 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cwbxt" podUID="d158cd74-abdf-48f8-9025-ccae8e128169" Jul 11 00:23:26.009384 containerd[1550]: time="2025-07-11T00:23:26.008116557Z" level=info msg="CreateContainer within sandbox \"6d4711a732a5e201db782cfb1abbb657344bcb1c2bc5781161ea32aec6f11a44\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a624b8795067df32cfc01dd01015b91b40a844878e5700fcabd9af818e12da1e\"" Jul 11 00:23:26.009384 containerd[1550]: time="2025-07-11T00:23:26.009149671Z" level=info msg="StartContainer for 
\"a624b8795067df32cfc01dd01015b91b40a844878e5700fcabd9af818e12da1e\"" Jul 11 00:23:26.010685 containerd[1550]: time="2025-07-11T00:23:26.010641594Z" level=info msg="connecting to shim a624b8795067df32cfc01dd01015b91b40a844878e5700fcabd9af818e12da1e" address="unix:///run/containerd/s/b9e0f6e39c1185ca74adcf7982df9531a97f137e77f952e82e8eb3792c6c3dec" protocol=ttrpc version=3 Jul 11 00:23:26.050716 systemd[1]: Started cri-containerd-a624b8795067df32cfc01dd01015b91b40a844878e5700fcabd9af818e12da1e.scope - libcontainer container a624b8795067df32cfc01dd01015b91b40a844878e5700fcabd9af818e12da1e. Jul 11 00:23:26.292445 containerd[1550]: time="2025-07-11T00:23:26.292372400Z" level=info msg="StartContainer for \"a624b8795067df32cfc01dd01015b91b40a844878e5700fcabd9af818e12da1e\" returns successfully" Jul 11 00:23:27.123775 kubelet[2707]: E0711 00:23:27.123343 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:27.136325 kubelet[2707]: E0711 00:23:27.136275 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.136325 kubelet[2707]: W0711 00:23:27.136299 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.182259 kubelet[2707]: E0711 00:23:27.182190 2707 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:27.203784 kubelet[2707]: I0711 00:23:27.203624 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-866cd797c4-glmpd" podStartSLOduration=2.016377604 podStartE2EDuration="4.203603504s" podCreationTimestamp="2025-07-11 00:23:23 +0000 UTC" firstStartedPulling="2025-07-11 00:23:23.680455416 +0000 UTC m=+23.845651108" lastFinishedPulling="2025-07-11 00:23:25.867681316 +0000 UTC m=+26.032877008" observedRunningTime="2025-07-11 00:23:27.203493875 +0000 UTC m=+27.368689577" watchObservedRunningTime="2025-07-11 00:23:27.203603504 +0000 UTC m=+27.368799196" Jul 11 00:23:27.600249 containerd[1550]: time="2025-07-11T00:23:27.600165799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:27.601390 containerd[1550]: time="2025-07-11T00:23:27.601354508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 11 00:23:27.602814 containerd[1550]: time="2025-07-11T00:23:27.602759487Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:27.605603 containerd[1550]: time="2025-07-11T00:23:27.605555750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:27.606488 containerd[1550]: time="2025-07-11T00:23:27.606460525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.738455465s" Jul 11 00:23:27.606558 containerd[1550]: time="2025-07-11T00:23:27.606490740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 11 00:23:27.613125 containerd[1550]: time="2025-07-11T00:23:27.613057296Z" level=info msg="CreateContainer within sandbox \"a6e8ba97b7ffa8db62fbdd8eb643464a442698bb6af1ce740c4c909245d0cf13\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:23:27.625595 containerd[1550]: time="2025-07-11T00:23:27.625545535Z" level=info msg="Container bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:23:27.637465 containerd[1550]: time="2025-07-11T00:23:27.637389619Z" level=info msg="CreateContainer within sandbox \"a6e8ba97b7ffa8db62fbdd8eb643464a442698bb6af1ce740c4c909245d0cf13\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82\"" Jul 11 00:23:27.638167 containerd[1550]: time="2025-07-11T00:23:27.638072252Z" level=info msg="StartContainer for \"bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82\"" Jul 11 00:23:27.640057 containerd[1550]: time="2025-07-11T00:23:27.640019732Z" level=info msg="connecting to shim bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82" address="unix:///run/containerd/s/c6c73bd6594454cff446bfee80d34408fd282f5c0ba23ea9f80c1fbfbbdd3d95" protocol=ttrpc version=3 Jul 11 00:23:27.666631 systemd[1]: Started cri-containerd-bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82.scope - libcontainer container bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82. 
Jul 11 00:23:27.720264 containerd[1550]: time="2025-07-11T00:23:27.720205812Z" level=info msg="StartContainer for \"bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82\" returns successfully" Jul 11 00:23:27.732122 systemd[1]: cri-containerd-bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82.scope: Deactivated successfully. Jul 11 00:23:27.735971 containerd[1550]: time="2025-07-11T00:23:27.735934226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82\" id:\"bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82\" pid:3420 exited_at:{seconds:1752193407 nanos:735411051}" Jul 11 00:23:27.736064 containerd[1550]: time="2025-07-11T00:23:27.736025461Z" level=info msg="received exit event container_id:\"bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82\" id:\"bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82\" pid:3420 exited_at:{seconds:1752193407 nanos:735411051}" Jul 11 00:23:27.762431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfb8081eba93cb0d4b53c5b45a8210b595c8535532c122ec56cd4e5b45026c82-rootfs.mount: Deactivated successfully. 
Jul 11 00:23:27.924927 kubelet[2707]: E0711 00:23:27.924635 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cwbxt" podUID="d158cd74-abdf-48f8-9025-ccae8e128169" Jul 11 00:23:28.127248 kubelet[2707]: I0711 00:23:28.127187 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:23:28.128319 kubelet[2707]: E0711 00:23:28.127898 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:29.133365 containerd[1550]: time="2025-07-11T00:23:29.132889009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:23:29.925147 kubelet[2707]: E0711 00:23:29.925072 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cwbxt" podUID="d158cd74-abdf-48f8-9025-ccae8e128169" Jul 11 00:23:31.924476 kubelet[2707]: E0711 00:23:31.924385 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cwbxt" podUID="d158cd74-abdf-48f8-9025-ccae8e128169" Jul 11 00:23:32.010494 containerd[1550]: time="2025-07-11T00:23:32.010410483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:32.012591 containerd[1550]: time="2025-07-11T00:23:32.012522742Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 11 00:23:32.014802 containerd[1550]: time="2025-07-11T00:23:32.014747174Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:32.019580 containerd[1550]: time="2025-07-11T00:23:32.019290010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:32.021237 containerd[1550]: time="2025-07-11T00:23:32.021178330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.88824455s" Jul 11 00:23:32.021237 containerd[1550]: time="2025-07-11T00:23:32.021218293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 11 00:23:32.029735 containerd[1550]: time="2025-07-11T00:23:32.029672385Z" level=info msg="CreateContainer within sandbox \"a6e8ba97b7ffa8db62fbdd8eb643464a442698bb6af1ce740c4c909245d0cf13\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:23:32.043819 containerd[1550]: time="2025-07-11T00:23:32.043731584Z" level=info msg="Container f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:23:32.057067 containerd[1550]: time="2025-07-11T00:23:32.057007839Z" level=info msg="CreateContainer within sandbox \"a6e8ba97b7ffa8db62fbdd8eb643464a442698bb6af1ce740c4c909245d0cf13\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7\"" Jul 11 00:23:32.057796 containerd[1550]: time="2025-07-11T00:23:32.057745890Z" level=info msg="StartContainer for \"f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7\"" Jul 11 00:23:32.059777 containerd[1550]: time="2025-07-11T00:23:32.059733853Z" level=info msg="connecting to shim f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7" address="unix:///run/containerd/s/c6c73bd6594454cff446bfee80d34408fd282f5c0ba23ea9f80c1fbfbbdd3d95" protocol=ttrpc version=3 Jul 11 00:23:32.098779 systemd[1]: Started cri-containerd-f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7.scope - libcontainer container f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7. Jul 11 00:23:32.291435 containerd[1550]: time="2025-07-11T00:23:32.291380627Z" level=info msg="StartContainer for \"f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7\" returns successfully" Jul 11 00:23:33.924578 kubelet[2707]: E0711 00:23:33.924512 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cwbxt" podUID="d158cd74-abdf-48f8-9025-ccae8e128169" Jul 11 00:23:34.785821 systemd[1]: cri-containerd-f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7.scope: Deactivated successfully. Jul 11 00:23:34.786828 systemd[1]: cri-containerd-f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7.scope: Consumed 749ms CPU time, 176.7M memory peak, 3.1M read from disk, 171.2M written to disk. 
Jul 11 00:23:34.810980 containerd[1550]: time="2025-07-11T00:23:34.810828047Z" level=info msg="received exit event container_id:\"f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7\" id:\"f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7\" pid:3481 exited_at:{seconds:1752193414 nanos:788039214}" Jul 11 00:23:34.815667 containerd[1550]: time="2025-07-11T00:23:34.815582083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7\" id:\"f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7\" pid:3481 exited_at:{seconds:1752193414 nanos:788039214}" Jul 11 00:23:34.852107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6515de14c6e31ec66d2010f5ffad1c74923c6aa1529c2eb43f09045990027d7-rootfs.mount: Deactivated successfully. Jul 11 00:23:34.987031 kubelet[2707]: I0711 00:23:34.986977 2707 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 00:23:35.406089 systemd[1]: Created slice kubepods-burstable-pod0a35ba07_09e7_4ff9_a5d7_62f1c6d02ff8.slice - libcontainer container kubepods-burstable-pod0a35ba07_09e7_4ff9_a5d7_62f1c6d02ff8.slice. Jul 11 00:23:35.487066 systemd[1]: Created slice kubepods-besteffort-pod9ffa1684_c024_4b5d_b78a_3599ed95de14.slice - libcontainer container kubepods-besteffort-pod9ffa1684_c024_4b5d_b78a_3599ed95de14.slice. Jul 11 00:23:35.499623 systemd[1]: Created slice kubepods-burstable-pod8c6e936e_c0ab_46d4_ab44_49e09c4576a1.slice - libcontainer container kubepods-burstable-pod8c6e936e_c0ab_46d4_ab44_49e09c4576a1.slice. 
Jul 11 00:23:35.507929 kubelet[2707]: I0711 00:23:35.507877 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8-config-volume\") pod \"coredns-674b8bbfcf-frxnl\" (UID: \"0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8\") " pod="kube-system/coredns-674b8bbfcf-frxnl" Jul 11 00:23:35.507929 kubelet[2707]: I0711 00:23:35.507925 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njlnk\" (UniqueName: \"kubernetes.io/projected/0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8-kube-api-access-njlnk\") pod \"coredns-674b8bbfcf-frxnl\" (UID: \"0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8\") " pod="kube-system/coredns-674b8bbfcf-frxnl" Jul 11 00:23:35.511367 systemd[1]: Created slice kubepods-besteffort-pod4d47097b_30e2_4e83_bfe1_30bdae2e8116.slice - libcontainer container kubepods-besteffort-pod4d47097b_30e2_4e83_bfe1_30bdae2e8116.slice. Jul 11 00:23:35.522386 systemd[1]: Created slice kubepods-besteffort-pod8c8df196_6cec_44ee_8ef2_38e60eef6990.slice - libcontainer container kubepods-besteffort-pod8c8df196_6cec_44ee_8ef2_38e60eef6990.slice. Jul 11 00:23:35.534030 systemd[1]: Created slice kubepods-besteffort-pod1421f787_a419_4d08_9ae2_a92e4a3e603a.slice - libcontainer container kubepods-besteffort-pod1421f787_a419_4d08_9ae2_a92e4a3e603a.slice. Jul 11 00:23:35.541400 systemd[1]: Created slice kubepods-besteffort-pod5feacf03_5a7f_49a2_9aad_9dd21cd054c6.slice - libcontainer container kubepods-besteffort-pod5feacf03_5a7f_49a2_9aad_9dd21cd054c6.slice. 
Jul 11 00:23:35.608670 kubelet[2707]: I0711 00:23:35.608559 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1421f787-a419-4d08-9ae2-a92e4a3e603a-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-xb2lq\" (UID: \"1421f787-a419-4d08-9ae2-a92e4a3e603a\") " pod="calico-system/goldmane-768f4c5c69-xb2lq" Jul 11 00:23:35.608670 kubelet[2707]: I0711 00:23:35.608643 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc4h7\" (UniqueName: \"kubernetes.io/projected/1421f787-a419-4d08-9ae2-a92e4a3e603a-kube-api-access-wc4h7\") pod \"goldmane-768f4c5c69-xb2lq\" (UID: \"1421f787-a419-4d08-9ae2-a92e4a3e603a\") " pod="calico-system/goldmane-768f4c5c69-xb2lq" Jul 11 00:23:35.608670 kubelet[2707]: I0711 00:23:35.608670 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7td\" (UniqueName: \"kubernetes.io/projected/9ffa1684-c024-4b5d-b78a-3599ed95de14-kube-api-access-gx7td\") pod \"whisker-5f969fd689-r27dt\" (UID: \"9ffa1684-c024-4b5d-b78a-3599ed95de14\") " pod="calico-system/whisker-5f969fd689-r27dt" Jul 11 00:23:35.608670 kubelet[2707]: I0711 00:23:35.608689 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bp7\" (UniqueName: \"kubernetes.io/projected/8c6e936e-c0ab-46d4-ab44-49e09c4576a1-kube-api-access-q8bp7\") pod \"coredns-674b8bbfcf-pbmkq\" (UID: \"8c6e936e-c0ab-46d4-ab44-49e09c4576a1\") " pod="kube-system/coredns-674b8bbfcf-pbmkq" Jul 11 00:23:35.609084 kubelet[2707]: I0711 00:23:35.608710 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4d47097b-30e2-4e83-bfe1-30bdae2e8116-calico-apiserver-certs\") pod \"calico-apiserver-6cc5d4c775-jnkmt\" (UID: 
\"4d47097b-30e2-4e83-bfe1-30bdae2e8116\") " pod="calico-apiserver/calico-apiserver-6cc5d4c775-jnkmt" Jul 11 00:23:35.609084 kubelet[2707]: I0711 00:23:35.608732 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrh7\" (UniqueName: \"kubernetes.io/projected/4d47097b-30e2-4e83-bfe1-30bdae2e8116-kube-api-access-nkrh7\") pod \"calico-apiserver-6cc5d4c775-jnkmt\" (UID: \"4d47097b-30e2-4e83-bfe1-30bdae2e8116\") " pod="calico-apiserver/calico-apiserver-6cc5d4c775-jnkmt" Jul 11 00:23:35.609084 kubelet[2707]: I0711 00:23:35.608789 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1421f787-a419-4d08-9ae2-a92e4a3e603a-config\") pod \"goldmane-768f4c5c69-xb2lq\" (UID: \"1421f787-a419-4d08-9ae2-a92e4a3e603a\") " pod="calico-system/goldmane-768f4c5c69-xb2lq" Jul 11 00:23:35.609084 kubelet[2707]: I0711 00:23:35.608813 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1421f787-a419-4d08-9ae2-a92e4a3e603a-goldmane-key-pair\") pod \"goldmane-768f4c5c69-xb2lq\" (UID: \"1421f787-a419-4d08-9ae2-a92e4a3e603a\") " pod="calico-system/goldmane-768f4c5c69-xb2lq" Jul 11 00:23:35.609084 kubelet[2707]: I0711 00:23:35.608841 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c6e936e-c0ab-46d4-ab44-49e09c4576a1-config-volume\") pod \"coredns-674b8bbfcf-pbmkq\" (UID: \"8c6e936e-c0ab-46d4-ab44-49e09c4576a1\") " pod="kube-system/coredns-674b8bbfcf-pbmkq" Jul 11 00:23:35.610459 kubelet[2707]: I0711 00:23:35.608868 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8c8df196-6cec-44ee-8ef2-38e60eef6990-tigera-ca-bundle\") pod \"calico-kube-controllers-687ddfff9-hskm4\" (UID: \"8c8df196-6cec-44ee-8ef2-38e60eef6990\") " pod="calico-system/calico-kube-controllers-687ddfff9-hskm4" Jul 11 00:23:35.610459 kubelet[2707]: I0711 00:23:35.608894 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ffa1684-c024-4b5d-b78a-3599ed95de14-whisker-backend-key-pair\") pod \"whisker-5f969fd689-r27dt\" (UID: \"9ffa1684-c024-4b5d-b78a-3599ed95de14\") " pod="calico-system/whisker-5f969fd689-r27dt" Jul 11 00:23:35.610459 kubelet[2707]: I0711 00:23:35.608915 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ffa1684-c024-4b5d-b78a-3599ed95de14-whisker-ca-bundle\") pod \"whisker-5f969fd689-r27dt\" (UID: \"9ffa1684-c024-4b5d-b78a-3599ed95de14\") " pod="calico-system/whisker-5f969fd689-r27dt" Jul 11 00:23:35.610459 kubelet[2707]: I0711 00:23:35.608933 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4pl6\" (UniqueName: \"kubernetes.io/projected/8c8df196-6cec-44ee-8ef2-38e60eef6990-kube-api-access-t4pl6\") pod \"calico-kube-controllers-687ddfff9-hskm4\" (UID: \"8c8df196-6cec-44ee-8ef2-38e60eef6990\") " pod="calico-system/calico-kube-controllers-687ddfff9-hskm4" Jul 11 00:23:35.709948 kubelet[2707]: E0711 00:23:35.709722 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:35.710866 containerd[1550]: time="2025-07-11T00:23:35.710579278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-frxnl,Uid:0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8,Namespace:kube-system,Attempt:0,}" Jul 11 
00:23:35.710992 kubelet[2707]: I0711 00:23:35.710915 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5feacf03-5a7f-49a2-9aad-9dd21cd054c6-calico-apiserver-certs\") pod \"calico-apiserver-6cc5d4c775-dmmqk\" (UID: \"5feacf03-5a7f-49a2-9aad-9dd21cd054c6\") " pod="calico-apiserver/calico-apiserver-6cc5d4c775-dmmqk" Jul 11 00:23:35.711047 kubelet[2707]: I0711 00:23:35.711001 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9rg9\" (UniqueName: \"kubernetes.io/projected/5feacf03-5a7f-49a2-9aad-9dd21cd054c6-kube-api-access-h9rg9\") pod \"calico-apiserver-6cc5d4c775-dmmqk\" (UID: \"5feacf03-5a7f-49a2-9aad-9dd21cd054c6\") " pod="calico-apiserver/calico-apiserver-6cc5d4c775-dmmqk" Jul 11 00:23:35.791768 containerd[1550]: time="2025-07-11T00:23:35.791371900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f969fd689-r27dt,Uid:9ffa1684-c024-4b5d-b78a-3599ed95de14,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:35.806002 kubelet[2707]: E0711 00:23:35.804890 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:35.806696 containerd[1550]: time="2025-07-11T00:23:35.806637651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbmkq,Uid:8c6e936e-c0ab-46d4-ab44-49e09c4576a1,Namespace:kube-system,Attempt:0,}" Jul 11 00:23:35.820574 containerd[1550]: time="2025-07-11T00:23:35.820511646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc5d4c775-jnkmt,Uid:4d47097b-30e2-4e83-bfe1-30bdae2e8116,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:23:35.830423 containerd[1550]: time="2025-07-11T00:23:35.830318248Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-687ddfff9-hskm4,Uid:8c8df196-6cec-44ee-8ef2-38e60eef6990,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:35.838473 containerd[1550]: time="2025-07-11T00:23:35.838398254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xb2lq,Uid:1421f787-a419-4d08-9ae2-a92e4a3e603a,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:35.858367 containerd[1550]: time="2025-07-11T00:23:35.858016978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc5d4c775-dmmqk,Uid:5feacf03-5a7f-49a2-9aad-9dd21cd054c6,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:23:35.905799 containerd[1550]: time="2025-07-11T00:23:35.905615755Z" level=error msg="Failed to destroy network for sandbox \"ff8f7ef2e20c6717e03f7a48df9c1e22713a4aea6dce8e34675349e2636a78e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:35.910142 systemd[1]: run-netns-cni\x2d340f0b49\x2d42fe\x2dccd2\x2ddeae\x2d4191e57c8f26.mount: Deactivated successfully. Jul 11 00:23:35.936027 systemd[1]: Created slice kubepods-besteffort-podd158cd74_abdf_48f8_9025_ccae8e128169.slice - libcontainer container kubepods-besteffort-podd158cd74_abdf_48f8_9025_ccae8e128169.slice. 
Jul 11 00:23:35.939575 containerd[1550]: time="2025-07-11T00:23:35.939370462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-frxnl,Uid:0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8f7ef2e20c6717e03f7a48df9c1e22713a4aea6dce8e34675349e2636a78e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:35.942231 containerd[1550]: time="2025-07-11T00:23:35.941846785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cwbxt,Uid:d158cd74-abdf-48f8-9025-ccae8e128169,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:35.952538 containerd[1550]: time="2025-07-11T00:23:35.952463493Z" level=error msg="Failed to destroy network for sandbox \"d30f947d5e9565f4d351948c5c8361e26b105e9014674cbc14f21a20f003aa91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:35.956452 systemd[1]: run-netns-cni\x2d5528271e\x2d01b0\x2d4da6\x2d91d2\x2deadc4a3e4cf3.mount: Deactivated successfully. 
Jul 11 00:23:35.983790 containerd[1550]: time="2025-07-11T00:23:35.983605308Z" level=error msg="Failed to destroy network for sandbox \"d5992af42d45515553163c1e3178f7029f8c23a66bba00891a8aa3dbe9bf1a10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:35.995350 containerd[1550]: time="2025-07-11T00:23:35.995211210Z" level=error msg="Failed to destroy network for sandbox \"4cbcec3e0fbde9bd89560668181cf7b4defbebcbd59023d58b74ebebf1cbee98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.016697 containerd[1550]: time="2025-07-11T00:23:36.016630961Z" level=error msg="Failed to destroy network for sandbox \"04812a3ad4aaf0ec97ac3374bd1f23789f3c818e09e0ad44e41d9204d8e43143\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.073793 kubelet[2707]: E0711 00:23:36.073616 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8f7ef2e20c6717e03f7a48df9c1e22713a4aea6dce8e34675349e2636a78e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.074427 kubelet[2707]: E0711 00:23:36.073841 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8f7ef2e20c6717e03f7a48df9c1e22713a4aea6dce8e34675349e2636a78e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-frxnl" Jul 11 00:23:36.074427 kubelet[2707]: E0711 00:23:36.073865 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8f7ef2e20c6717e03f7a48df9c1e22713a4aea6dce8e34675349e2636a78e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-frxnl" Jul 11 00:23:36.074427 kubelet[2707]: E0711 00:23:36.073954 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-frxnl_kube-system(0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-frxnl_kube-system(0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff8f7ef2e20c6717e03f7a48df9c1e22713a4aea6dce8e34675349e2636a78e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-frxnl" podUID="0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8" Jul 11 00:23:36.080778 containerd[1550]: time="2025-07-11T00:23:36.080724496Z" level=error msg="Failed to destroy network for sandbox \"1dc4f746c2e3b0906d1735c68ef808647d14707e42961a366273fd87b20e68a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.091172 containerd[1550]: time="2025-07-11T00:23:36.091087499Z" level=error msg="Failed to destroy network for sandbox \"46686d495b32372dbe074f9dcbf2a6af2aa73760e8e96b6d163231ca5fcb2927\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.160498 containerd[1550]: time="2025-07-11T00:23:36.160445968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:23:36.181914 containerd[1550]: time="2025-07-11T00:23:36.181846094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f969fd689-r27dt,Uid:9ffa1684-c024-4b5d-b78a-3599ed95de14,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30f947d5e9565f4d351948c5c8361e26b105e9014674cbc14f21a20f003aa91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.182433 kubelet[2707]: E0711 00:23:36.182237 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30f947d5e9565f4d351948c5c8361e26b105e9014674cbc14f21a20f003aa91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.182433 kubelet[2707]: E0711 00:23:36.182319 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30f947d5e9565f4d351948c5c8361e26b105e9014674cbc14f21a20f003aa91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f969fd689-r27dt" Jul 11 00:23:36.182433 kubelet[2707]: E0711 00:23:36.182374 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"d30f947d5e9565f4d351948c5c8361e26b105e9014674cbc14f21a20f003aa91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f969fd689-r27dt" Jul 11 00:23:36.182677 kubelet[2707]: E0711 00:23:36.182445 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f969fd689-r27dt_calico-system(9ffa1684-c024-4b5d-b78a-3599ed95de14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f969fd689-r27dt_calico-system(9ffa1684-c024-4b5d-b78a-3599ed95de14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d30f947d5e9565f4d351948c5c8361e26b105e9014674cbc14f21a20f003aa91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f969fd689-r27dt" podUID="9ffa1684-c024-4b5d-b78a-3599ed95de14" Jul 11 00:23:36.227041 containerd[1550]: time="2025-07-11T00:23:36.226951903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbmkq,Uid:8c6e936e-c0ab-46d4-ab44-49e09c4576a1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5992af42d45515553163c1e3178f7029f8c23a66bba00891a8aa3dbe9bf1a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.227295 kubelet[2707]: E0711 00:23:36.227247 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5992af42d45515553163c1e3178f7029f8c23a66bba00891a8aa3dbe9bf1a10\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.227385 kubelet[2707]: E0711 00:23:36.227306 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5992af42d45515553163c1e3178f7029f8c23a66bba00891a8aa3dbe9bf1a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pbmkq" Jul 11 00:23:36.227385 kubelet[2707]: E0711 00:23:36.227346 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5992af42d45515553163c1e3178f7029f8c23a66bba00891a8aa3dbe9bf1a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pbmkq" Jul 11 00:23:36.227461 kubelet[2707]: E0711 00:23:36.227395 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pbmkq_kube-system(8c6e936e-c0ab-46d4-ab44-49e09c4576a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pbmkq_kube-system(8c6e936e-c0ab-46d4-ab44-49e09c4576a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5992af42d45515553163c1e3178f7029f8c23a66bba00891a8aa3dbe9bf1a10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pbmkq" podUID="8c6e936e-c0ab-46d4-ab44-49e09c4576a1" Jul 11 00:23:36.280406 containerd[1550]: 
time="2025-07-11T00:23:36.279583649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xb2lq,Uid:1421f787-a419-4d08-9ae2-a92e4a3e603a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cbcec3e0fbde9bd89560668181cf7b4defbebcbd59023d58b74ebebf1cbee98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.280671 kubelet[2707]: E0711 00:23:36.280038 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cbcec3e0fbde9bd89560668181cf7b4defbebcbd59023d58b74ebebf1cbee98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.280671 kubelet[2707]: E0711 00:23:36.280183 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cbcec3e0fbde9bd89560668181cf7b4defbebcbd59023d58b74ebebf1cbee98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xb2lq" Jul 11 00:23:36.281228 kubelet[2707]: E0711 00:23:36.280211 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cbcec3e0fbde9bd89560668181cf7b4defbebcbd59023d58b74ebebf1cbee98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xb2lq" Jul 11 00:23:36.282027 
kubelet[2707]: E0711 00:23:36.281858 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-xb2lq_calico-system(1421f787-a419-4d08-9ae2-a92e4a3e603a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-xb2lq_calico-system(1421f787-a419-4d08-9ae2-a92e4a3e603a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cbcec3e0fbde9bd89560668181cf7b4defbebcbd59023d58b74ebebf1cbee98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-xb2lq" podUID="1421f787-a419-4d08-9ae2-a92e4a3e603a" Jul 11 00:23:36.283704 containerd[1550]: time="2025-07-11T00:23:36.283483343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc5d4c775-dmmqk,Uid:5feacf03-5a7f-49a2-9aad-9dd21cd054c6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"04812a3ad4aaf0ec97ac3374bd1f23789f3c818e09e0ad44e41d9204d8e43143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.284228 kubelet[2707]: E0711 00:23:36.284103 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04812a3ad4aaf0ec97ac3374bd1f23789f3c818e09e0ad44e41d9204d8e43143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.284228 kubelet[2707]: E0711 00:23:36.284220 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"04812a3ad4aaf0ec97ac3374bd1f23789f3c818e09e0ad44e41d9204d8e43143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc5d4c775-dmmqk" Jul 11 00:23:36.285494 kubelet[2707]: E0711 00:23:36.284260 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04812a3ad4aaf0ec97ac3374bd1f23789f3c818e09e0ad44e41d9204d8e43143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc5d4c775-dmmqk" Jul 11 00:23:36.285494 kubelet[2707]: E0711 00:23:36.284369 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc5d4c775-dmmqk_calico-apiserver(5feacf03-5a7f-49a2-9aad-9dd21cd054c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cc5d4c775-dmmqk_calico-apiserver(5feacf03-5a7f-49a2-9aad-9dd21cd054c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04812a3ad4aaf0ec97ac3374bd1f23789f3c818e09e0ad44e41d9204d8e43143\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc5d4c775-dmmqk" podUID="5feacf03-5a7f-49a2-9aad-9dd21cd054c6" Jul 11 00:23:36.286128 containerd[1550]: time="2025-07-11T00:23:36.285772678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc5d4c775-jnkmt,Uid:4d47097b-30e2-4e83-bfe1-30bdae2e8116,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"1dc4f746c2e3b0906d1735c68ef808647d14707e42961a366273fd87b20e68a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.287735 kubelet[2707]: E0711 00:23:36.286594 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dc4f746c2e3b0906d1735c68ef808647d14707e42961a366273fd87b20e68a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.287735 kubelet[2707]: E0711 00:23:36.286676 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dc4f746c2e3b0906d1735c68ef808647d14707e42961a366273fd87b20e68a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc5d4c775-jnkmt" Jul 11 00:23:36.287735 kubelet[2707]: E0711 00:23:36.286714 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dc4f746c2e3b0906d1735c68ef808647d14707e42961a366273fd87b20e68a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc5d4c775-jnkmt" Jul 11 00:23:36.287873 kubelet[2707]: E0711 00:23:36.286771 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc5d4c775-jnkmt_calico-apiserver(4d47097b-30e2-4e83-bfe1-30bdae2e8116)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-6cc5d4c775-jnkmt_calico-apiserver(4d47097b-30e2-4e83-bfe1-30bdae2e8116)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1dc4f746c2e3b0906d1735c68ef808647d14707e42961a366273fd87b20e68a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc5d4c775-jnkmt" podUID="4d47097b-30e2-4e83-bfe1-30bdae2e8116" Jul 11 00:23:36.294625 containerd[1550]: time="2025-07-11T00:23:36.294537476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-687ddfff9-hskm4,Uid:8c8df196-6cec-44ee-8ef2-38e60eef6990,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"46686d495b32372dbe074f9dcbf2a6af2aa73760e8e96b6d163231ca5fcb2927\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.294873 kubelet[2707]: E0711 00:23:36.294825 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46686d495b32372dbe074f9dcbf2a6af2aa73760e8e96b6d163231ca5fcb2927\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.294935 kubelet[2707]: E0711 00:23:36.294885 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46686d495b32372dbe074f9dcbf2a6af2aa73760e8e96b6d163231ca5fcb2927\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-687ddfff9-hskm4" Jul 11 00:23:36.294935 kubelet[2707]: E0711 00:23:36.294904 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46686d495b32372dbe074f9dcbf2a6af2aa73760e8e96b6d163231ca5fcb2927\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-687ddfff9-hskm4" Jul 11 00:23:36.295016 kubelet[2707]: E0711 00:23:36.294972 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-687ddfff9-hskm4_calico-system(8c8df196-6cec-44ee-8ef2-38e60eef6990)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-687ddfff9-hskm4_calico-system(8c8df196-6cec-44ee-8ef2-38e60eef6990)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46686d495b32372dbe074f9dcbf2a6af2aa73760e8e96b6d163231ca5fcb2927\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-687ddfff9-hskm4" podUID="8c8df196-6cec-44ee-8ef2-38e60eef6990" Jul 11 00:23:36.356921 containerd[1550]: time="2025-07-11T00:23:36.356822271Z" level=error msg="Failed to destroy network for sandbox \"8b08d5613f4e2bc503502e32a5a9aa39ce7dcf1cd79526cbaf70bdd54901edbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.359174 containerd[1550]: time="2025-07-11T00:23:36.359086832Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-cwbxt,Uid:d158cd74-abdf-48f8-9025-ccae8e128169,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b08d5613f4e2bc503502e32a5a9aa39ce7dcf1cd79526cbaf70bdd54901edbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.359492 kubelet[2707]: E0711 00:23:36.359377 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b08d5613f4e2bc503502e32a5a9aa39ce7dcf1cd79526cbaf70bdd54901edbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.359492 kubelet[2707]: E0711 00:23:36.359451 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b08d5613f4e2bc503502e32a5a9aa39ce7dcf1cd79526cbaf70bdd54901edbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cwbxt" Jul 11 00:23:36.359492 kubelet[2707]: E0711 00:23:36.359472 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b08d5613f4e2bc503502e32a5a9aa39ce7dcf1cd79526cbaf70bdd54901edbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cwbxt" Jul 11 00:23:36.359612 kubelet[2707]: E0711 00:23:36.359535 2707 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cwbxt_calico-system(d158cd74-abdf-48f8-9025-ccae8e128169)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cwbxt_calico-system(d158cd74-abdf-48f8-9025-ccae8e128169)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b08d5613f4e2bc503502e32a5a9aa39ce7dcf1cd79526cbaf70bdd54901edbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cwbxt" podUID="d158cd74-abdf-48f8-9025-ccae8e128169" Jul 11 00:23:36.852954 systemd[1]: run-netns-cni\x2d4f75556c\x2db054\x2d1bc5\x2dbedc\x2db1ee71439e79.mount: Deactivated successfully. Jul 11 00:23:36.853116 systemd[1]: run-netns-cni\x2d3bcd2f28\x2d6217\x2d9230\x2dbacc\x2d868550b6a0aa.mount: Deactivated successfully. Jul 11 00:23:36.853229 systemd[1]: run-netns-cni\x2dd907655b\x2d7fda\x2d844d\x2d1d4d\x2d73cc6638375a.mount: Deactivated successfully. Jul 11 00:23:36.853323 systemd[1]: run-netns-cni\x2da6ca52e0\x2d12bf\x2ddeae\x2d5b73\x2d2e6689752fa2.mount: Deactivated successfully. Jul 11 00:23:36.853453 systemd[1]: run-netns-cni\x2d2d0755dc\x2d0dd4\x2d1641\x2ddd02\x2dde3b482f35c9.mount: Deactivated successfully. 
Jul 11 00:23:46.787807 kubelet[2707]: I0711 00:23:46.787455 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:23:48.024773 kubelet[2707]: E0711 00:23:48.024716 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:48.026152 containerd[1550]: time="2025-07-11T00:23:48.026075909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cwbxt,Uid:d158cd74-abdf-48f8-9025-ccae8e128169,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:48.027878 containerd[1550]: time="2025-07-11T00:23:48.027806794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc5d4c775-jnkmt,Uid:4d47097b-30e2-4e83-bfe1-30bdae2e8116,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:23:48.028075 containerd[1550]: time="2025-07-11T00:23:48.028045493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc5d4c775-dmmqk,Uid:5feacf03-5a7f-49a2-9aad-9dd21cd054c6,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:23:48.028180 containerd[1550]: time="2025-07-11T00:23:48.028155005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f969fd689-r27dt,Uid:9ffa1684-c024-4b5d-b78a-3599ed95de14,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:48.085073 kubelet[2707]: E0711 00:23:48.084761 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:48.086112 containerd[1550]: time="2025-07-11T00:23:48.086048238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-frxnl,Uid:0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8,Namespace:kube-system,Attempt:0,}" Jul 11 00:23:48.171495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655021988.mount: Deactivated successfully. 
Jul 11 00:23:48.925175 kubelet[2707]: E0711 00:23:48.925129 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:48.925653 containerd[1550]: time="2025-07-11T00:23:48.925604417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbmkq,Uid:8c6e936e-c0ab-46d4-ab44-49e09c4576a1,Namespace:kube-system,Attempt:0,}" Jul 11 00:23:49.032578 kubelet[2707]: E0711 00:23:49.032535 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:50.490373 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:56226.service - OpenSSH per-connection server daemon (10.0.0.1:56226). Jul 11 00:23:50.624175 sshd[3787]: Accepted publickey for core from 10.0.0.1 port 56226 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:23:50.627856 sshd-session[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:50.637894 systemd-logind[1525]: New session 8 of user core. Jul 11 00:23:50.645754 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 11 00:23:50.661424 containerd[1550]: time="2025-07-11T00:23:50.661325452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 11 00:23:50.662547 containerd[1550]: time="2025-07-11T00:23:50.661323268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:50.667971 containerd[1550]: time="2025-07-11T00:23:50.667904995Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:50.675137 containerd[1550]: time="2025-07-11T00:23:50.675045161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:50.676960 containerd[1550]: time="2025-07-11T00:23:50.676348683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 14.515825503s" Jul 11 00:23:50.676960 containerd[1550]: time="2025-07-11T00:23:50.676419734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 11 00:23:50.788598 containerd[1550]: time="2025-07-11T00:23:50.788087201Z" level=error msg="Failed to destroy network for sandbox \"d72db4ebd524f6dc983621161b1d58ea67ee295705466abd48fe888433bc1712\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 11 00:23:50.792629 containerd[1550]: time="2025-07-11T00:23:50.792072114Z" level=info msg="CreateContainer within sandbox \"a6e8ba97b7ffa8db62fbdd8eb643464a442698bb6af1ce740c4c909245d0cf13\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:23:50.792629 containerd[1550]: time="2025-07-11T00:23:50.792255942Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc5d4c775-jnkmt,Uid:4d47097b-30e2-4e83-bfe1-30bdae2e8116,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d72db4ebd524f6dc983621161b1d58ea67ee295705466abd48fe888433bc1712\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.796915 kubelet[2707]: E0711 00:23:50.794270 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d72db4ebd524f6dc983621161b1d58ea67ee295705466abd48fe888433bc1712\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.796915 kubelet[2707]: E0711 00:23:50.794437 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d72db4ebd524f6dc983621161b1d58ea67ee295705466abd48fe888433bc1712\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc5d4c775-jnkmt" Jul 11 00:23:50.796915 kubelet[2707]: E0711 00:23:50.794484 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"d72db4ebd524f6dc983621161b1d58ea67ee295705466abd48fe888433bc1712\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc5d4c775-jnkmt" Jul 11 00:23:50.798975 kubelet[2707]: E0711 00:23:50.794580 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc5d4c775-jnkmt_calico-apiserver(4d47097b-30e2-4e83-bfe1-30bdae2e8116)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cc5d4c775-jnkmt_calico-apiserver(4d47097b-30e2-4e83-bfe1-30bdae2e8116)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d72db4ebd524f6dc983621161b1d58ea67ee295705466abd48fe888433bc1712\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc5d4c775-jnkmt" podUID="4d47097b-30e2-4e83-bfe1-30bdae2e8116" Jul 11 00:23:50.799070 containerd[1550]: time="2025-07-11T00:23:50.797287833Z" level=error msg="Failed to destroy network for sandbox \"ba31172ea6a2bc1adbc2a9c2bac31f0435a64a35dbdc42adc3a9a632b55914a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.808120 containerd[1550]: time="2025-07-11T00:23:50.808016852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc5d4c775-dmmqk,Uid:5feacf03-5a7f-49a2-9aad-9dd21cd054c6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba31172ea6a2bc1adbc2a9c2bac31f0435a64a35dbdc42adc3a9a632b55914a2\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.808775 kubelet[2707]: E0711 00:23:50.808395 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba31172ea6a2bc1adbc2a9c2bac31f0435a64a35dbdc42adc3a9a632b55914a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.808775 kubelet[2707]: E0711 00:23:50.808470 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba31172ea6a2bc1adbc2a9c2bac31f0435a64a35dbdc42adc3a9a632b55914a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc5d4c775-dmmqk" Jul 11 00:23:50.808775 kubelet[2707]: E0711 00:23:50.808497 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba31172ea6a2bc1adbc2a9c2bac31f0435a64a35dbdc42adc3a9a632b55914a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc5d4c775-dmmqk" Jul 11 00:23:50.808919 kubelet[2707]: E0711 00:23:50.808556 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc5d4c775-dmmqk_calico-apiserver(5feacf03-5a7f-49a2-9aad-9dd21cd054c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6cc5d4c775-dmmqk_calico-apiserver(5feacf03-5a7f-49a2-9aad-9dd21cd054c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba31172ea6a2bc1adbc2a9c2bac31f0435a64a35dbdc42adc3a9a632b55914a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc5d4c775-dmmqk" podUID="5feacf03-5a7f-49a2-9aad-9dd21cd054c6" Jul 11 00:23:50.839192 containerd[1550]: time="2025-07-11T00:23:50.839093120Z" level=info msg="Container b42b478df43d4201efc91608fad31ccd4515f755c2ba254744f0e48d55e2b6ec: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:23:50.849938 containerd[1550]: time="2025-07-11T00:23:50.849635113Z" level=error msg="Failed to destroy network for sandbox \"48ff03afce4fbdcbe1764700598cabdcc629a09aabf0d2aaecfb3d157e40926d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.851649 containerd[1550]: time="2025-07-11T00:23:50.851578825Z" level=error msg="Failed to destroy network for sandbox \"e35d06bfaeafbd67af515df1b8a86767084ecb37cbb5c3d54876d6913ba5c515\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.851856 containerd[1550]: time="2025-07-11T00:23:50.851836500Z" level=error msg="Failed to destroy network for sandbox \"34c98289ea314d3a0f69849737a15edfb6432eda3a0fd6a1fd9a18b7b7a27545\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.855911 containerd[1550]: time="2025-07-11T00:23:50.855603531Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cwbxt,Uid:d158cd74-abdf-48f8-9025-ccae8e128169,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ff03afce4fbdcbe1764700598cabdcc629a09aabf0d2aaecfb3d157e40926d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.858400 kubelet[2707]: E0711 00:23:50.856273 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ff03afce4fbdcbe1764700598cabdcc629a09aabf0d2aaecfb3d157e40926d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.858400 kubelet[2707]: E0711 00:23:50.857106 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ff03afce4fbdcbe1764700598cabdcc629a09aabf0d2aaecfb3d157e40926d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cwbxt" Jul 11 00:23:50.858400 kubelet[2707]: E0711 00:23:50.857265 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ff03afce4fbdcbe1764700598cabdcc629a09aabf0d2aaecfb3d157e40926d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cwbxt" Jul 11 00:23:50.858582 containerd[1550]: time="2025-07-11T00:23:50.856945173Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f969fd689-r27dt,Uid:9ffa1684-c024-4b5d-b78a-3599ed95de14,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35d06bfaeafbd67af515df1b8a86767084ecb37cbb5c3d54876d6913ba5c515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.858650 kubelet[2707]: E0711 00:23:50.857563 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cwbxt_calico-system(d158cd74-abdf-48f8-9025-ccae8e128169)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cwbxt_calico-system(d158cd74-abdf-48f8-9025-ccae8e128169)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48ff03afce4fbdcbe1764700598cabdcc629a09aabf0d2aaecfb3d157e40926d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cwbxt" podUID="d158cd74-abdf-48f8-9025-ccae8e128169" Jul 11 00:23:50.860561 kubelet[2707]: E0711 00:23:50.860475 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35d06bfaeafbd67af515df1b8a86767084ecb37cbb5c3d54876d6913ba5c515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.860628 kubelet[2707]: E0711 00:23:50.860572 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e35d06bfaeafbd67af515df1b8a86767084ecb37cbb5c3d54876d6913ba5c515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f969fd689-r27dt" Jul 11 00:23:50.860628 kubelet[2707]: E0711 00:23:50.860601 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35d06bfaeafbd67af515df1b8a86767084ecb37cbb5c3d54876d6913ba5c515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f969fd689-r27dt" Jul 11 00:23:50.860725 kubelet[2707]: E0711 00:23:50.860673 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f969fd689-r27dt_calico-system(9ffa1684-c024-4b5d-b78a-3599ed95de14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f969fd689-r27dt_calico-system(9ffa1684-c024-4b5d-b78a-3599ed95de14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e35d06bfaeafbd67af515df1b8a86767084ecb37cbb5c3d54876d6913ba5c515\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f969fd689-r27dt" podUID="9ffa1684-c024-4b5d-b78a-3599ed95de14" Jul 11 00:23:50.863133 containerd[1550]: time="2025-07-11T00:23:50.863037457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-frxnl,Uid:0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c98289ea314d3a0f69849737a15edfb6432eda3a0fd6a1fd9a18b7b7a27545\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.863878 kubelet[2707]: E0711 00:23:50.863626 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c98289ea314d3a0f69849737a15edfb6432eda3a0fd6a1fd9a18b7b7a27545\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.863878 kubelet[2707]: E0711 00:23:50.863724 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c98289ea314d3a0f69849737a15edfb6432eda3a0fd6a1fd9a18b7b7a27545\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-frxnl" Jul 11 00:23:50.863878 kubelet[2707]: E0711 00:23:50.863755 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c98289ea314d3a0f69849737a15edfb6432eda3a0fd6a1fd9a18b7b7a27545\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-frxnl" Jul 11 00:23:50.864007 kubelet[2707]: E0711 00:23:50.863830 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-frxnl_kube-system(0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-frxnl_kube-system(0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"34c98289ea314d3a0f69849737a15edfb6432eda3a0fd6a1fd9a18b7b7a27545\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-frxnl" podUID="0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8" Jul 11 00:23:50.872370 containerd[1550]: time="2025-07-11T00:23:50.872260862Z" level=info msg="CreateContainer within sandbox \"a6e8ba97b7ffa8db62fbdd8eb643464a442698bb6af1ce740c4c909245d0cf13\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b42b478df43d4201efc91608fad31ccd4515f755c2ba254744f0e48d55e2b6ec\"" Jul 11 00:23:50.875861 containerd[1550]: time="2025-07-11T00:23:50.875806354Z" level=info msg="StartContainer for \"b42b478df43d4201efc91608fad31ccd4515f755c2ba254744f0e48d55e2b6ec\"" Jul 11 00:23:50.878370 containerd[1550]: time="2025-07-11T00:23:50.878072710Z" level=info msg="connecting to shim b42b478df43d4201efc91608fad31ccd4515f755c2ba254744f0e48d55e2b6ec" address="unix:///run/containerd/s/c6c73bd6594454cff446bfee80d34408fd282f5c0ba23ea9f80c1fbfbbdd3d95" protocol=ttrpc version=3 Jul 11 00:23:50.882008 containerd[1550]: time="2025-07-11T00:23:50.881902747Z" level=error msg="Failed to destroy network for sandbox \"78c7a5dd87398dbd17e84eeda4bb8db46c6e7e5b4b22c40bfb912caaeb6b8ecb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.886179 containerd[1550]: time="2025-07-11T00:23:50.886116311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbmkq,Uid:8c6e936e-c0ab-46d4-ab44-49e09c4576a1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"78c7a5dd87398dbd17e84eeda4bb8db46c6e7e5b4b22c40bfb912caaeb6b8ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.886516 kubelet[2707]: E0711 00:23:50.886446 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c7a5dd87398dbd17e84eeda4bb8db46c6e7e5b4b22c40bfb912caaeb6b8ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.886577 kubelet[2707]: E0711 00:23:50.886527 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c7a5dd87398dbd17e84eeda4bb8db46c6e7e5b4b22c40bfb912caaeb6b8ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pbmkq" Jul 11 00:23:50.886577 kubelet[2707]: E0711 00:23:50.886556 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c7a5dd87398dbd17e84eeda4bb8db46c6e7e5b4b22c40bfb912caaeb6b8ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pbmkq" Jul 11 00:23:50.886652 kubelet[2707]: E0711 00:23:50.886616 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pbmkq_kube-system(8c6e936e-c0ab-46d4-ab44-49e09c4576a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-pbmkq_kube-system(8c6e936e-c0ab-46d4-ab44-49e09c4576a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78c7a5dd87398dbd17e84eeda4bb8db46c6e7e5b4b22c40bfb912caaeb6b8ecb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pbmkq" podUID="8c6e936e-c0ab-46d4-ab44-49e09c4576a1" Jul 11 00:23:50.906271 sshd[3850]: Connection closed by 10.0.0.1 port 56226 Jul 11 00:23:50.907629 sshd-session[3787]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:50.914064 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:56226.service: Deactivated successfully. Jul 11 00:23:50.917404 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:23:50.922110 systemd-logind[1525]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:23:50.924002 systemd-logind[1525]: Removed session 8. Jul 11 00:23:50.925582 containerd[1550]: time="2025-07-11T00:23:50.925533687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-687ddfff9-hskm4,Uid:8c8df196-6cec-44ee-8ef2-38e60eef6990,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:50.926095 containerd[1550]: time="2025-07-11T00:23:50.926063674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xb2lq,Uid:1421f787-a419-4d08-9ae2-a92e4a3e603a,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:50.954620 systemd[1]: Started cri-containerd-b42b478df43d4201efc91608fad31ccd4515f755c2ba254744f0e48d55e2b6ec.scope - libcontainer container b42b478df43d4201efc91608fad31ccd4515f755c2ba254744f0e48d55e2b6ec. 
Jul 11 00:23:50.992007 containerd[1550]: time="2025-07-11T00:23:50.991930754Z" level=error msg="Failed to destroy network for sandbox \"31a692237cbd23f4453c804acebd6cb412f9faa0f159f3ac6da21425d927d35c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.994255 containerd[1550]: time="2025-07-11T00:23:50.994119267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-687ddfff9-hskm4,Uid:8c8df196-6cec-44ee-8ef2-38e60eef6990,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a692237cbd23f4453c804acebd6cb412f9faa0f159f3ac6da21425d927d35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.995641 kubelet[2707]: E0711 00:23:50.994860 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a692237cbd23f4453c804acebd6cb412f9faa0f159f3ac6da21425d927d35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.995641 kubelet[2707]: E0711 00:23:50.994967 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a692237cbd23f4453c804acebd6cb412f9faa0f159f3ac6da21425d927d35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-687ddfff9-hskm4" Jul 11 00:23:50.995641 kubelet[2707]: E0711 00:23:50.995027 2707 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a692237cbd23f4453c804acebd6cb412f9faa0f159f3ac6da21425d927d35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-687ddfff9-hskm4" Jul 11 00:23:50.995884 kubelet[2707]: E0711 00:23:50.995224 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-687ddfff9-hskm4_calico-system(8c8df196-6cec-44ee-8ef2-38e60eef6990)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-687ddfff9-hskm4_calico-system(8c8df196-6cec-44ee-8ef2-38e60eef6990)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31a692237cbd23f4453c804acebd6cb412f9faa0f159f3ac6da21425d927d35c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-687ddfff9-hskm4" podUID="8c8df196-6cec-44ee-8ef2-38e60eef6990" Jul 11 00:23:51.000798 containerd[1550]: time="2025-07-11T00:23:51.000744854Z" level=error msg="Failed to destroy network for sandbox \"4e9b8b05919d3f77152e9949cec4975d4e119de1ef9bdb3f6ec3d8a9aa2e750d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:51.002716 containerd[1550]: time="2025-07-11T00:23:51.002670354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xb2lq,Uid:1421f787-a419-4d08-9ae2-a92e4a3e603a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"4e9b8b05919d3f77152e9949cec4975d4e119de1ef9bdb3f6ec3d8a9aa2e750d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:51.003072 kubelet[2707]: E0711 00:23:51.003019 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e9b8b05919d3f77152e9949cec4975d4e119de1ef9bdb3f6ec3d8a9aa2e750d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:51.003072 kubelet[2707]: E0711 00:23:51.003081 2707 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e9b8b05919d3f77152e9949cec4975d4e119de1ef9bdb3f6ec3d8a9aa2e750d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xb2lq" Jul 11 00:23:51.003298 kubelet[2707]: E0711 00:23:51.003107 2707 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e9b8b05919d3f77152e9949cec4975d4e119de1ef9bdb3f6ec3d8a9aa2e750d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xb2lq" Jul 11 00:23:51.003459 kubelet[2707]: E0711 00:23:51.003180 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-xb2lq_calico-system(1421f787-a419-4d08-9ae2-a92e4a3e603a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-768f4c5c69-xb2lq_calico-system(1421f787-a419-4d08-9ae2-a92e4a3e603a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e9b8b05919d3f77152e9949cec4975d4e119de1ef9bdb3f6ec3d8a9aa2e750d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-xb2lq" podUID="1421f787-a419-4d08-9ae2-a92e4a3e603a" Jul 11 00:23:51.052012 containerd[1550]: time="2025-07-11T00:23:51.050322212Z" level=info msg="StartContainer for \"b42b478df43d4201efc91608fad31ccd4515f755c2ba254744f0e48d55e2b6ec\" returns successfully" Jul 11 00:23:51.157366 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:23:51.157556 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 11 00:23:51.450504 kubelet[2707]: I0711 00:23:51.450446 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx7td\" (UniqueName: \"kubernetes.io/projected/9ffa1684-c024-4b5d-b78a-3599ed95de14-kube-api-access-gx7td\") pod \"9ffa1684-c024-4b5d-b78a-3599ed95de14\" (UID: \"9ffa1684-c024-4b5d-b78a-3599ed95de14\") " Jul 11 00:23:51.450504 kubelet[2707]: I0711 00:23:51.450501 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ffa1684-c024-4b5d-b78a-3599ed95de14-whisker-ca-bundle\") pod \"9ffa1684-c024-4b5d-b78a-3599ed95de14\" (UID: \"9ffa1684-c024-4b5d-b78a-3599ed95de14\") " Jul 11 00:23:51.450721 kubelet[2707]: I0711 00:23:51.450534 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ffa1684-c024-4b5d-b78a-3599ed95de14-whisker-backend-key-pair\") pod \"9ffa1684-c024-4b5d-b78a-3599ed95de14\" (UID: 
\"9ffa1684-c024-4b5d-b78a-3599ed95de14\") " Jul 11 00:23:51.452380 kubelet[2707]: I0711 00:23:51.452307 2707 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ffa1684-c024-4b5d-b78a-3599ed95de14-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9ffa1684-c024-4b5d-b78a-3599ed95de14" (UID: "9ffa1684-c024-4b5d-b78a-3599ed95de14"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:23:51.456243 kubelet[2707]: I0711 00:23:51.456187 2707 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ffa1684-c024-4b5d-b78a-3599ed95de14-kube-api-access-gx7td" (OuterVolumeSpecName: "kube-api-access-gx7td") pod "9ffa1684-c024-4b5d-b78a-3599ed95de14" (UID: "9ffa1684-c024-4b5d-b78a-3599ed95de14"). InnerVolumeSpecName "kube-api-access-gx7td". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:23:51.456837 kubelet[2707]: I0711 00:23:51.456779 2707 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ffa1684-c024-4b5d-b78a-3599ed95de14-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9ffa1684-c024-4b5d-b78a-3599ed95de14" (UID: "9ffa1684-c024-4b5d-b78a-3599ed95de14"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:23:51.492595 systemd[1]: run-netns-cni\x2d6ef4215b\x2dd146\x2d3efa\x2dbbf2\x2d8c6bdeebe5f3.mount: Deactivated successfully. Jul 11 00:23:51.492720 systemd[1]: run-netns-cni\x2daad9fe43\x2da33e\x2d6a78\x2dc1a6\x2dbb86ba5e4862.mount: Deactivated successfully. Jul 11 00:23:51.492796 systemd[1]: run-netns-cni\x2d5ebb0000\x2d3c2f\x2d9275\x2d5540\x2df810ee0f4d7e.mount: Deactivated successfully. Jul 11 00:23:51.492876 systemd[1]: run-netns-cni\x2d97f9edd9\x2d6bf1\x2d27fd\x2da900\x2d2382a4c28150.mount: Deactivated successfully. 
Jul 11 00:23:51.492964 systemd[1]: run-netns-cni\x2d56c57f06\x2db4a6\x2d97d8\x2d9d45\x2d520444cb3ab3.mount: Deactivated successfully. Jul 11 00:23:51.493062 systemd[1]: var-lib-kubelet-pods-9ffa1684\x2dc024\x2d4b5d\x2db78a\x2d3599ed95de14-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgx7td.mount: Deactivated successfully. Jul 11 00:23:51.493202 systemd[1]: var-lib-kubelet-pods-9ffa1684\x2dc024\x2d4b5d\x2db78a\x2d3599ed95de14-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 00:23:51.551371 kubelet[2707]: I0711 00:23:51.551290 2707 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gx7td\" (UniqueName: \"kubernetes.io/projected/9ffa1684-c024-4b5d-b78a-3599ed95de14-kube-api-access-gx7td\") on node \"localhost\" DevicePath \"\"" Jul 11 00:23:51.551371 kubelet[2707]: I0711 00:23:51.551359 2707 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ffa1684-c024-4b5d-b78a-3599ed95de14-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:23:51.551371 kubelet[2707]: I0711 00:23:51.551368 2707 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ffa1684-c024-4b5d-b78a-3599ed95de14-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:23:51.939528 systemd[1]: Removed slice kubepods-besteffort-pod9ffa1684_c024_4b5d_b78a_3599ed95de14.slice - libcontainer container kubepods-besteffort-pod9ffa1684_c024_4b5d_b78a_3599ed95de14.slice. 
Jul 11 00:23:52.099554 kubelet[2707]: I0711 00:23:52.099257 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pt7tk" podStartSLOduration=2.445211561 podStartE2EDuration="29.099219367s" podCreationTimestamp="2025-07-11 00:23:23 +0000 UTC" firstStartedPulling="2025-07-11 00:23:24.07950432 +0000 UTC m=+24.244700012" lastFinishedPulling="2025-07-11 00:23:50.733512126 +0000 UTC m=+50.898707818" observedRunningTime="2025-07-11 00:23:52.096967262 +0000 UTC m=+52.262162954" watchObservedRunningTime="2025-07-11 00:23:52.099219367 +0000 UTC m=+52.264415059" Jul 11 00:23:52.256159 systemd[1]: Created slice kubepods-besteffort-pod119c246a_71bd_4fe7_abce_c5c399c5a8ae.slice - libcontainer container kubepods-besteffort-pod119c246a_71bd_4fe7_abce_c5c399c5a8ae.slice. Jul 11 00:23:52.343489 containerd[1550]: time="2025-07-11T00:23:52.343416815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b42b478df43d4201efc91608fad31ccd4515f755c2ba254744f0e48d55e2b6ec\" id:\"33e9e4b998888a3efad3685f91ce846f5ad89205703296150d8367635746e072\" pid:4145 exit_status:1 exited_at:{seconds:1752193432 nanos:342880776}" Jul 11 00:23:52.357190 kubelet[2707]: I0711 00:23:52.357062 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/119c246a-71bd-4fe7-abce-c5c399c5a8ae-whisker-backend-key-pair\") pod \"whisker-57964569f5-zjgrf\" (UID: \"119c246a-71bd-4fe7-abce-c5c399c5a8ae\") " pod="calico-system/whisker-57964569f5-zjgrf" Jul 11 00:23:52.357190 kubelet[2707]: I0711 00:23:52.357160 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rktnd\" (UniqueName: \"kubernetes.io/projected/119c246a-71bd-4fe7-abce-c5c399c5a8ae-kube-api-access-rktnd\") pod \"whisker-57964569f5-zjgrf\" (UID: \"119c246a-71bd-4fe7-abce-c5c399c5a8ae\") " 
pod="calico-system/whisker-57964569f5-zjgrf" Jul 11 00:23:52.357190 kubelet[2707]: I0711 00:23:52.357207 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/119c246a-71bd-4fe7-abce-c5c399c5a8ae-whisker-ca-bundle\") pod \"whisker-57964569f5-zjgrf\" (UID: \"119c246a-71bd-4fe7-abce-c5c399c5a8ae\") " pod="calico-system/whisker-57964569f5-zjgrf" Jul 11 00:23:52.562148 containerd[1550]: time="2025-07-11T00:23:52.561607012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57964569f5-zjgrf,Uid:119c246a-71bd-4fe7-abce-c5c399c5a8ae,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:53.159320 containerd[1550]: time="2025-07-11T00:23:53.159258189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b42b478df43d4201efc91608fad31ccd4515f755c2ba254744f0e48d55e2b6ec\" id:\"969bd3b6a6861bd1d9e56a15cbf6f1a4940d83100381ac3db3a6bfb830e0abb9\" pid:4323 exit_status:1 exited_at:{seconds:1752193433 nanos:158820712}" Jul 11 00:23:53.232661 systemd-networkd[1466]: vxlan.calico: Link UP Jul 11 00:23:53.232676 systemd-networkd[1466]: vxlan.calico: Gained carrier Jul 11 00:23:53.285979 systemd-networkd[1466]: calide858974768: Link UP Jul 11 00:23:53.289858 systemd-networkd[1466]: calide858974768: Gained carrier Jul 11 00:23:53.313362 containerd[1550]: 2025-07-11 00:23:52.679 [INFO][4251] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:23:53.313362 containerd[1550]: 2025-07-11 00:23:52.799 [INFO][4251] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--57964569f5--zjgrf-eth0 whisker-57964569f5- calico-system 119c246a-71bd-4fe7-abce-c5c399c5a8ae 978 0 2025-07-11 00:23:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57964569f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-57964569f5-zjgrf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calide858974768 [] [] }} ContainerID="d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" Namespace="calico-system" Pod="whisker-57964569f5-zjgrf" WorkloadEndpoint="localhost-k8s-whisker--57964569f5--zjgrf-" Jul 11 00:23:53.313362 containerd[1550]: 2025-07-11 00:23:52.800 [INFO][4251] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" Namespace="calico-system" Pod="whisker-57964569f5-zjgrf" WorkloadEndpoint="localhost-k8s-whisker--57964569f5--zjgrf-eth0" Jul 11 00:23:53.313362 containerd[1550]: 2025-07-11 00:23:53.189 [INFO][4277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" HandleID="k8s-pod-network.d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" Workload="localhost-k8s-whisker--57964569f5--zjgrf-eth0" Jul 11 00:23:53.313710 containerd[1550]: 2025-07-11 00:23:53.190 [INFO][4277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" HandleID="k8s-pod-network.d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" Workload="localhost-k8s-whisker--57964569f5--zjgrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000430910), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-57964569f5-zjgrf", "timestamp":"2025-07-11 00:23:53.189905498 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:23:53.313710 containerd[1550]: 2025-07-11 00:23:53.190 [INFO][4277] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:23:53.313710 containerd[1550]: 2025-07-11 00:23:53.190 [INFO][4277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:23:53.313710 containerd[1550]: 2025-07-11 00:23:53.191 [INFO][4277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:23:53.313710 containerd[1550]: 2025-07-11 00:23:53.207 [INFO][4277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" host="localhost" Jul 11 00:23:53.313710 containerd[1550]: 2025-07-11 00:23:53.220 [INFO][4277] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:23:53.313710 containerd[1550]: 2025-07-11 00:23:53.228 [INFO][4277] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:23:53.313710 containerd[1550]: 2025-07-11 00:23:53.231 [INFO][4277] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:23:53.313710 containerd[1550]: 2025-07-11 00:23:53.238 [INFO][4277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:23:53.313710 containerd[1550]: 2025-07-11 00:23:53.238 [INFO][4277] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" host="localhost" Jul 11 00:23:53.314046 containerd[1550]: 2025-07-11 00:23:53.242 [INFO][4277] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f Jul 11 00:23:53.314046 containerd[1550]: 2025-07-11 00:23:53.250 [INFO][4277] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" host="localhost" Jul 11 00:23:53.314046 
containerd[1550]: 2025-07-11 00:23:53.259 [INFO][4277] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" host="localhost" Jul 11 00:23:53.314046 containerd[1550]: 2025-07-11 00:23:53.259 [INFO][4277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" host="localhost" Jul 11 00:23:53.314046 containerd[1550]: 2025-07-11 00:23:53.259 [INFO][4277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:23:53.314046 containerd[1550]: 2025-07-11 00:23:53.259 [INFO][4277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" HandleID="k8s-pod-network.d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" Workload="localhost-k8s-whisker--57964569f5--zjgrf-eth0" Jul 11 00:23:53.314245 containerd[1550]: 2025-07-11 00:23:53.267 [INFO][4251] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" Namespace="calico-system" Pod="whisker-57964569f5-zjgrf" WorkloadEndpoint="localhost-k8s-whisker--57964569f5--zjgrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57964569f5--zjgrf-eth0", GenerateName:"whisker-57964569f5-", Namespace:"calico-system", SelfLink:"", UID:"119c246a-71bd-4fe7-abce-c5c399c5a8ae", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57964569f5", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-57964569f5-zjgrf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calide858974768", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:23:53.314245 containerd[1550]: 2025-07-11 00:23:53.267 [INFO][4251] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" Namespace="calico-system" Pod="whisker-57964569f5-zjgrf" WorkloadEndpoint="localhost-k8s-whisker--57964569f5--zjgrf-eth0" Jul 11 00:23:53.314417 containerd[1550]: 2025-07-11 00:23:53.267 [INFO][4251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide858974768 ContainerID="d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" Namespace="calico-system" Pod="whisker-57964569f5-zjgrf" WorkloadEndpoint="localhost-k8s-whisker--57964569f5--zjgrf-eth0" Jul 11 00:23:53.314417 containerd[1550]: 2025-07-11 00:23:53.286 [INFO][4251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" Namespace="calico-system" Pod="whisker-57964569f5-zjgrf" WorkloadEndpoint="localhost-k8s-whisker--57964569f5--zjgrf-eth0" Jul 11 00:23:53.314491 containerd[1550]: 2025-07-11 00:23:53.287 [INFO][4251] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" Namespace="calico-system" Pod="whisker-57964569f5-zjgrf" WorkloadEndpoint="localhost-k8s-whisker--57964569f5--zjgrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57964569f5--zjgrf-eth0", GenerateName:"whisker-57964569f5-", Namespace:"calico-system", SelfLink:"", UID:"119c246a-71bd-4fe7-abce-c5c399c5a8ae", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57964569f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f", Pod:"whisker-57964569f5-zjgrf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calide858974768", MAC:"b2:a6:0f:58:b1:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:23:53.314563 containerd[1550]: 2025-07-11 00:23:53.306 [INFO][4251] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" Namespace="calico-system" Pod="whisker-57964569f5-zjgrf" 
WorkloadEndpoint="localhost-k8s-whisker--57964569f5--zjgrf-eth0" Jul 11 00:23:53.645287 containerd[1550]: time="2025-07-11T00:23:53.645223564Z" level=info msg="connecting to shim d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f" address="unix:///run/containerd/s/2b0710d2e6ed24a92e57f8d9ebba7b941cb623aa518e05f2b35264e30cf8bc02" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:23:53.716653 systemd[1]: Started cri-containerd-d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f.scope - libcontainer container d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f. Jul 11 00:23:53.732780 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:23:53.772799 containerd[1550]: time="2025-07-11T00:23:53.772747808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57964569f5-zjgrf,Uid:119c246a-71bd-4fe7-abce-c5c399c5a8ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f\"" Jul 11 00:23:53.774904 containerd[1550]: time="2025-07-11T00:23:53.774836994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:23:53.928055 kubelet[2707]: I0711 00:23:53.928007 2707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ffa1684-c024-4b5d-b78a-3599ed95de14" path="/var/lib/kubelet/pods/9ffa1684-c024-4b5d-b78a-3599ed95de14/volumes" Jul 11 00:23:54.791800 systemd-networkd[1466]: calide858974768: Gained IPv6LL Jul 11 00:23:54.983610 systemd-networkd[1466]: vxlan.calico: Gained IPv6LL Jul 11 00:23:55.704491 containerd[1550]: time="2025-07-11T00:23:55.704424665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:55.705832 containerd[1550]: time="2025-07-11T00:23:55.705740726Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 11 00:23:55.707480 containerd[1550]: time="2025-07-11T00:23:55.707400622Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:55.711045 containerd[1550]: time="2025-07-11T00:23:55.710985866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:55.712190 containerd[1550]: time="2025-07-11T00:23:55.711681661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.936788944s" Jul 11 00:23:55.712190 containerd[1550]: time="2025-07-11T00:23:55.711737874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 11 00:23:55.718959 containerd[1550]: time="2025-07-11T00:23:55.718901408Z" level=info msg="CreateContainer within sandbox \"d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:23:55.726782 containerd[1550]: time="2025-07-11T00:23:55.726749817Z" level=info msg="Container 5cb97c45be82b01a301e821db1d413d80324eaabb8da1ae94ac62784999f4164: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:23:55.747857 containerd[1550]: time="2025-07-11T00:23:55.747791037Z" level=info msg="CreateContainer within sandbox \"d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5cb97c45be82b01a301e821db1d413d80324eaabb8da1ae94ac62784999f4164\"" Jul 11 00:23:55.749365 containerd[1550]: time="2025-07-11T00:23:55.748511869Z" level=info msg="StartContainer for \"5cb97c45be82b01a301e821db1d413d80324eaabb8da1ae94ac62784999f4164\"" Jul 11 00:23:55.750166 containerd[1550]: time="2025-07-11T00:23:55.750138133Z" level=info msg="connecting to shim 5cb97c45be82b01a301e821db1d413d80324eaabb8da1ae94ac62784999f4164" address="unix:///run/containerd/s/2b0710d2e6ed24a92e57f8d9ebba7b941cb623aa518e05f2b35264e30cf8bc02" protocol=ttrpc version=3 Jul 11 00:23:55.777519 systemd[1]: Started cri-containerd-5cb97c45be82b01a301e821db1d413d80324eaabb8da1ae94ac62784999f4164.scope - libcontainer container 5cb97c45be82b01a301e821db1d413d80324eaabb8da1ae94ac62784999f4164. Jul 11 00:23:55.847270 containerd[1550]: time="2025-07-11T00:23:55.847204867Z" level=info msg="StartContainer for \"5cb97c45be82b01a301e821db1d413d80324eaabb8da1ae94ac62784999f4164\" returns successfully" Jul 11 00:23:55.849098 containerd[1550]: time="2025-07-11T00:23:55.849067979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:23:55.930674 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:56228.service - OpenSSH per-connection server daemon (10.0.0.1:56228). Jul 11 00:23:55.998261 sshd[4515]: Accepted publickey for core from 10.0.0.1 port 56228 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:23:56.000768 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:56.006451 systemd-logind[1525]: New session 9 of user core. Jul 11 00:23:56.013547 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 11 00:23:56.207183 sshd[4517]: Connection closed by 10.0.0.1 port 56228 Jul 11 00:23:56.207575 sshd-session[4515]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:56.212150 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:56228.service: Deactivated successfully. Jul 11 00:23:56.214616 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:23:56.215805 systemd-logind[1525]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:23:56.218136 systemd-logind[1525]: Removed session 9. Jul 11 00:24:00.289858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203958748.mount: Deactivated successfully. Jul 11 00:24:00.446229 containerd[1550]: time="2025-07-11T00:24:00.446144323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:00.447286 containerd[1550]: time="2025-07-11T00:24:00.447252555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 11 00:24:00.448844 containerd[1550]: time="2025-07-11T00:24:00.448762250Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:00.452620 containerd[1550]: time="2025-07-11T00:24:00.452559641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:00.453346 containerd[1550]: time="2025-07-11T00:24:00.453301965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.604200345s" Jul 11 00:24:00.453400 containerd[1550]: time="2025-07-11T00:24:00.453362327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 11 00:24:00.462352 containerd[1550]: time="2025-07-11T00:24:00.462278054Z" level=info msg="CreateContainer within sandbox \"d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:24:00.472671 containerd[1550]: time="2025-07-11T00:24:00.472597911Z" level=info msg="Container 1926290f97a64294e19b5c89b965582d3939b002e48582a48f271bf4b3433d03: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:00.484042 containerd[1550]: time="2025-07-11T00:24:00.483995933Z" level=info msg="CreateContainer within sandbox \"d319c863800753ae9b7fb26f277016e01d590cc651d64ebebef6dd7f09a82a6f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1926290f97a64294e19b5c89b965582d3939b002e48582a48f271bf4b3433d03\"" Jul 11 00:24:00.485433 containerd[1550]: time="2025-07-11T00:24:00.484686832Z" level=info msg="StartContainer for \"1926290f97a64294e19b5c89b965582d3939b002e48582a48f271bf4b3433d03\"" Jul 11 00:24:00.486094 containerd[1550]: time="2025-07-11T00:24:00.486053964Z" level=info msg="connecting to shim 1926290f97a64294e19b5c89b965582d3939b002e48582a48f271bf4b3433d03" address="unix:///run/containerd/s/2b0710d2e6ed24a92e57f8d9ebba7b941cb623aa518e05f2b35264e30cf8bc02" protocol=ttrpc version=3 Jul 11 00:24:00.513639 systemd[1]: Started cri-containerd-1926290f97a64294e19b5c89b965582d3939b002e48582a48f271bf4b3433d03.scope - libcontainer container 1926290f97a64294e19b5c89b965582d3939b002e48582a48f271bf4b3433d03. 
Jul 11 00:24:01.166247 containerd[1550]: time="2025-07-11T00:24:01.166186934Z" level=info msg="StartContainer for \"1926290f97a64294e19b5c89b965582d3939b002e48582a48f271bf4b3433d03\" returns successfully" Jul 11 00:24:01.227759 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:51632.service - OpenSSH per-connection server daemon (10.0.0.1:51632). Jul 11 00:24:01.304267 sshd[4584]: Accepted publickey for core from 10.0.0.1 port 51632 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:01.306629 sshd-session[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:01.312153 systemd-logind[1525]: New session 10 of user core. Jul 11 00:24:01.327711 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:24:01.479504 sshd[4586]: Connection closed by 10.0.0.1 port 51632 Jul 11 00:24:01.479905 sshd-session[4584]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:01.484895 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:51632.service: Deactivated successfully. Jul 11 00:24:01.487107 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:24:01.488080 systemd-logind[1525]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:24:01.489770 systemd-logind[1525]: Removed session 10. 
Jul 11 00:24:02.186794 kubelet[2707]: I0711 00:24:02.186677 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-57964569f5-zjgrf" podStartSLOduration=3.503332397 podStartE2EDuration="10.186657293s" podCreationTimestamp="2025-07-11 00:23:52 +0000 UTC" firstStartedPulling="2025-07-11 00:23:53.774469146 +0000 UTC m=+53.939664838" lastFinishedPulling="2025-07-11 00:24:00.457794042 +0000 UTC m=+60.622989734" observedRunningTime="2025-07-11 00:24:02.185863572 +0000 UTC m=+62.351059274" watchObservedRunningTime="2025-07-11 00:24:02.186657293 +0000 UTC m=+62.351852995" Jul 11 00:24:02.925663 containerd[1550]: time="2025-07-11T00:24:02.925531151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc5d4c775-jnkmt,Uid:4d47097b-30e2-4e83-bfe1-30bdae2e8116,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:24:03.063380 systemd-networkd[1466]: cali770dffe0b59: Link UP Jul 11 00:24:03.063747 systemd-networkd[1466]: cali770dffe0b59: Gained carrier Jul 11 00:24:03.231767 containerd[1550]: 2025-07-11 00:24:02.970 [INFO][4602] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0 calico-apiserver-6cc5d4c775- calico-apiserver 4d47097b-30e2-4e83-bfe1-30bdae2e8116 843 0 2025-07-11 00:23:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cc5d4c775 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cc5d4c775-jnkmt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali770dffe0b59 [] [] }} ContainerID="6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-jnkmt" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-" Jul 11 00:24:03.231767 containerd[1550]: 2025-07-11 00:24:02.970 [INFO][4602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-jnkmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0" Jul 11 00:24:03.231767 containerd[1550]: 2025-07-11 00:24:03.005 [INFO][4617] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" HandleID="k8s-pod-network.6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" Workload="localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0" Jul 11 00:24:03.232073 containerd[1550]: 2025-07-11 00:24:03.005 [INFO][4617] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" HandleID="k8s-pod-network.6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" Workload="localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041f160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6cc5d4c775-jnkmt", "timestamp":"2025-07-11 00:24:03.005166011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:24:03.232073 containerd[1550]: 2025-07-11 00:24:03.005 [INFO][4617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:03.232073 containerd[1550]: 2025-07-11 00:24:03.005 [INFO][4617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:24:03.232073 containerd[1550]: 2025-07-11 00:24:03.005 [INFO][4617] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:24:03.232073 containerd[1550]: 2025-07-11 00:24:03.015 [INFO][4617] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" host="localhost" Jul 11 00:24:03.232073 containerd[1550]: 2025-07-11 00:24:03.022 [INFO][4617] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:24:03.232073 containerd[1550]: 2025-07-11 00:24:03.030 [INFO][4617] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:24:03.232073 containerd[1550]: 2025-07-11 00:24:03.033 [INFO][4617] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:03.232073 containerd[1550]: 2025-07-11 00:24:03.036 [INFO][4617] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:03.232073 containerd[1550]: 2025-07-11 00:24:03.036 [INFO][4617] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" host="localhost" Jul 11 00:24:03.232553 containerd[1550]: 2025-07-11 00:24:03.038 [INFO][4617] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71 Jul 11 00:24:03.232553 containerd[1550]: 2025-07-11 00:24:03.045 [INFO][4617] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" host="localhost" Jul 11 00:24:03.232553 containerd[1550]: 2025-07-11 00:24:03.054 [INFO][4617] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" host="localhost" Jul 11 00:24:03.232553 containerd[1550]: 2025-07-11 00:24:03.054 [INFO][4617] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" host="localhost" Jul 11 00:24:03.232553 containerd[1550]: 2025-07-11 00:24:03.054 [INFO][4617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:03.232553 containerd[1550]: 2025-07-11 00:24:03.054 [INFO][4617] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" HandleID="k8s-pod-network.6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" Workload="localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0" Jul 11 00:24:03.232697 containerd[1550]: 2025-07-11 00:24:03.059 [INFO][4602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-jnkmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0", GenerateName:"calico-apiserver-6cc5d4c775-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d47097b-30e2-4e83-bfe1-30bdae2e8116", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc5d4c775", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cc5d4c775-jnkmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali770dffe0b59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:03.232758 containerd[1550]: 2025-07-11 00:24:03.059 [INFO][4602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-jnkmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0" Jul 11 00:24:03.232758 containerd[1550]: 2025-07-11 00:24:03.059 [INFO][4602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali770dffe0b59 ContainerID="6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-jnkmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0" Jul 11 00:24:03.232758 containerd[1550]: 2025-07-11 00:24:03.064 [INFO][4602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-jnkmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0" Jul 11 00:24:03.232825 containerd[1550]: 2025-07-11 00:24:03.064 [INFO][4602] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-jnkmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0", GenerateName:"calico-apiserver-6cc5d4c775-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d47097b-30e2-4e83-bfe1-30bdae2e8116", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc5d4c775", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71", Pod:"calico-apiserver-6cc5d4c775-jnkmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali770dffe0b59", MAC:"8a:a6:67:91:36:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:03.232875 containerd[1550]: 2025-07-11 00:24:03.227 [INFO][4602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-jnkmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--jnkmt-eth0" Jul 11 00:24:03.354371 containerd[1550]: time="2025-07-11T00:24:03.353788081Z" level=info msg="connecting to shim 6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71" address="unix:///run/containerd/s/91d9da2a6752746108fb323b3b52b87b227f62badbcb1207e0d60d21bbc40200" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:24:03.388697 systemd[1]: Started cri-containerd-6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71.scope - libcontainer container 6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71. Jul 11 00:24:03.407721 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:24:03.449097 containerd[1550]: time="2025-07-11T00:24:03.449040484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc5d4c775-jnkmt,Uid:4d47097b-30e2-4e83-bfe1-30bdae2e8116,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71\"" Jul 11 00:24:03.451576 containerd[1550]: time="2025-07-11T00:24:03.451513107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:24:03.926367 kubelet[2707]: E0711 00:24:03.925846 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:03.926981 containerd[1550]: time="2025-07-11T00:24:03.926243424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbmkq,Uid:8c6e936e-c0ab-46d4-ab44-49e09c4576a1,Namespace:kube-system,Attempt:0,}" Jul 11 00:24:03.926981 containerd[1550]: time="2025-07-11T00:24:03.926388152Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-xb2lq,Uid:1421f787-a419-4d08-9ae2-a92e4a3e603a,Namespace:calico-system,Attempt:0,}" Jul 11 00:24:03.926981 containerd[1550]: time="2025-07-11T00:24:03.926464203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc5d4c775-dmmqk,Uid:5feacf03-5a7f-49a2-9aad-9dd21cd054c6,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:24:04.391582 systemd-networkd[1466]: cali770dffe0b59: Gained IPv6LL Jul 11 00:24:04.601373 systemd-networkd[1466]: cali47ebace32b3: Link UP Jul 11 00:24:04.601946 systemd-networkd[1466]: cali47ebace32b3: Gained carrier Jul 11 00:24:04.623117 containerd[1550]: 2025-07-11 00:24:04.503 [INFO][4714] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0 goldmane-768f4c5c69- calico-system 1421f787-a419-4d08-9ae2-a92e4a3e603a 846 0 2025-07-11 00:23:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-xb2lq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali47ebace32b3 [] [] }} ContainerID="8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" Namespace="calico-system" Pod="goldmane-768f4c5c69-xb2lq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xb2lq-" Jul 11 00:24:04.623117 containerd[1550]: 2025-07-11 00:24:04.503 [INFO][4714] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" Namespace="calico-system" Pod="goldmane-768f4c5c69-xb2lq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0" Jul 11 00:24:04.623117 containerd[1550]: 2025-07-11 00:24:04.539 [INFO][4739] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" HandleID="k8s-pod-network.8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" Workload="localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0" Jul 11 00:24:04.624029 containerd[1550]: 2025-07-11 00:24:04.540 [INFO][4739] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" HandleID="k8s-pod-network.8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" Workload="localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019f480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-xb2lq", "timestamp":"2025-07-11 00:24:04.539918267 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:24:04.624029 containerd[1550]: 2025-07-11 00:24:04.540 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:04.624029 containerd[1550]: 2025-07-11 00:24:04.540 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:24:04.624029 containerd[1550]: 2025-07-11 00:24:04.540 [INFO][4739] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:24:04.624029 containerd[1550]: 2025-07-11 00:24:04.551 [INFO][4739] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" host="localhost" Jul 11 00:24:04.624029 containerd[1550]: 2025-07-11 00:24:04.559 [INFO][4739] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:24:04.624029 containerd[1550]: 2025-07-11 00:24:04.566 [INFO][4739] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:24:04.624029 containerd[1550]: 2025-07-11 00:24:04.569 [INFO][4739] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:04.624029 containerd[1550]: 2025-07-11 00:24:04.573 [INFO][4739] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:04.624029 containerd[1550]: 2025-07-11 00:24:04.573 [INFO][4739] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" host="localhost" Jul 11 00:24:04.624374 containerd[1550]: 2025-07-11 00:24:04.576 [INFO][4739] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133 Jul 11 00:24:04.624374 containerd[1550]: 2025-07-11 00:24:04.581 [INFO][4739] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" host="localhost" Jul 11 00:24:04.624374 containerd[1550]: 2025-07-11 00:24:04.592 [INFO][4739] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" host="localhost" Jul 11 00:24:04.624374 containerd[1550]: 2025-07-11 00:24:04.592 [INFO][4739] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" host="localhost" Jul 11 00:24:04.624374 containerd[1550]: 2025-07-11 00:24:04.592 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:04.624374 containerd[1550]: 2025-07-11 00:24:04.592 [INFO][4739] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" HandleID="k8s-pod-network.8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" Workload="localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0" Jul 11 00:24:04.624529 containerd[1550]: 2025-07-11 00:24:04.596 [INFO][4714] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" Namespace="calico-system" Pod="goldmane-768f4c5c69-xb2lq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1421f787-a419-4d08-9ae2-a92e4a3e603a", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-xb2lq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali47ebace32b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:04.624529 containerd[1550]: 2025-07-11 00:24:04.596 [INFO][4714] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" Namespace="calico-system" Pod="goldmane-768f4c5c69-xb2lq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0" Jul 11 00:24:04.624631 containerd[1550]: 2025-07-11 00:24:04.596 [INFO][4714] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47ebace32b3 ContainerID="8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" Namespace="calico-system" Pod="goldmane-768f4c5c69-xb2lq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0" Jul 11 00:24:04.624631 containerd[1550]: 2025-07-11 00:24:04.602 [INFO][4714] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" Namespace="calico-system" Pod="goldmane-768f4c5c69-xb2lq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0" Jul 11 00:24:04.624681 containerd[1550]: 2025-07-11 00:24:04.603 [INFO][4714] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" Namespace="calico-system" Pod="goldmane-768f4c5c69-xb2lq" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1421f787-a419-4d08-9ae2-a92e4a3e603a", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133", Pod:"goldmane-768f4c5c69-xb2lq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali47ebace32b3", MAC:"66:44:e3:5c:24:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:04.624812 containerd[1550]: 2025-07-11 00:24:04.618 [INFO][4714] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" Namespace="calico-system" Pod="goldmane-768f4c5c69-xb2lq" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xb2lq-eth0" Jul 11 00:24:04.925855 containerd[1550]: time="2025-07-11T00:24:04.925803711Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-cwbxt,Uid:d158cd74-abdf-48f8-9025-ccae8e128169,Namespace:calico-system,Attempt:0,}" Jul 11 00:24:05.023630 systemd-networkd[1466]: cali24f7e10284e: Link UP Jul 11 00:24:05.023855 systemd-networkd[1466]: cali24f7e10284e: Gained carrier Jul 11 00:24:05.335319 containerd[1550]: 2025-07-11 00:24:04.505 [INFO][4703] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0 calico-apiserver-6cc5d4c775- calico-apiserver 5feacf03-5a7f-49a2-9aad-9dd21cd054c6 845 0 2025-07-11 00:23:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cc5d4c775 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cc5d4c775-dmmqk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali24f7e10284e [] [] }} ContainerID="0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-dmmqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-" Jul 11 00:24:05.335319 containerd[1550]: 2025-07-11 00:24:04.505 [INFO][4703] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-dmmqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0" Jul 11 00:24:05.335319 containerd[1550]: 2025-07-11 00:24:04.544 [INFO][4741] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" HandleID="k8s-pod-network.0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" 
Workload="localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0" Jul 11 00:24:05.335994 containerd[1550]: 2025-07-11 00:24:04.544 [INFO][4741] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" HandleID="k8s-pod-network.0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" Workload="localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6cc5d4c775-dmmqk", "timestamp":"2025-07-11 00:24:04.544193804 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:24:05.335994 containerd[1550]: 2025-07-11 00:24:04.544 [INFO][4741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:05.335994 containerd[1550]: 2025-07-11 00:24:04.592 [INFO][4741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:24:05.335994 containerd[1550]: 2025-07-11 00:24:04.592 [INFO][4741] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:24:05.335994 containerd[1550]: 2025-07-11 00:24:04.651 [INFO][4741] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" host="localhost" Jul 11 00:24:05.335994 containerd[1550]: 2025-07-11 00:24:04.744 [INFO][4741] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:24:05.335994 containerd[1550]: 2025-07-11 00:24:04.750 [INFO][4741] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:24:05.335994 containerd[1550]: 2025-07-11 00:24:04.752 [INFO][4741] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:05.335994 containerd[1550]: 2025-07-11 00:24:04.755 [INFO][4741] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:05.335994 containerd[1550]: 2025-07-11 00:24:04.755 [INFO][4741] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" host="localhost" Jul 11 00:24:05.336306 containerd[1550]: 2025-07-11 00:24:04.756 [INFO][4741] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3 Jul 11 00:24:05.336306 containerd[1550]: 2025-07-11 00:24:04.781 [INFO][4741] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" host="localhost" Jul 11 00:24:05.336306 containerd[1550]: 2025-07-11 00:24:05.016 [INFO][4741] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" host="localhost" Jul 11 00:24:05.336306 containerd[1550]: 2025-07-11 00:24:05.016 [INFO][4741] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" host="localhost" Jul 11 00:24:05.336306 containerd[1550]: 2025-07-11 00:24:05.016 [INFO][4741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:05.336306 containerd[1550]: 2025-07-11 00:24:05.016 [INFO][4741] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" HandleID="k8s-pod-network.0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" Workload="localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0" Jul 11 00:24:05.336624 containerd[1550]: 2025-07-11 00:24:05.020 [INFO][4703] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-dmmqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0", GenerateName:"calico-apiserver-6cc5d4c775-", Namespace:"calico-apiserver", SelfLink:"", UID:"5feacf03-5a7f-49a2-9aad-9dd21cd054c6", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc5d4c775", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cc5d4c775-dmmqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali24f7e10284e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:05.336705 containerd[1550]: 2025-07-11 00:24:05.020 [INFO][4703] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-dmmqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0" Jul 11 00:24:05.336705 containerd[1550]: 2025-07-11 00:24:05.020 [INFO][4703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24f7e10284e ContainerID="0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-dmmqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0" Jul 11 00:24:05.336705 containerd[1550]: 2025-07-11 00:24:05.023 [INFO][4703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-dmmqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0" Jul 11 00:24:05.336799 containerd[1550]: 2025-07-11 00:24:05.027 [INFO][4703] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-dmmqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0", GenerateName:"calico-apiserver-6cc5d4c775-", Namespace:"calico-apiserver", SelfLink:"", UID:"5feacf03-5a7f-49a2-9aad-9dd21cd054c6", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc5d4c775", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3", Pod:"calico-apiserver-6cc5d4c775-dmmqk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali24f7e10284e", MAC:"6a:72:93:09:e5:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:05.336871 containerd[1550]: 2025-07-11 00:24:05.331 [INFO][4703] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" Namespace="calico-apiserver" Pod="calico-apiserver-6cc5d4c775-dmmqk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc5d4c775--dmmqk-eth0" Jul 11 00:24:05.638982 systemd-networkd[1466]: cali62e2f6e30db: Link UP Jul 11 00:24:05.639427 systemd-networkd[1466]: cali62e2f6e30db: Gained carrier Jul 11 00:24:05.874157 containerd[1550]: time="2025-07-11T00:24:05.874089968Z" level=info msg="connecting to shim 8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133" address="unix:///run/containerd/s/12687ffed23ba96df3ef485cf13c33dcebb71afd017a9292f05a33617756d0b6" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:24:05.902549 systemd[1]: Started cri-containerd-8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133.scope - libcontainer container 8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133. Jul 11 00:24:05.915498 containerd[1550]: 2025-07-11 00:24:04.498 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0 coredns-674b8bbfcf- kube-system 8c6e936e-c0ab-46d4-ab44-49e09c4576a1 842 0 2025-07-11 00:23:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-pbmkq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali62e2f6e30db [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbmkq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pbmkq-" Jul 11 00:24:05.915498 containerd[1550]: 2025-07-11 00:24:04.498 [INFO][4690] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbmkq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0" Jul 11 00:24:05.915498 containerd[1550]: 2025-07-11 00:24:04.545 [INFO][4733] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" HandleID="k8s-pod-network.783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" Workload="localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0" Jul 11 00:24:05.916016 containerd[1550]: 2025-07-11 00:24:04.546 [INFO][4733] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" HandleID="k8s-pod-network.783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" Workload="localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001355f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-pbmkq", "timestamp":"2025-07-11 00:24:04.545779355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:24:05.916016 containerd[1550]: 2025-07-11 00:24:04.546 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:05.916016 containerd[1550]: 2025-07-11 00:24:05.016 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:24:05.916016 containerd[1550]: 2025-07-11 00:24:05.016 [INFO][4733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:24:05.916016 containerd[1550]: 2025-07-11 00:24:05.026 [INFO][4733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" host="localhost" Jul 11 00:24:05.916016 containerd[1550]: 2025-07-11 00:24:05.034 [INFO][4733] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:24:05.916016 containerd[1550]: 2025-07-11 00:24:05.389 [INFO][4733] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:24:05.916016 containerd[1550]: 2025-07-11 00:24:05.391 [INFO][4733] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:05.916016 containerd[1550]: 2025-07-11 00:24:05.394 [INFO][4733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:05.916016 containerd[1550]: 2025-07-11 00:24:05.394 [INFO][4733] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" host="localhost" Jul 11 00:24:05.916322 containerd[1550]: 2025-07-11 00:24:05.396 [INFO][4733] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e Jul 11 00:24:05.916322 containerd[1550]: 2025-07-11 00:24:05.546 [INFO][4733] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" host="localhost" Jul 11 00:24:05.916322 containerd[1550]: 2025-07-11 00:24:05.632 [INFO][4733] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" host="localhost" Jul 11 00:24:05.916322 containerd[1550]: 2025-07-11 00:24:05.632 [INFO][4733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" host="localhost" Jul 11 00:24:05.916322 containerd[1550]: 2025-07-11 00:24:05.632 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:05.916322 containerd[1550]: 2025-07-11 00:24:05.632 [INFO][4733] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" HandleID="k8s-pod-network.783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" Workload="localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0" Jul 11 00:24:05.916552 containerd[1550]: 2025-07-11 00:24:05.635 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbmkq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8c6e936e-c0ab-46d4-ab44-49e09c4576a1", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-pbmkq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62e2f6e30db", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:05.916661 containerd[1550]: 2025-07-11 00:24:05.635 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbmkq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0" Jul 11 00:24:05.916661 containerd[1550]: 2025-07-11 00:24:05.635 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62e2f6e30db ContainerID="783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbmkq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0" Jul 11 00:24:05.916661 containerd[1550]: 2025-07-11 00:24:05.639 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbmkq" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0" Jul 11 00:24:05.916739 containerd[1550]: 2025-07-11 00:24:05.640 [INFO][4690] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbmkq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8c6e936e-c0ab-46d4-ab44-49e09c4576a1", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e", Pod:"coredns-674b8bbfcf-pbmkq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62e2f6e30db", MAC:"d2:ed:47:81:a2:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:05.916739 containerd[1550]: 2025-07-11 00:24:05.908 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbmkq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pbmkq-eth0" Jul 11 00:24:05.932834 kubelet[2707]: E0711 00:24:05.932740 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:05.934068 containerd[1550]: time="2025-07-11T00:24:05.933880917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-frxnl,Uid:0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8,Namespace:kube-system,Attempt:0,}" Jul 11 00:24:05.934385 containerd[1550]: time="2025-07-11T00:24:05.933982175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-687ddfff9-hskm4,Uid:8c8df196-6cec-44ee-8ef2-38e60eef6990,Namespace:calico-system,Attempt:0,}" Jul 11 00:24:05.943564 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:24:06.379771 systemd-networkd[1466]: calif188967cb1e: Link UP Jul 11 00:24:06.380028 systemd-networkd[1466]: calif188967cb1e: Gained carrier Jul 11 00:24:06.439542 systemd-networkd[1466]: cali24f7e10284e: Gained IPv6LL Jul 11 00:24:06.499302 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:51646.service - OpenSSH per-connection server daemon (10.0.0.1:51646). 
Jul 11 00:24:06.559220 sshd[4904]: Accepted publickey for core from 10.0.0.1 port 51646 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:06.560846 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:06.565698 systemd-logind[1525]: New session 11 of user core. Jul 11 00:24:06.567538 systemd-networkd[1466]: cali47ebace32b3: Gained IPv6LL Jul 11 00:24:06.573484 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:24:06.640454 containerd[1550]: time="2025-07-11T00:24:06.640082014Z" level=info msg="connecting to shim 0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3" address="unix:///run/containerd/s/73e3a2fe2f6d7e0d638a03f7d491bd3f72778c61f9c52e26f2063718dc2029ee" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:24:06.675494 systemd[1]: Started cri-containerd-0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3.scope - libcontainer container 0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3. 
Jul 11 00:24:06.693277 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:24:06.705719 containerd[1550]: time="2025-07-11T00:24:06.705672863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xb2lq,Uid:1421f787-a419-4d08-9ae2-a92e4a3e603a,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133\"" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:05.913 [INFO][4781] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cwbxt-eth0 csi-node-driver- calico-system d158cd74-abdf-48f8-9025-ccae8e128169 733 0 2025-07-11 00:23:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cwbxt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif188967cb1e [] [] }} ContainerID="9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" Namespace="calico-system" Pod="csi-node-driver-cwbxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cwbxt-" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:05.913 [INFO][4781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" Namespace="calico-system" Pod="csi-node-driver-cwbxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cwbxt-eth0" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:05.959 [INFO][4842] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" 
HandleID="k8s-pod-network.9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" Workload="localhost-k8s-csi--node--driver--cwbxt-eth0" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:05.960 [INFO][4842] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" HandleID="k8s-pod-network.9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" Workload="localhost-k8s-csi--node--driver--cwbxt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cc700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cwbxt", "timestamp":"2025-07-11 00:24:05.959803602 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:05.960 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:05.960 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:05.960 [INFO][4842] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:05.970 [INFO][4842] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" host="localhost" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:05.981 [INFO][4842] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:05.999 [INFO][4842] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:06.002 [INFO][4842] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:06.006 [INFO][4842] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:06.006 [INFO][4842] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" host="localhost" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:06.069 [INFO][4842] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:06.275 [INFO][4842] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" host="localhost" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:06.372 [INFO][4842] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" host="localhost" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:06.372 [INFO][4842] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" host="localhost" Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:06.373 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:06.732407 containerd[1550]: 2025-07-11 00:24:06.373 [INFO][4842] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" HandleID="k8s-pod-network.9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" Workload="localhost-k8s-csi--node--driver--cwbxt-eth0" Jul 11 00:24:06.733153 containerd[1550]: 2025-07-11 00:24:06.375 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" Namespace="calico-system" Pod="csi-node-driver-cwbxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cwbxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cwbxt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d158cd74-abdf-48f8-9025-ccae8e128169", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cwbxt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif188967cb1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:06.733153 containerd[1550]: 2025-07-11 00:24:06.375 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" Namespace="calico-system" Pod="csi-node-driver-cwbxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cwbxt-eth0" Jul 11 00:24:06.733153 containerd[1550]: 2025-07-11 00:24:06.375 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif188967cb1e ContainerID="9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" Namespace="calico-system" Pod="csi-node-driver-cwbxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cwbxt-eth0" Jul 11 00:24:06.733153 containerd[1550]: 2025-07-11 00:24:06.380 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" Namespace="calico-system" Pod="csi-node-driver-cwbxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cwbxt-eth0" Jul 11 00:24:06.733153 containerd[1550]: 2025-07-11 00:24:06.382 [INFO][4781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" 
Namespace="calico-system" Pod="csi-node-driver-cwbxt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cwbxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cwbxt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d158cd74-abdf-48f8-9025-ccae8e128169", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a", Pod:"csi-node-driver-cwbxt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif188967cb1e", MAC:"aa:29:41:96:5e:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:06.733153 containerd[1550]: 2025-07-11 00:24:06.726 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" Namespace="calico-system" Pod="csi-node-driver-cwbxt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cwbxt-eth0" Jul 11 00:24:06.798461 containerd[1550]: time="2025-07-11T00:24:06.798274254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc5d4c775-dmmqk,Uid:5feacf03-5a7f-49a2-9aad-9dd21cd054c6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3\"" Jul 11 00:24:06.802420 sshd[4915]: Connection closed by 10.0.0.1 port 51646 Jul 11 00:24:06.803667 sshd-session[4904]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:06.814471 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:51646.service: Deactivated successfully. Jul 11 00:24:06.817091 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:24:06.817988 systemd-logind[1525]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:24:06.822491 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:51654.service - OpenSSH per-connection server daemon (10.0.0.1:51654). Jul 11 00:24:06.823346 systemd-logind[1525]: Removed session 11. Jul 11 00:24:06.860909 systemd-networkd[1466]: calie23efd73cd3: Link UP Jul 11 00:24:06.864011 systemd-networkd[1466]: calie23efd73cd3: Gained carrier Jul 11 00:24:06.881893 containerd[1550]: time="2025-07-11T00:24:06.881809250Z" level=info msg="connecting to shim 783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e" address="unix:///run/containerd/s/ba0d68aa2014e9fad4b782a0e32a75e685757efc5b88eb6abc2fb3babfc6c27c" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:24:06.919625 systemd[1]: Started cri-containerd-783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e.scope - libcontainer container 783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e. 
Jul 11 00:24:06.925546 sshd[4984]: Accepted publickey for core from 10.0.0.1 port 51654 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:06.928043 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:06.934416 systemd-logind[1525]: New session 12 of user core. Jul 11 00:24:06.945758 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:24:06.951625 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.274 [INFO][4855] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0 calico-kube-controllers-687ddfff9- calico-system 8c8df196-6cec-44ee-8ef2-38e60eef6990 844 0 2025-07-11 00:23:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:687ddfff9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-687ddfff9-hskm4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie23efd73cd3 [] [] }} ContainerID="18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" Namespace="calico-system" Pod="calico-kube-controllers-687ddfff9-hskm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-" Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.274 [INFO][4855] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" Namespace="calico-system" Pod="calico-kube-controllers-687ddfff9-hskm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0" Jul 11 
00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.303 [INFO][4893] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" HandleID="k8s-pod-network.18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" Workload="localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0" Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.304 [INFO][4893] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" HandleID="k8s-pod-network.18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" Workload="localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001394f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-687ddfff9-hskm4", "timestamp":"2025-07-11 00:24:06.303978162 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.304 [INFO][4893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.373 [INFO][4893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.373 [INFO][4893] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.788 [INFO][4893] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" host="localhost" Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.796 [INFO][4893] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.805 [INFO][4893] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.808 [INFO][4893] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.811 [INFO][4893] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.811 [INFO][4893] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" host="localhost" Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.813 [INFO][4893] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8 Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.831 [INFO][4893] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" host="localhost" Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.846 [INFO][4893] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" host="localhost" Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.846 [INFO][4893] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" host="localhost" Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.846 [INFO][4893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:07.007231 containerd[1550]: 2025-07-11 00:24:06.846 [INFO][4893] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" HandleID="k8s-pod-network.18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" Workload="localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0" Jul 11 00:24:07.008016 containerd[1550]: 2025-07-11 00:24:06.852 [INFO][4855] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" Namespace="calico-system" Pod="calico-kube-controllers-687ddfff9-hskm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0", GenerateName:"calico-kube-controllers-687ddfff9-", Namespace:"calico-system", SelfLink:"", UID:"8c8df196-6cec-44ee-8ef2-38e60eef6990", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"687ddfff9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-687ddfff9-hskm4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie23efd73cd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:07.008016 containerd[1550]: 2025-07-11 00:24:06.853 [INFO][4855] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" Namespace="calico-system" Pod="calico-kube-controllers-687ddfff9-hskm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0" Jul 11 00:24:07.008016 containerd[1550]: 2025-07-11 00:24:06.854 [INFO][4855] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie23efd73cd3 ContainerID="18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" Namespace="calico-system" Pod="calico-kube-controllers-687ddfff9-hskm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0" Jul 11 00:24:07.008016 containerd[1550]: 2025-07-11 00:24:06.866 [INFO][4855] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" Namespace="calico-system" Pod="calico-kube-controllers-687ddfff9-hskm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0" Jul 11 00:24:07.008016 containerd[1550]: 2025-07-11 
00:24:06.869 [INFO][4855] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" Namespace="calico-system" Pod="calico-kube-controllers-687ddfff9-hskm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0", GenerateName:"calico-kube-controllers-687ddfff9-", Namespace:"calico-system", SelfLink:"", UID:"8c8df196-6cec-44ee-8ef2-38e60eef6990", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"687ddfff9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8", Pod:"calico-kube-controllers-687ddfff9-hskm4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie23efd73cd3", MAC:"6a:c5:a5:5e:f2:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:07.008016 containerd[1550]: 2025-07-11 
00:24:07.001 [INFO][4855] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" Namespace="calico-system" Pod="calico-kube-controllers-687ddfff9-hskm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--687ddfff9--hskm4-eth0" Jul 11 00:24:07.009744 containerd[1550]: time="2025-07-11T00:24:07.009675603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbmkq,Uid:8c6e936e-c0ab-46d4-ab44-49e09c4576a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e\"" Jul 11 00:24:07.011916 kubelet[2707]: E0711 00:24:07.011875 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:07.031644 containerd[1550]: time="2025-07-11T00:24:07.031552963Z" level=info msg="CreateContainer within sandbox \"783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:24:07.047745 containerd[1550]: time="2025-07-11T00:24:07.046908796Z" level=info msg="connecting to shim 9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a" address="unix:///run/containerd/s/0431225d5d6751130fc8de5dd7d371356b063af2355417a728246f17cf71582b" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:24:07.084488 systemd-networkd[1466]: cali62e2f6e30db: Gained IPv6LL Jul 11 00:24:07.090250 systemd-networkd[1466]: cali044d24c1d49: Link UP Jul 11 00:24:07.093089 systemd-networkd[1466]: cali044d24c1d49: Gained carrier Jul 11 00:24:07.104544 containerd[1550]: time="2025-07-11T00:24:07.104467367Z" level=info msg="Container 84721da2e74ffd8bb29d1f8c4e2ffd36c8cb8b9ac0897d7baa0628d9ffeec320: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:07.107591 containerd[1550]: time="2025-07-11T00:24:07.107501498Z" level=info 
msg="connecting to shim 18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8" address="unix:///run/containerd/s/a5dfc9f68b51bad44ea3138b304731a84c918cd5485835c4e07a832d4cf05752" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:24:07.110641 systemd[1]: Started cri-containerd-9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a.scope - libcontainer container 9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a. Jul 11 00:24:07.121292 containerd[1550]: time="2025-07-11T00:24:07.121165151Z" level=info msg="CreateContainer within sandbox \"783c6ea2486fa459d9c0f69e17595d3231f05324e2182bf31833be94200b081e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84721da2e74ffd8bb29d1f8c4e2ffd36c8cb8b9ac0897d7baa0628d9ffeec320\"" Jul 11 00:24:07.122366 containerd[1550]: time="2025-07-11T00:24:07.122305668Z" level=info msg="StartContainer for \"84721da2e74ffd8bb29d1f8c4e2ffd36c8cb8b9ac0897d7baa0628d9ffeec320\"" Jul 11 00:24:07.123647 containerd[1550]: time="2025-07-11T00:24:07.123499946Z" level=info msg="connecting to shim 84721da2e74ffd8bb29d1f8c4e2ffd36c8cb8b9ac0897d7baa0628d9ffeec320" address="unix:///run/containerd/s/ba0d68aa2014e9fad4b782a0e32a75e685757efc5b88eb6abc2fb3babfc6c27c" protocol=ttrpc version=3 Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:06.298 [INFO][4865] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--frxnl-eth0 coredns-674b8bbfcf- kube-system 0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8 838 0 2025-07-11 00:23:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-frxnl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali044d24c1d49 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" Namespace="kube-system" Pod="coredns-674b8bbfcf-frxnl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--frxnl-" Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:06.298 [INFO][4865] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" Namespace="kube-system" Pod="coredns-674b8bbfcf-frxnl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--frxnl-eth0" Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:06.525 [INFO][4906] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" HandleID="k8s-pod-network.2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" Workload="localhost-k8s-coredns--674b8bbfcf--frxnl-eth0" Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:06.526 [INFO][4906] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" HandleID="k8s-pod-network.2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" Workload="localhost-k8s-coredns--674b8bbfcf--frxnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032c190), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-frxnl", "timestamp":"2025-07-11 00:24:06.525876345 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:06.526 [INFO][4906] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:06.846 [INFO][4906] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:06.847 [INFO][4906] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:06.859 [INFO][4906] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" host="localhost" Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:06.997 [INFO][4906] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:07.009 [INFO][4906] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:07.016 [INFO][4906] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:07.025 [INFO][4906] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:07.025 [INFO][4906] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" host="localhost" Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:07.034 [INFO][4906] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:07.046 [INFO][4906] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" host="localhost" Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:07.065 [INFO][4906] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" host="localhost" Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:07.065 [INFO][4906] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" host="localhost" Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:07.065 [INFO][4906] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:07.145892 containerd[1550]: 2025-07-11 00:24:07.065 [INFO][4906] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" HandleID="k8s-pod-network.2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" Workload="localhost-k8s-coredns--674b8bbfcf--frxnl-eth0" Jul 11 00:24:07.146661 containerd[1550]: 2025-07-11 00:24:07.072 [INFO][4865] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" Namespace="kube-system" Pod="coredns-674b8bbfcf-frxnl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--frxnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--frxnl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-frxnl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali044d24c1d49", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:07.146661 containerd[1550]: 2025-07-11 00:24:07.072 [INFO][4865] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" Namespace="kube-system" Pod="coredns-674b8bbfcf-frxnl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--frxnl-eth0" Jul 11 00:24:07.146661 containerd[1550]: 2025-07-11 00:24:07.072 [INFO][4865] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali044d24c1d49 ContainerID="2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" Namespace="kube-system" Pod="coredns-674b8bbfcf-frxnl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--frxnl-eth0" Jul 11 00:24:07.146661 containerd[1550]: 2025-07-11 00:24:07.093 [INFO][4865] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" Namespace="kube-system" Pod="coredns-674b8bbfcf-frxnl" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--frxnl-eth0" Jul 11 00:24:07.146661 containerd[1550]: 2025-07-11 00:24:07.094 [INFO][4865] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" Namespace="kube-system" Pod="coredns-674b8bbfcf-frxnl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--frxnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--frxnl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f", Pod:"coredns-674b8bbfcf-frxnl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali044d24c1d49", MAC:"56:7a:d1:55:3b:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:07.146661 containerd[1550]: 2025-07-11 00:24:07.118 [INFO][4865] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" Namespace="kube-system" Pod="coredns-674b8bbfcf-frxnl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--frxnl-eth0" Jul 11 00:24:07.157646 systemd[1]: Started cri-containerd-18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8.scope - libcontainer container 18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8. Jul 11 00:24:07.170970 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:24:07.175861 systemd[1]: Started cri-containerd-84721da2e74ffd8bb29d1f8c4e2ffd36c8cb8b9ac0897d7baa0628d9ffeec320.scope - libcontainer container 84721da2e74ffd8bb29d1f8c4e2ffd36c8cb8b9ac0897d7baa0628d9ffeec320. Jul 11 00:24:07.203365 sshd[5026]: Connection closed by 10.0.0.1 port 51654 Jul 11 00:24:07.204458 sshd-session[4984]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:07.213370 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:24:07.218606 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:51654.service: Deactivated successfully. Jul 11 00:24:07.223264 systemd[1]: session-12.scope: Deactivated successfully. 
Jul 11 00:24:07.226230 containerd[1550]: time="2025-07-11T00:24:07.226185702Z" level=info msg="connecting to shim 2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f" address="unix:///run/containerd/s/1e23a35fb35e95e27c1df57248d56d1d0e66d4d7b13f193a3255469612f26019" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:24:07.228539 systemd-logind[1525]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:24:07.234196 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:51656.service - OpenSSH per-connection server daemon (10.0.0.1:51656). Jul 11 00:24:07.241354 systemd-logind[1525]: Removed session 12. Jul 11 00:24:07.285916 systemd[1]: Started cri-containerd-2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f.scope - libcontainer container 2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f. Jul 11 00:24:07.316458 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:24:07.463625 systemd-networkd[1466]: calif188967cb1e: Gained IPv6LL Jul 11 00:24:07.572735 sshd[5185]: Accepted publickey for core from 10.0.0.1 port 51656 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:07.574773 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:07.579881 systemd-logind[1525]: New session 13 of user core. Jul 11 00:24:07.588513 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:24:07.750419 sshd[5224]: Connection closed by 10.0.0.1 port 51656 Jul 11 00:24:07.750895 sshd-session[5185]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:07.754493 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:51656.service: Deactivated successfully. Jul 11 00:24:07.757470 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:24:07.760560 systemd-logind[1525]: Session 13 logged out. Waiting for processes to exit. 
Jul 11 00:24:07.761935 systemd-logind[1525]: Removed session 13. Jul 11 00:24:07.873766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3050616139.mount: Deactivated successfully. Jul 11 00:24:07.910039 containerd[1550]: time="2025-07-11T00:24:07.909959567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cwbxt,Uid:d158cd74-abdf-48f8-9025-ccae8e128169,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a\"" Jul 11 00:24:07.910902 containerd[1550]: time="2025-07-11T00:24:07.910864538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-687ddfff9-hskm4,Uid:8c8df196-6cec-44ee-8ef2-38e60eef6990,Namespace:calico-system,Attempt:0,} returns sandbox id \"18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8\"" Jul 11 00:24:07.912250 containerd[1550]: time="2025-07-11T00:24:07.912177154Z" level=info msg="StartContainer for \"84721da2e74ffd8bb29d1f8c4e2ffd36c8cb8b9ac0897d7baa0628d9ffeec320\" returns successfully" Jul 11 00:24:07.925821 containerd[1550]: time="2025-07-11T00:24:07.925763032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-frxnl,Uid:0a35ba07-09e7-4ff9-a5d7-62f1c6d02ff8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f\"" Jul 11 00:24:07.927213 kubelet[2707]: E0711 00:24:07.927182 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:07.936241 containerd[1550]: time="2025-07-11T00:24:07.936176532Z" level=info msg="CreateContainer within sandbox \"2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:24:07.994050 containerd[1550]: time="2025-07-11T00:24:07.993962395Z" level=info msg="Container 
5d6ee757d0bca2f68a6a82c9191a338bea64f2b9d81abcffe7c3f0b1553e3eca: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:08.012040 containerd[1550]: time="2025-07-11T00:24:08.011319659Z" level=info msg="CreateContainer within sandbox \"2eaf20c62b903ea1493a19d2e656de26b15d6afb3e0e109c674a0d3b7750507f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d6ee757d0bca2f68a6a82c9191a338bea64f2b9d81abcffe7c3f0b1553e3eca\"" Jul 11 00:24:08.012276 containerd[1550]: time="2025-07-11T00:24:08.012193341Z" level=info msg="StartContainer for \"5d6ee757d0bca2f68a6a82c9191a338bea64f2b9d81abcffe7c3f0b1553e3eca\"" Jul 11 00:24:08.014561 containerd[1550]: time="2025-07-11T00:24:08.013682226Z" level=info msg="connecting to shim 5d6ee757d0bca2f68a6a82c9191a338bea64f2b9d81abcffe7c3f0b1553e3eca" address="unix:///run/containerd/s/1e23a35fb35e95e27c1df57248d56d1d0e66d4d7b13f193a3255469612f26019" protocol=ttrpc version=3 Jul 11 00:24:08.049762 systemd[1]: Started cri-containerd-5d6ee757d0bca2f68a6a82c9191a338bea64f2b9d81abcffe7c3f0b1553e3eca.scope - libcontainer container 5d6ee757d0bca2f68a6a82c9191a338bea64f2b9d81abcffe7c3f0b1553e3eca. 
Jul 11 00:24:08.149623 containerd[1550]: time="2025-07-11T00:24:08.149563942Z" level=info msg="StartContainer for \"5d6ee757d0bca2f68a6a82c9191a338bea64f2b9d81abcffe7c3f0b1553e3eca\" returns successfully" Jul 11 00:24:08.215221 kubelet[2707]: E0711 00:24:08.215152 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:08.229832 kubelet[2707]: E0711 00:24:08.229803 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:08.294775 kubelet[2707]: I0711 00:24:08.294521 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-frxnl" podStartSLOduration=60.293939189 podStartE2EDuration="1m0.293939189s" podCreationTimestamp="2025-07-11 00:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:24:08.270109091 +0000 UTC m=+68.435304783" watchObservedRunningTime="2025-07-11 00:24:08.293939189 +0000 UTC m=+68.459134871" Jul 11 00:24:08.653888 containerd[1550]: time="2025-07-11T00:24:08.653726003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:08.654575 containerd[1550]: time="2025-07-11T00:24:08.654518845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 11 00:24:08.655887 containerd[1550]: time="2025-07-11T00:24:08.655849907Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:08.658869 containerd[1550]: time="2025-07-11T00:24:08.658833006Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:08.659774 containerd[1550]: time="2025-07-11T00:24:08.659721506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.208146103s" Jul 11 00:24:08.659827 containerd[1550]: time="2025-07-11T00:24:08.659771228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:24:08.661170 containerd[1550]: time="2025-07-11T00:24:08.661057056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 00:24:08.665758 containerd[1550]: time="2025-07-11T00:24:08.665702311Z" level=info msg="CreateContainer within sandbox \"6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:24:08.679645 systemd-networkd[1466]: calie23efd73cd3: Gained IPv6LL Jul 11 00:24:08.691776 containerd[1550]: time="2025-07-11T00:24:08.691604195Z" level=info msg="Container 078cd33dc3d6da4610253c2a99086bbe4d3bf2e91f1398b66898c608e7f10c3b: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:08.703120 containerd[1550]: time="2025-07-11T00:24:08.703054947Z" level=info msg="CreateContainer within sandbox \"6ba0b4b32e5083d30d99c09d9f90ba776df332ac4a960a7353786d64912d6a71\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"078cd33dc3d6da4610253c2a99086bbe4d3bf2e91f1398b66898c608e7f10c3b\"" Jul 11 00:24:08.703907 
containerd[1550]: time="2025-07-11T00:24:08.703836058Z" level=info msg="StartContainer for \"078cd33dc3d6da4610253c2a99086bbe4d3bf2e91f1398b66898c608e7f10c3b\"" Jul 11 00:24:08.705640 containerd[1550]: time="2025-07-11T00:24:08.705584464Z" level=info msg="connecting to shim 078cd33dc3d6da4610253c2a99086bbe4d3bf2e91f1398b66898c608e7f10c3b" address="unix:///run/containerd/s/91d9da2a6752746108fb323b3b52b87b227f62badbcb1207e0d60d21bbc40200" protocol=ttrpc version=3 Jul 11 00:24:08.748564 systemd[1]: Started cri-containerd-078cd33dc3d6da4610253c2a99086bbe4d3bf2e91f1398b66898c608e7f10c3b.scope - libcontainer container 078cd33dc3d6da4610253c2a99086bbe4d3bf2e91f1398b66898c608e7f10c3b. Jul 11 00:24:08.809710 containerd[1550]: time="2025-07-11T00:24:08.809662297Z" level=info msg="StartContainer for \"078cd33dc3d6da4610253c2a99086bbe4d3bf2e91f1398b66898c608e7f10c3b\" returns successfully" Jul 11 00:24:08.873293 systemd-networkd[1466]: cali044d24c1d49: Gained IPv6LL Jul 11 00:24:09.237065 kubelet[2707]: E0711 00:24:09.236992 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:09.237751 kubelet[2707]: E0711 00:24:09.237674 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:09.255082 kubelet[2707]: I0711 00:24:09.254997 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pbmkq" podStartSLOduration=61.254978129 podStartE2EDuration="1m1.254978129s" podCreationTimestamp="2025-07-11 00:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:24:08.295961904 +0000 UTC m=+68.461157626" watchObservedRunningTime="2025-07-11 00:24:09.254978129 +0000 UTC 
m=+69.420173831" Jul 11 00:24:09.255387 kubelet[2707]: I0711 00:24:09.255114 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cc5d4c775-jnkmt" podStartSLOduration=44.045189809 podStartE2EDuration="49.255109263s" podCreationTimestamp="2025-07-11 00:23:20 +0000 UTC" firstStartedPulling="2025-07-11 00:24:03.450843795 +0000 UTC m=+63.616039488" lastFinishedPulling="2025-07-11 00:24:08.66076325 +0000 UTC m=+68.825958942" observedRunningTime="2025-07-11 00:24:09.253649922 +0000 UTC m=+69.418845614" watchObservedRunningTime="2025-07-11 00:24:09.255109263 +0000 UTC m=+69.420304965" Jul 11 00:24:10.245671 kubelet[2707]: I0711 00:24:10.245625 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:24:10.432530 kubelet[2707]: E0711 00:24:10.245764 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:10.432530 kubelet[2707]: E0711 00:24:10.246124 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:11.248010 kubelet[2707]: E0711 00:24:11.247968 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:11.925750 kubelet[2707]: E0711 00:24:11.925564 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:12.768213 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:57270.service - OpenSSH per-connection server daemon (10.0.0.1:57270). 
Jul 11 00:24:12.773020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039532956.mount: Deactivated successfully. Jul 11 00:24:12.844282 sshd[5345]: Accepted publickey for core from 10.0.0.1 port 57270 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:12.846590 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:12.853273 systemd-logind[1525]: New session 14 of user core. Jul 11 00:24:12.863603 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 00:24:13.051314 sshd[5349]: Connection closed by 10.0.0.1 port 57270 Jul 11 00:24:13.051749 sshd-session[5345]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:13.060546 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:57270.service: Deactivated successfully. Jul 11 00:24:13.065177 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:24:13.067214 systemd-logind[1525]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:24:13.070244 systemd-logind[1525]: Removed session 14. 
Jul 11 00:24:13.658526 containerd[1550]: time="2025-07-11T00:24:13.658411882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:13.660021 containerd[1550]: time="2025-07-11T00:24:13.659970100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 11 00:24:13.661259 containerd[1550]: time="2025-07-11T00:24:13.661226336Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:13.664130 containerd[1550]: time="2025-07-11T00:24:13.664072509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:13.664789 containerd[1550]: time="2025-07-11T00:24:13.664757883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.003669058s" Jul 11 00:24:13.664789 containerd[1550]: time="2025-07-11T00:24:13.664787758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 11 00:24:13.667275 containerd[1550]: time="2025-07-11T00:24:13.667231352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:24:13.675493 containerd[1550]: time="2025-07-11T00:24:13.675421833Z" level=info msg="CreateContainer within sandbox 
\"8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 00:24:13.693239 containerd[1550]: time="2025-07-11T00:24:13.693166229Z" level=info msg="Container 2dbd67f64e3f356d32b6bb947d6734c53c5a0dde310b6b5b8221cc994138a6cb: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:13.710060 containerd[1550]: time="2025-07-11T00:24:13.709990704Z" level=info msg="CreateContainer within sandbox \"8e6ecc1b0caefddf8f5bd4a21e3ebf6b64a2d9982ab8eada29d1f2c797483133\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2dbd67f64e3f356d32b6bb947d6734c53c5a0dde310b6b5b8221cc994138a6cb\"" Jul 11 00:24:13.711184 containerd[1550]: time="2025-07-11T00:24:13.711147516Z" level=info msg="StartContainer for \"2dbd67f64e3f356d32b6bb947d6734c53c5a0dde310b6b5b8221cc994138a6cb\"" Jul 11 00:24:13.712381 containerd[1550]: time="2025-07-11T00:24:13.712323933Z" level=info msg="connecting to shim 2dbd67f64e3f356d32b6bb947d6734c53c5a0dde310b6b5b8221cc994138a6cb" address="unix:///run/containerd/s/12687ffed23ba96df3ef485cf13c33dcebb71afd017a9292f05a33617756d0b6" protocol=ttrpc version=3 Jul 11 00:24:13.747534 systemd[1]: Started cri-containerd-2dbd67f64e3f356d32b6bb947d6734c53c5a0dde310b6b5b8221cc994138a6cb.scope - libcontainer container 2dbd67f64e3f356d32b6bb947d6734c53c5a0dde310b6b5b8221cc994138a6cb. 
Jul 11 00:24:13.810558 containerd[1550]: time="2025-07-11T00:24:13.810494936Z" level=info msg="StartContainer for \"2dbd67f64e3f356d32b6bb947d6734c53c5a0dde310b6b5b8221cc994138a6cb\" returns successfully" Jul 11 00:24:14.386212 containerd[1550]: time="2025-07-11T00:24:14.386165021Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2dbd67f64e3f356d32b6bb947d6734c53c5a0dde310b6b5b8221cc994138a6cb\" id:\"4ccde9233e173ad8df392ff175907938a6d2ae90df07f7f46475de56beea9c8f\" pid:5422 exit_status:1 exited_at:{seconds:1752193454 nanos:385650775}" Jul 11 00:24:14.531805 kubelet[2707]: I0711 00:24:14.531706 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-xb2lq" podStartSLOduration=45.572403112 podStartE2EDuration="52.53168375s" podCreationTimestamp="2025-07-11 00:23:22 +0000 UTC" firstStartedPulling="2025-07-11 00:24:06.707068764 +0000 UTC m=+66.872264456" lastFinishedPulling="2025-07-11 00:24:13.666349402 +0000 UTC m=+73.831545094" observedRunningTime="2025-07-11 00:24:14.530994248 +0000 UTC m=+74.696189940" watchObservedRunningTime="2025-07-11 00:24:14.53168375 +0000 UTC m=+74.696879442" Jul 11 00:24:15.367701 containerd[1550]: time="2025-07-11T00:24:15.367636968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2dbd67f64e3f356d32b6bb947d6734c53c5a0dde310b6b5b8221cc994138a6cb\" id:\"830f24115e46b9143f2f76641ce2f9da82a9a566e4c69b33f338b8dc54b1c751\" pid:5448 exit_status:1 exited_at:{seconds:1752193455 nanos:366985867}" Jul 11 00:24:16.153942 containerd[1550]: time="2025-07-11T00:24:16.153854730Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:16.261438 containerd[1550]: time="2025-07-11T00:24:16.261366947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 00:24:16.263826 containerd[1550]: 
time="2025-07-11T00:24:16.263759419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.596488905s" Jul 11 00:24:16.263826 containerd[1550]: time="2025-07-11T00:24:16.263808541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:24:16.264931 containerd[1550]: time="2025-07-11T00:24:16.264884603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:24:16.586603 containerd[1550]: time="2025-07-11T00:24:16.574784095Z" level=info msg="CreateContainer within sandbox \"0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:24:17.303724 containerd[1550]: time="2025-07-11T00:24:17.303532648Z" level=info msg="Container aded490bd18f03a8668f56dc7d7c4427dcb3ff5b28806fc184395bbe3f51d820: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:17.475354 containerd[1550]: time="2025-07-11T00:24:17.475265675Z" level=info msg="CreateContainer within sandbox \"0298cbf170dbb98a7976fdbd965c23807ee54cd1f7784a8b733a9c77a3775ea3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aded490bd18f03a8668f56dc7d7c4427dcb3ff5b28806fc184395bbe3f51d820\"" Jul 11 00:24:17.476015 containerd[1550]: time="2025-07-11T00:24:17.475959276Z" level=info msg="StartContainer for \"aded490bd18f03a8668f56dc7d7c4427dcb3ff5b28806fc184395bbe3f51d820\"" Jul 11 00:24:17.477416 containerd[1550]: time="2025-07-11T00:24:17.477324196Z" level=info msg="connecting to shim aded490bd18f03a8668f56dc7d7c4427dcb3ff5b28806fc184395bbe3f51d820" 
address="unix:///run/containerd/s/73e3a2fe2f6d7e0d638a03f7d491bd3f72778c61f9c52e26f2063718dc2029ee" protocol=ttrpc version=3 Jul 11 00:24:17.501595 systemd[1]: Started cri-containerd-aded490bd18f03a8668f56dc7d7c4427dcb3ff5b28806fc184395bbe3f51d820.scope - libcontainer container aded490bd18f03a8668f56dc7d7c4427dcb3ff5b28806fc184395bbe3f51d820. Jul 11 00:24:17.570999 containerd[1550]: time="2025-07-11T00:24:17.570848013Z" level=info msg="StartContainer for \"aded490bd18f03a8668f56dc7d7c4427dcb3ff5b28806fc184395bbe3f51d820\" returns successfully" Jul 11 00:24:18.071043 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:57280.service - OpenSSH per-connection server daemon (10.0.0.1:57280). Jul 11 00:24:18.158590 sshd[5501]: Accepted publickey for core from 10.0.0.1 port 57280 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:18.160377 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:18.169323 systemd-logind[1525]: New session 15 of user core. Jul 11 00:24:18.176871 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 11 00:24:18.298175 kubelet[2707]: I0711 00:24:18.298071 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cc5d4c775-dmmqk" podStartSLOduration=48.833516545 podStartE2EDuration="58.298049859s" podCreationTimestamp="2025-07-11 00:23:20 +0000 UTC" firstStartedPulling="2025-07-11 00:24:06.800177416 +0000 UTC m=+66.965373108" lastFinishedPulling="2025-07-11 00:24:16.26471073 +0000 UTC m=+76.429906422" observedRunningTime="2025-07-11 00:24:18.293591748 +0000 UTC m=+78.458787440" watchObservedRunningTime="2025-07-11 00:24:18.298049859 +0000 UTC m=+78.463245551" Jul 11 00:24:18.421377 sshd[5503]: Connection closed by 10.0.0.1 port 57280 Jul 11 00:24:18.422364 sshd-session[5501]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:18.428666 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:57280.service: Deactivated successfully. Jul 11 00:24:18.431394 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:24:18.433961 systemd-logind[1525]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:24:18.435972 systemd-logind[1525]: Removed session 15. 
Jul 11 00:24:18.583246 containerd[1550]: time="2025-07-11T00:24:18.583166283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:18.586936 containerd[1550]: time="2025-07-11T00:24:18.586863156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 11 00:24:18.589676 containerd[1550]: time="2025-07-11T00:24:18.589600172Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:18.593753 containerd[1550]: time="2025-07-11T00:24:18.593691170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:18.594551 containerd[1550]: time="2025-07-11T00:24:18.594446615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.329519624s" Jul 11 00:24:18.594551 containerd[1550]: time="2025-07-11T00:24:18.594503221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 11 00:24:18.596689 containerd[1550]: time="2025-07-11T00:24:18.596640380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 00:24:18.605138 containerd[1550]: time="2025-07-11T00:24:18.604989896Z" level=info msg="CreateContainer within sandbox \"9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:24:18.626393 containerd[1550]: time="2025-07-11T00:24:18.626308858Z" level=info msg="Container f57aae8351a9ba6b69efeb8d127b9e591e5e105d54f0d1428682e3ed6eaa58e7: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:18.641956 containerd[1550]: time="2025-07-11T00:24:18.641885220Z" level=info msg="CreateContainer within sandbox \"9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f57aae8351a9ba6b69efeb8d127b9e591e5e105d54f0d1428682e3ed6eaa58e7\"" Jul 11 00:24:18.643532 containerd[1550]: time="2025-07-11T00:24:18.643494616Z" level=info msg="StartContainer for \"f57aae8351a9ba6b69efeb8d127b9e591e5e105d54f0d1428682e3ed6eaa58e7\"" Jul 11 00:24:18.646027 containerd[1550]: time="2025-07-11T00:24:18.645975434Z" level=info msg="connecting to shim f57aae8351a9ba6b69efeb8d127b9e591e5e105d54f0d1428682e3ed6eaa58e7" address="unix:///run/containerd/s/0431225d5d6751130fc8de5dd7d371356b063af2355417a728246f17cf71582b" protocol=ttrpc version=3 Jul 11 00:24:18.673534 systemd[1]: Started cri-containerd-f57aae8351a9ba6b69efeb8d127b9e591e5e105d54f0d1428682e3ed6eaa58e7.scope - libcontainer container f57aae8351a9ba6b69efeb8d127b9e591e5e105d54f0d1428682e3ed6eaa58e7. 
Jul 11 00:24:18.733135 containerd[1550]: time="2025-07-11T00:24:18.733068349Z" level=info msg="StartContainer for \"f57aae8351a9ba6b69efeb8d127b9e591e5e105d54f0d1428682e3ed6eaa58e7\" returns successfully" Jul 11 00:24:22.864743 containerd[1550]: time="2025-07-11T00:24:22.864617816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:22.866558 containerd[1550]: time="2025-07-11T00:24:22.866494754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 11 00:24:22.869917 containerd[1550]: time="2025-07-11T00:24:22.869830790Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:22.872683 containerd[1550]: time="2025-07-11T00:24:22.872565236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:22.873593 containerd[1550]: time="2025-07-11T00:24:22.873508132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.276824904s" Jul 11 00:24:22.873593 containerd[1550]: time="2025-07-11T00:24:22.873575308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 11 00:24:22.876466 containerd[1550]: time="2025-07-11T00:24:22.876413767Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 00:24:22.901255 containerd[1550]: time="2025-07-11T00:24:22.901208050Z" level=info msg="CreateContainer within sandbox \"18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 00:24:22.914628 containerd[1550]: time="2025-07-11T00:24:22.914554601Z" level=info msg="Container ce7257723d21fe14ff85212cd485edce2669b963509a75ddf6bbc13145e47c3b: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:22.926448 containerd[1550]: time="2025-07-11T00:24:22.926305769Z" level=info msg="CreateContainer within sandbox \"18903aacef758102aec042538f2bbded81ca6ddb183594390410ad07a9b46ed8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ce7257723d21fe14ff85212cd485edce2669b963509a75ddf6bbc13145e47c3b\"" Jul 11 00:24:22.927066 containerd[1550]: time="2025-07-11T00:24:22.927033545Z" level=info msg="StartContainer for \"ce7257723d21fe14ff85212cd485edce2669b963509a75ddf6bbc13145e47c3b\"" Jul 11 00:24:22.928625 containerd[1550]: time="2025-07-11T00:24:22.928539771Z" level=info msg="connecting to shim ce7257723d21fe14ff85212cd485edce2669b963509a75ddf6bbc13145e47c3b" address="unix:///run/containerd/s/a5dfc9f68b51bad44ea3138b304731a84c918cd5485835c4e07a832d4cf05752" protocol=ttrpc version=3 Jul 11 00:24:22.971676 systemd[1]: Started cri-containerd-ce7257723d21fe14ff85212cd485edce2669b963509a75ddf6bbc13145e47c3b.scope - libcontainer container ce7257723d21fe14ff85212cd485edce2669b963509a75ddf6bbc13145e47c3b. 
Jul 11 00:24:23.046386 containerd[1550]: time="2025-07-11T00:24:23.046298959Z" level=info msg="StartContainer for \"ce7257723d21fe14ff85212cd485edce2669b963509a75ddf6bbc13145e47c3b\" returns successfully" Jul 11 00:24:23.178144 containerd[1550]: time="2025-07-11T00:24:23.178096418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b42b478df43d4201efc91608fad31ccd4515f755c2ba254744f0e48d55e2b6ec\" id:\"ad96a2a89dfb4e396a7373fba9a9b54d1abc92d368bb015d878a0d2310c2a658\" pid:5615 exit_status:1 exited_at:{seconds:1752193463 nanos:177635328}" Jul 11 00:24:23.336388 containerd[1550]: time="2025-07-11T00:24:23.336298049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce7257723d21fe14ff85212cd485edce2669b963509a75ddf6bbc13145e47c3b\" id:\"9ac6d6c29c20da0d3c070f8b57281e9df8fdab828373762a6d4ce4e4622e8249\" pid:5642 exited_at:{seconds:1752193463 nanos:335976420}" Jul 11 00:24:23.437409 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:53406.service - OpenSSH per-connection server daemon (10.0.0.1:53406). Jul 11 00:24:23.503243 sshd[5653]: Accepted publickey for core from 10.0.0.1 port 53406 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:23.505563 sshd-session[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:23.510788 systemd-logind[1525]: New session 16 of user core. Jul 11 00:24:23.519550 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 11 00:24:23.598971 kubelet[2707]: I0711 00:24:23.598628 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-687ddfff9-hskm4" podStartSLOduration=45.637509946 podStartE2EDuration="1m0.598610486s" podCreationTimestamp="2025-07-11 00:23:23 +0000 UTC" firstStartedPulling="2025-07-11 00:24:07.913678962 +0000 UTC m=+68.078874654" lastFinishedPulling="2025-07-11 00:24:22.874779502 +0000 UTC m=+83.039975194" observedRunningTime="2025-07-11 00:24:23.597633956 +0000 UTC m=+83.762829649" watchObservedRunningTime="2025-07-11 00:24:23.598610486 +0000 UTC m=+83.763806178" Jul 11 00:24:23.925175 kubelet[2707]: E0711 00:24:23.925000 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:24.091326 sshd[5656]: Connection closed by 10.0.0.1 port 53406 Jul 11 00:24:24.091711 sshd-session[5653]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:24.096174 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:53406.service: Deactivated successfully. Jul 11 00:24:24.098448 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:24:24.099197 systemd-logind[1525]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:24:24.101255 systemd-logind[1525]: Removed session 16. 
Jul 11 00:24:27.488359 containerd[1550]: time="2025-07-11T00:24:27.488254450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:27.489606 containerd[1550]: time="2025-07-11T00:24:27.489538650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 11 00:24:27.491738 containerd[1550]: time="2025-07-11T00:24:27.491676431Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:27.494597 containerd[1550]: time="2025-07-11T00:24:27.494540728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:27.495280 containerd[1550]: time="2025-07-11T00:24:27.495247487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 4.618792302s" Jul 11 00:24:27.495280 containerd[1550]: time="2025-07-11T00:24:27.495287344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 11 00:24:27.523417 containerd[1550]: time="2025-07-11T00:24:27.523297045Z" level=info msg="CreateContainer within sandbox \"9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 00:24:27.647576 containerd[1550]: time="2025-07-11T00:24:27.647522543Z" level=info msg="Container 186c9df7cbd56a2180f60632c397cf9a789473444e11967d08b22ff26707bd3f: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:24:27.663007 containerd[1550]: time="2025-07-11T00:24:27.662913209Z" level=info msg="CreateContainer within sandbox \"9f8ef9e6856933ac2a03bbfd63e7e5b1d8962158d2468a99ecd6d472ee0b2a4a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"186c9df7cbd56a2180f60632c397cf9a789473444e11967d08b22ff26707bd3f\"" Jul 11 00:24:27.663847 containerd[1550]: time="2025-07-11T00:24:27.663789162Z" level=info msg="StartContainer for \"186c9df7cbd56a2180f60632c397cf9a789473444e11967d08b22ff26707bd3f\"" Jul 11 00:24:27.670491 containerd[1550]: time="2025-07-11T00:24:27.670432858Z" level=info msg="connecting to shim 186c9df7cbd56a2180f60632c397cf9a789473444e11967d08b22ff26707bd3f" address="unix:///run/containerd/s/0431225d5d6751130fc8de5dd7d371356b063af2355417a728246f17cf71582b" protocol=ttrpc version=3 Jul 11 00:24:27.713620 systemd[1]: Started cri-containerd-186c9df7cbd56a2180f60632c397cf9a789473444e11967d08b22ff26707bd3f.scope - libcontainer container 186c9df7cbd56a2180f60632c397cf9a789473444e11967d08b22ff26707bd3f. 
Jul 11 00:24:27.777743 containerd[1550]: time="2025-07-11T00:24:27.777556259Z" level=info msg="StartContainer for \"186c9df7cbd56a2180f60632c397cf9a789473444e11967d08b22ff26707bd3f\" returns successfully" Jul 11 00:24:28.051502 kubelet[2707]: I0711 00:24:28.051381 2707 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 00:24:28.053142 kubelet[2707]: I0711 00:24:28.053110 2707 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 00:24:29.111140 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:53422.service - OpenSSH per-connection server daemon (10.0.0.1:53422). Jul 11 00:24:29.206577 sshd[5708]: Accepted publickey for core from 10.0.0.1 port 53422 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:29.209017 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:29.214412 systemd-logind[1525]: New session 17 of user core. Jul 11 00:24:29.223603 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:24:29.583299 sshd[5710]: Connection closed by 10.0.0.1 port 53422 Jul 11 00:24:29.583801 sshd-session[5708]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:29.589921 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:53422.service: Deactivated successfully. Jul 11 00:24:29.592425 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:24:29.595440 systemd-logind[1525]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:24:29.597843 systemd-logind[1525]: Removed session 17. 
Jul 11 00:24:30.924862 kubelet[2707]: E0711 00:24:30.924804 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:33.935515 kubelet[2707]: I0711 00:24:33.935461 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:24:33.967149 kubelet[2707]: I0711 00:24:33.967066 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cwbxt" podStartSLOduration=51.382285602 podStartE2EDuration="1m10.967045912s" podCreationTimestamp="2025-07-11 00:23:23 +0000 UTC" firstStartedPulling="2025-07-11 00:24:07.91198658 +0000 UTC m=+68.077182272" lastFinishedPulling="2025-07-11 00:24:27.49674689 +0000 UTC m=+87.661942582" observedRunningTime="2025-07-11 00:24:28.318612558 +0000 UTC m=+88.483808260" watchObservedRunningTime="2025-07-11 00:24:33.967045912 +0000 UTC m=+94.132241594" Jul 11 00:24:34.599582 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:58242.service - OpenSSH per-connection server daemon (10.0.0.1:58242). Jul 11 00:24:34.656057 sshd[5735]: Accepted publickey for core from 10.0.0.1 port 58242 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:34.657879 sshd-session[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:34.662731 systemd-logind[1525]: New session 18 of user core. Jul 11 00:24:34.673513 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 11 00:24:34.799071 sshd[5737]: Connection closed by 10.0.0.1 port 58242 Jul 11 00:24:34.799430 sshd-session[5735]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:34.803591 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:58242.service: Deactivated successfully. Jul 11 00:24:34.805827 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:24:34.806614 systemd-logind[1525]: Session 18 logged out. 
Waiting for processes to exit. Jul 11 00:24:34.808118 systemd-logind[1525]: Removed session 18. Jul 11 00:24:39.813304 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:45006.service - OpenSSH per-connection server daemon (10.0.0.1:45006). Jul 11 00:24:39.875988 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 45006 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:39.877899 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:39.883633 systemd-logind[1525]: New session 19 of user core. Jul 11 00:24:39.893520 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 11 00:24:40.088528 sshd[5753]: Connection closed by 10.0.0.1 port 45006 Jul 11 00:24:40.090733 sshd-session[5751]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:40.101932 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:45006.service: Deactivated successfully. Jul 11 00:24:40.104933 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:24:40.106009 systemd-logind[1525]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:24:40.111705 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:45010.service - OpenSSH per-connection server daemon (10.0.0.1:45010). Jul 11 00:24:40.112735 systemd-logind[1525]: Removed session 19. Jul 11 00:24:40.180987 sshd[5767]: Accepted publickey for core from 10.0.0.1 port 45010 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:40.182964 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:40.188421 systemd-logind[1525]: New session 20 of user core. Jul 11 00:24:40.197611 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 11 00:24:41.080675 sshd[5771]: Connection closed by 10.0.0.1 port 45010 Jul 11 00:24:41.081090 sshd-session[5767]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:41.092854 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:45010.service: Deactivated successfully. Jul 11 00:24:41.095224 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:24:41.096193 systemd-logind[1525]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:24:41.100249 systemd[1]: Started sshd@20-10.0.0.71:22-10.0.0.1:45026.service - OpenSSH per-connection server daemon (10.0.0.1:45026). Jul 11 00:24:41.101682 systemd-logind[1525]: Removed session 20. Jul 11 00:24:41.172538 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 45026 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:41.174716 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:41.180911 systemd-logind[1525]: New session 21 of user core. Jul 11 00:24:41.188516 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 11 00:24:41.936267 kubelet[2707]: E0711 00:24:41.936220 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:42.105656 sshd[5785]: Connection closed by 10.0.0.1 port 45026 Jul 11 00:24:42.107109 sshd-session[5783]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:42.119404 systemd[1]: sshd@20-10.0.0.71:22-10.0.0.1:45026.service: Deactivated successfully. Jul 11 00:24:42.123649 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 00:24:42.127722 systemd-logind[1525]: Session 21 logged out. Waiting for processes to exit. Jul 11 00:24:42.130265 systemd[1]: Started sshd@21-10.0.0.71:22-10.0.0.1:45028.service - OpenSSH per-connection server daemon (10.0.0.1:45028). Jul 11 00:24:42.132994 systemd-logind[1525]: Removed session 21. 
Jul 11 00:24:42.187852 sshd[5808]: Accepted publickey for core from 10.0.0.1 port 45028 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:42.190029 sshd-session[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:42.194974 systemd-logind[1525]: New session 22 of user core. Jul 11 00:24:42.199590 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 11 00:24:42.748298 sshd[5810]: Connection closed by 10.0.0.1 port 45028 Jul 11 00:24:42.749460 sshd-session[5808]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:42.759566 systemd[1]: sshd@21-10.0.0.71:22-10.0.0.1:45028.service: Deactivated successfully. Jul 11 00:24:42.762320 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:24:42.763560 systemd-logind[1525]: Session 22 logged out. Waiting for processes to exit. Jul 11 00:24:42.767872 systemd[1]: Started sshd@22-10.0.0.71:22-10.0.0.1:45044.service - OpenSSH per-connection server daemon (10.0.0.1:45044). Jul 11 00:24:42.770398 systemd-logind[1525]: Removed session 22. Jul 11 00:24:42.825517 sshd[5821]: Accepted publickey for core from 10.0.0.1 port 45044 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:42.828118 sshd-session[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:42.834399 systemd-logind[1525]: New session 23 of user core. Jul 11 00:24:42.848667 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 11 00:24:42.980542 sshd[5823]: Connection closed by 10.0.0.1 port 45044 Jul 11 00:24:42.980938 sshd-session[5821]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:42.985764 systemd[1]: sshd@22-10.0.0.71:22-10.0.0.1:45044.service: Deactivated successfully. Jul 11 00:24:42.988251 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:24:42.989538 systemd-logind[1525]: Session 23 logged out. Waiting for processes to exit. 
Jul 11 00:24:42.991617 systemd-logind[1525]: Removed session 23. Jul 11 00:24:45.367908 containerd[1550]: time="2025-07-11T00:24:45.367829928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2dbd67f64e3f356d32b6bb947d6734c53c5a0dde310b6b5b8221cc994138a6cb\" id:\"1ae66f3e13d40ba12c22813da428417c46d0fcd3bd11b2e4557247c9716fbb65\" pid:5847 exited_at:{seconds:1752193485 nanos:354498111}" Jul 11 00:24:48.006252 systemd[1]: Started sshd@23-10.0.0.71:22-10.0.0.1:45046.service - OpenSSH per-connection server daemon (10.0.0.1:45046). Jul 11 00:24:48.065980 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 45046 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:48.068002 sshd-session[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:48.073848 systemd-logind[1525]: New session 24 of user core. Jul 11 00:24:48.081480 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 11 00:24:48.201144 sshd[5865]: Connection closed by 10.0.0.1 port 45046 Jul 11 00:24:48.201524 sshd-session[5863]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:48.206450 systemd[1]: sshd@23-10.0.0.71:22-10.0.0.1:45046.service: Deactivated successfully. Jul 11 00:24:48.208985 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:24:48.209973 systemd-logind[1525]: Session 24 logged out. Waiting for processes to exit. Jul 11 00:24:48.211859 systemd-logind[1525]: Removed session 24. 
Jul 11 00:24:53.061917 containerd[1550]: time="2025-07-11T00:24:53.061859068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce7257723d21fe14ff85212cd485edce2669b963509a75ddf6bbc13145e47c3b\" id:\"9a44d237b7fb5cce36d6443ed86936081f51fd653a8a5e345fbf471ba8ada237\" pid:5888 exited_at:{seconds:1752193493 nanos:61594593}" Jul 11 00:24:53.194957 containerd[1550]: time="2025-07-11T00:24:53.194885779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b42b478df43d4201efc91608fad31ccd4515f755c2ba254744f0e48d55e2b6ec\" id:\"97f32b219d7c689d5ec9e399a39bf0acded979b38436fc2ca7b9f17d99c39f26\" pid:5911 exited_at:{seconds:1752193493 nanos:194525772}" Jul 11 00:24:53.222187 systemd[1]: Started sshd@24-10.0.0.71:22-10.0.0.1:54572.service - OpenSSH per-connection server daemon (10.0.0.1:54572). Jul 11 00:24:53.288949 sshd[5925]: Accepted publickey for core from 10.0.0.1 port 54572 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:53.291274 sshd-session[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:53.297882 systemd-logind[1525]: New session 25 of user core. Jul 11 00:24:53.303697 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 11 00:24:53.339718 containerd[1550]: time="2025-07-11T00:24:53.339574751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce7257723d21fe14ff85212cd485edce2669b963509a75ddf6bbc13145e47c3b\" id:\"77f18285a8d9dea86dfef36dded271a1e2cac7cf48e1539e26061a07cdecc469\" pid:5942 exited_at:{seconds:1752193493 nanos:339129902}" Jul 11 00:24:53.554886 sshd[5940]: Connection closed by 10.0.0.1 port 54572 Jul 11 00:24:53.555314 sshd-session[5925]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:53.560308 systemd[1]: sshd@24-10.0.0.71:22-10.0.0.1:54572.service: Deactivated successfully. Jul 11 00:24:53.563387 systemd[1]: session-25.scope: Deactivated successfully. 
Jul 11 00:24:53.564408 systemd-logind[1525]: Session 25 logged out. Waiting for processes to exit. Jul 11 00:24:53.566651 systemd-logind[1525]: Removed session 25. Jul 11 00:24:57.956184 containerd[1550]: time="2025-07-11T00:24:57.956124447Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2dbd67f64e3f356d32b6bb947d6734c53c5a0dde310b6b5b8221cc994138a6cb\" id:\"59867c7b8cfd3752e3120aae09c96f763db3a4594375e70dadc735a203b5cade\" pid:5975 exited_at:{seconds:1752193497 nanos:955588766}" Jul 11 00:24:58.569496 systemd[1]: Started sshd@25-10.0.0.71:22-10.0.0.1:54578.service - OpenSSH per-connection server daemon (10.0.0.1:54578). Jul 11 00:24:58.641474 sshd[5988]: Accepted publickey for core from 10.0.0.1 port 54578 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:24:58.643731 sshd-session[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:58.649616 systemd-logind[1525]: New session 26 of user core. Jul 11 00:24:58.661709 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 11 00:24:58.875217 sshd[5990]: Connection closed by 10.0.0.1 port 54578 Jul 11 00:24:58.876101 sshd-session[5988]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:58.882765 systemd[1]: sshd@25-10.0.0.71:22-10.0.0.1:54578.service: Deactivated successfully. Jul 11 00:24:58.887224 systemd[1]: session-26.scope: Deactivated successfully. Jul 11 00:24:58.889284 systemd-logind[1525]: Session 26 logged out. Waiting for processes to exit. Jul 11 00:24:58.892825 systemd-logind[1525]: Removed session 26. Jul 11 00:25:01.925653 kubelet[2707]: E0711 00:25:01.925603 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:03.891781 systemd[1]: Started sshd@26-10.0.0.71:22-10.0.0.1:47228.service - OpenSSH per-connection server daemon (10.0.0.1:47228). 
Jul 11 00:25:03.975287 sshd[6007]: Accepted publickey for core from 10.0.0.1 port 47228 ssh2: RSA SHA256:9BEQUnvf4tMrcd/+eQHNBnXq9udNjDMMLU+6/KLi7hY Jul 11 00:25:03.977550 sshd-session[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:03.983917 systemd-logind[1525]: New session 27 of user core. Jul 11 00:25:03.990602 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 11 00:25:04.327230 sshd[6009]: Connection closed by 10.0.0.1 port 47228 Jul 11 00:25:04.328113 sshd-session[6007]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:04.334419 systemd[1]: sshd@26-10.0.0.71:22-10.0.0.1:47228.service: Deactivated successfully. Jul 11 00:25:04.337649 systemd[1]: session-27.scope: Deactivated successfully. Jul 11 00:25:04.338966 systemd-logind[1525]: Session 27 logged out. Waiting for processes to exit. Jul 11 00:25:04.341200 systemd-logind[1525]: Removed session 27.