Jul 7 00:06:11.945399 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:58:13 -00 2025
Jul 7 00:06:11.945446 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:06:11.945457 kernel: BIOS-provided physical RAM map:
Jul 7 00:06:11.945465 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 7 00:06:11.945473 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 7 00:06:11.945481 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 00:06:11.945490 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 7 00:06:11.945504 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 7 00:06:11.945515 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 7 00:06:11.945523 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 7 00:06:11.945531 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 00:06:11.945539 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 00:06:11.945547 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 00:06:11.945555 kernel: NX (Execute Disable) protection: active
Jul 7 00:06:11.945568 kernel: APIC: Static calls initialized
Jul 7 00:06:11.945576 kernel: SMBIOS 2.8 present.
Jul 7 00:06:11.945594 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 7 00:06:11.945619 kernel: DMI: Memory slots populated: 1/1
Jul 7 00:06:11.945640 kernel: Hypervisor detected: KVM
Jul 7 00:06:11.945654 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 00:06:11.945664 kernel: kvm-clock: using sched offset of 4892348154 cycles
Jul 7 00:06:11.945674 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 00:06:11.945683 kernel: tsc: Detected 2794.746 MHz processor
Jul 7 00:06:11.945695 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 00:06:11.945705 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 00:06:11.945717 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 7 00:06:11.945731 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 00:06:11.945762 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 00:06:11.945772 kernel: Using GB pages for direct mapping
Jul 7 00:06:11.945781 kernel: ACPI: Early table checksum verification disabled
Jul 7 00:06:11.945795 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 7 00:06:11.945804 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:06:11.945817 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:06:11.945831 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:06:11.945840 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 7 00:06:11.945849 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:06:11.945858 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:06:11.945867 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:06:11.945875 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:06:11.945885 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 7 00:06:11.945903 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 7 00:06:11.945912 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 7 00:06:11.945922 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 7 00:06:11.945931 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 7 00:06:11.945940 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 7 00:06:11.945952 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 7 00:06:11.945963 kernel: No NUMA configuration found
Jul 7 00:06:11.945973 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 7 00:06:11.945982 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 7 00:06:11.945991 kernel: Zone ranges:
Jul 7 00:06:11.946001 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 00:06:11.946010 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 7 00:06:11.946019 kernel: Normal empty
Jul 7 00:06:11.946028 kernel: Device empty
Jul 7 00:06:11.946037 kernel: Movable zone start for each node
Jul 7 00:06:11.946060 kernel: Early memory node ranges
Jul 7 00:06:11.946090 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 00:06:11.946111 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 7 00:06:11.946128 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 7 00:06:11.946143 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 00:06:11.946165 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 00:06:11.946180 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 7 00:06:11.946190 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 00:06:11.946202 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 00:06:11.946212 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 00:06:11.946224 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 00:06:11.946234 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 00:06:11.946254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 00:06:11.946264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 00:06:11.946273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 00:06:11.946288 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 00:06:11.946297 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 00:06:11.946306 kernel: TSC deadline timer available
Jul 7 00:06:11.946316 kernel: CPU topo: Max. logical packages: 1
Jul 7 00:06:11.946329 kernel: CPU topo: Max. logical dies: 1
Jul 7 00:06:11.946338 kernel: CPU topo: Max. dies per package: 1
Jul 7 00:06:11.946347 kernel: CPU topo: Max. threads per core: 1
Jul 7 00:06:11.946356 kernel: CPU topo: Num. cores per package: 4
Jul 7 00:06:11.946365 kernel: CPU topo: Num. threads per package: 4
Jul 7 00:06:11.946375 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 7 00:06:11.946384 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 00:06:11.946393 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 7 00:06:11.946402 kernel: kvm-guest: setup PV sched yield
Jul 7 00:06:11.946421 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 7 00:06:11.946430 kernel: Booting paravirtualized kernel on KVM
Jul 7 00:06:11.946440 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 00:06:11.946449 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 7 00:06:11.946458 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 7 00:06:11.946468 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 7 00:06:11.946477 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 7 00:06:11.946485 kernel: kvm-guest: PV spinlocks enabled
Jul 7 00:06:11.946499 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 00:06:11.946533 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:06:11.946547 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 00:06:11.946567 kernel: random: crng init done
Jul 7 00:06:11.946577 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 00:06:11.946587 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 00:06:11.946596 kernel: Fallback order for Node 0: 0
Jul 7 00:06:11.946725 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 7 00:06:11.946749 kernel: Policy zone: DMA32
Jul 7 00:06:11.946765 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 00:06:11.946775 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 00:06:11.946784 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 00:06:11.946798 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 00:06:11.946811 kernel: Dynamic Preempt: voluntary
Jul 7 00:06:11.946823 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 00:06:11.946833 kernel: rcu: RCU event tracing is enabled.
Jul 7 00:06:11.946843 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 00:06:11.946861 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 00:06:11.946875 kernel: Rude variant of Tasks RCU enabled.
Jul 7 00:06:11.946896 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 00:06:11.946917 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 00:06:11.950421 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 00:06:11.950691 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 00:06:11.950701 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 00:06:11.950709 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 00:06:11.950717 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 7 00:06:11.950728 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 00:06:11.950767 kernel: Console: colour VGA+ 80x25
Jul 7 00:06:11.950775 kernel: printk: legacy console [ttyS0] enabled
Jul 7 00:06:11.950784 kernel: ACPI: Core revision 20240827
Jul 7 00:06:11.950796 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 00:06:11.950805 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 00:06:11.950812 kernel: x2apic enabled
Jul 7 00:06:11.950823 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 00:06:11.950831 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 7 00:06:11.950839 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 7 00:06:11.950856 kernel: kvm-guest: setup PV IPIs
Jul 7 00:06:11.950863 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 00:06:11.950872 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 7 00:06:11.950880 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 7 00:06:11.950888 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 00:06:11.950895 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 7 00:06:11.950903 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 7 00:06:11.950911 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 00:06:11.950922 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 00:06:11.950930 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 00:06:11.950938 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 7 00:06:11.950946 kernel: RETBleed: Mitigation: untrained return thunk
Jul 7 00:06:11.950954 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 00:06:11.950962 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 00:06:11.950970 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 7 00:06:11.950979 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 7 00:06:11.950992 kernel: x86/bugs: return thunk changed
Jul 7 00:06:11.951000 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 7 00:06:11.951008 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 00:06:11.951016 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 00:06:11.951024 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 00:06:11.951032 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 00:06:11.951040 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 7 00:06:11.951047 kernel: Freeing SMP alternatives memory: 32K
Jul 7 00:06:11.951055 kernel: pid_max: default: 32768 minimum: 301
Jul 7 00:06:11.951065 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 00:06:11.951073 kernel: landlock: Up and running.
Jul 7 00:06:11.951080 kernel: SELinux: Initializing.
Jul 7 00:06:11.951091 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:06:11.951101 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:06:11.951109 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 7 00:06:11.951117 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 7 00:06:11.951125 kernel: ... version:                0
Jul 7 00:06:11.951133 kernel: ... bit width:              48
Jul 7 00:06:11.951143 kernel: ... generic registers:      6
Jul 7 00:06:11.951150 kernel: ... value mask:             0000ffffffffffff
Jul 7 00:06:11.951158 kernel: ... max period:             00007fffffffffff
Jul 7 00:06:11.951166 kernel: ... fixed-purpose events:   0
Jul 7 00:06:11.951174 kernel: ... event mask:             000000000000003f
Jul 7 00:06:11.951181 kernel: signal: max sigframe size: 1776
Jul 7 00:06:11.951189 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 00:06:11.951197 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 00:06:11.951205 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 00:06:11.951215 kernel: smp: Bringing up secondary CPUs ...
Jul 7 00:06:11.951223 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 00:06:11.951231 kernel: .... node #0, CPUs: #1 #2 #3
Jul 7 00:06:11.951238 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 00:06:11.951246 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 7 00:06:11.951255 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 136904K reserved, 0K cma-reserved)
Jul 7 00:06:11.951265 kernel: devtmpfs: initialized
Jul 7 00:06:11.951276 kernel: x86/mm: Memory block size: 128MB
Jul 7 00:06:11.951287 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 00:06:11.951305 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 00:06:11.951316 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 00:06:11.951333 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 00:06:11.951350 kernel: audit: initializing netlink subsys (disabled)
Jul 7 00:06:11.951358 kernel: audit: type=2000 audit(1751846769.992:1): state=initialized audit_enabled=0 res=1
Jul 7 00:06:11.951366 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 00:06:11.951374 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 00:06:11.951382 kernel: cpuidle: using governor menu
Jul 7 00:06:11.951389 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 00:06:11.951401 kernel: dca service started, version 1.12.1
Jul 7 00:06:11.951418 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 7 00:06:11.951426 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 7 00:06:11.951434 kernel: PCI: Using configuration type 1 for base access
Jul 7 00:06:11.951442 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 00:06:11.951451 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 00:06:11.951458 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 00:06:11.951466 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 00:06:11.951474 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 00:06:11.951484 kernel: ACPI: Added _OSI(Module Device)
Jul 7 00:06:11.951492 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 00:06:11.951499 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 00:06:11.951507 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 00:06:11.951515 kernel: ACPI: Interpreter enabled
Jul 7 00:06:11.951527 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 7 00:06:11.951535 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 00:06:11.951543 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 00:06:11.951551 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 00:06:11.951560 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 00:06:11.951568 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 00:06:11.951879 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 00:06:11.952150 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 7 00:06:11.952310 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 7 00:06:11.952322 kernel: PCI host bridge to bus 0000:00
Jul 7 00:06:11.952475 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 00:06:11.952626 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 00:06:11.957805 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 00:06:11.957967 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 7 00:06:11.958083 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 00:06:11.959218 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 7 00:06:11.959376 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 00:06:11.959579 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 7 00:06:11.959915 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 7 00:06:11.960544 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 7 00:06:11.960757 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 7 00:06:11.960927 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 7 00:06:11.961912 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 00:06:11.962449 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 7 00:06:11.962764 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 7 00:06:11.963013 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 7 00:06:11.964094 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 7 00:06:11.964288 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 00:06:11.964466 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 7 00:06:11.964603 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 7 00:06:11.964805 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 7 00:06:11.964985 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 00:06:11.965153 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 7 00:06:11.965351 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 7 00:06:11.965626 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 7 00:06:11.972092 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 7 00:06:11.972954 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 7 00:06:11.973152 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 00:06:11.977689 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 7 00:06:11.980858 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 7 00:06:11.981468 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 7 00:06:11.984018 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 7 00:06:11.984351 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 7 00:06:11.984377 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 00:06:11.984391 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 00:06:11.984405 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 00:06:11.984430 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 00:06:11.984453 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 00:06:11.984469 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 00:06:11.984477 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 00:06:11.984492 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 00:06:11.984507 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 00:06:11.984516 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 00:06:11.984524 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 00:06:11.984550 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 00:06:11.984570 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 00:06:11.984590 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 00:06:11.984608 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 00:06:11.984626 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 00:06:11.984641 kernel: iommu: Default domain type: Translated
Jul 7 00:06:11.984660 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 00:06:11.984675 kernel: PCI: Using ACPI for IRQ routing
Jul 7 00:06:11.984702 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 00:06:11.984734 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 7 00:06:11.984775 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 7 00:06:11.986231 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 00:06:11.986525 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 00:06:11.988414 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 00:06:11.988455 kernel: vgaarb: loaded
Jul 7 00:06:11.988473 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 00:06:11.988494 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 00:06:11.988523 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 00:06:11.988546 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 00:06:11.988567 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 00:06:11.988585 kernel: pnp: PnP ACPI init
Jul 7 00:06:11.989923 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 7 00:06:11.989950 kernel: pnp: PnP ACPI: found 6 devices
Jul 7 00:06:11.989968 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 00:06:11.989984 kernel: NET: Registered PF_INET protocol family
Jul 7 00:06:11.990009 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 00:06:11.990030 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 00:06:11.990051 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 00:06:11.990069 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 00:06:11.990090 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 00:06:11.990111 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 00:06:11.990129 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:06:11.990152 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:06:11.990167 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 00:06:11.990189 kernel: NET: Registered PF_XDP protocol family
Jul 7 00:06:11.990997 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 00:06:11.991136 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 00:06:11.991289 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 00:06:11.991569 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 7 00:06:11.992673 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 7 00:06:11.993780 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 7 00:06:11.993812 kernel: PCI: CLS 0 bytes, default 64
Jul 7 00:06:11.993843 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 7 00:06:11.993864 kernel: Initialise system trusted keyrings
Jul 7 00:06:11.993885 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 00:06:11.993906 kernel: Key type asymmetric registered
Jul 7 00:06:11.993928 kernel: Asymmetric key parser 'x509' registered
Jul 7 00:06:11.993949 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 00:06:11.993959 kernel: io scheduler mq-deadline registered
Jul 7 00:06:11.993967 kernel: io scheduler kyber registered
Jul 7 00:06:11.993975 kernel: io scheduler bfq registered
Jul 7 00:06:11.993998 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 00:06:11.994017 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 00:06:11.994033 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 00:06:11.994056 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 7 00:06:11.994076 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 00:06:11.994092 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 00:06:11.994113 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 00:06:11.994134 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 00:06:11.994149 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 00:06:11.994326 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 00:06:11.994653 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 00:06:11.994983 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 00:06:11.995308 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T00:06:11 UTC (1751846771)
Jul 7 00:06:11.995532 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 7 00:06:11.995544 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 00:06:11.995565 kernel: NET: Registered PF_INET6 protocol family
Jul 7 00:06:11.995583 kernel: Segment Routing with IPv6
Jul 7 00:06:11.995608 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 00:06:11.995627 kernel: NET: Registered PF_PACKET protocol family
Jul 7 00:06:11.995636 kernel: Key type dns_resolver registered
Jul 7 00:06:11.995647 kernel: IPI shorthand broadcast: enabled
Jul 7 00:06:11.995660 kernel: sched_clock: Marking stable (2958003998, 122942651)->(3100530517, -19583868)
Jul 7 00:06:11.995669 kernel: registered taskstats version 1
Jul 7 00:06:11.995677 kernel: Loading compiled-in X.509 certificates
Jul 7 00:06:11.995685 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 025c05e23c9778f7a70ff09fb369dd949499fb06'
Jul 7 00:06:11.995693 kernel: Demotion targets for Node 0: null
Jul 7 00:06:11.995718 kernel: Key type .fscrypt registered
Jul 7 00:06:11.995806 kernel: Key type fscrypt-provisioning registered
Jul 7 00:06:11.995827 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 00:06:11.995843 kernel: ima: Allocated hash algorithm: sha1
Jul 7 00:06:11.995861 kernel: ima: No architecture policies found
Jul 7 00:06:11.995877 kernel: clk: Disabling unused clocks
Jul 7 00:06:11.995895 kernel: Warning: unable to open an initial console.
Jul 7 00:06:11.995914 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 00:06:11.995930 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 00:06:11.998376 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 00:06:11.998393 kernel: Run /init as init process
Jul 7 00:06:11.998401 kernel:   with arguments:
Jul 7 00:06:11.998417 kernel:     /init
Jul 7 00:06:11.998434 kernel:   with environment:
Jul 7 00:06:11.998452 kernel:     HOME=/
Jul 7 00:06:11.998467 kernel:     TERM=linux
Jul 7 00:06:11.998483 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 00:06:11.998492 systemd[1]: Successfully made /usr/ read-only.
Jul 7 00:06:11.998512 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 00:06:11.998543 systemd[1]: Detected virtualization kvm.
Jul 7 00:06:11.998551 systemd[1]: Detected architecture x86-64.
Jul 7 00:06:11.998560 systemd[1]: Running in initrd.
Jul 7 00:06:11.998570 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:06:11.998585 systemd[1]: Hostname set to .
Jul 7 00:06:11.998598 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:06:11.998609 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:06:11.998619 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:06:11.998631 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:06:11.998646 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:06:11.998655 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:06:11.998664 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:06:11.998680 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:06:11.998690 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:06:11.998699 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:06:11.998708 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:06:11.998717 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:06:11.998725 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:06:11.998734 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:06:11.998759 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:06:11.998767 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:06:11.998776 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:06:11.998785 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:06:11.998793 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:06:11.998802 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 00:06:11.998810 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:06:11.998819 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:06:11.998830 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:06:11.998838 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:06:11.998847 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 00:06:11.998856 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:06:11.998864 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 00:06:11.998874 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 00:06:11.998887 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 00:06:11.998907 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:06:11.998923 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:06:11.998932 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:06:11.998940 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 00:06:11.998953 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:06:11.998962 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 00:06:11.998971 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:06:11.999015 systemd-journald[219]: Collecting audit messages is disabled. Jul 7 00:06:11.999038 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:06:11.999048 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:06:11.999057 systemd-journald[219]: Journal started Jul 7 00:06:11.999076 systemd-journald[219]: Runtime Journal (/run/log/journal/7250120b3dab4205bc47f624438d892c) is 6M, max 48.6M, 42.5M free. 
Jul 7 00:06:11.945060 systemd-modules-load[221]: Inserted module 'overlay' Jul 7 00:06:12.045934 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:06:12.516259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:06:12.523227 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:06:12.527324 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:06:12.530367 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 00:06:12.532921 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 7 00:06:12.534174 kernel: Bridge firewalling registered Jul 7 00:06:12.541026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:06:12.541551 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:06:12.543686 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:06:12.553486 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 00:06:12.558499 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:06:12.559977 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:06:12.562325 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:06:12.567733 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:06:12.586702 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 7 00:06:12.619719 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:06:12.620811 systemd-resolved[259]: Positive Trust Anchors: Jul 7 00:06:12.620822 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:06:12.620853 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:06:12.623375 systemd-resolved[259]: Defaulting to hostname 'linux'. Jul 7 00:06:12.624808 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:06:12.626589 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:06:12.738792 kernel: SCSI subsystem initialized Jul 7 00:06:12.751776 kernel: Loading iSCSI transport class v2.0-870. Jul 7 00:06:12.765795 kernel: iscsi: registered transport (tcp) Jul 7 00:06:12.791877 kernel: iscsi: registered transport (qla4xxx) Jul 7 00:06:12.791968 kernel: QLogic iSCSI HBA Driver Jul 7 00:06:12.815779 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 7 00:06:12.844037 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:06:12.846600 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:06:12.918820 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 00:06:12.922024 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 00:06:12.990783 kernel: raid6: avx2x4 gen() 26730 MB/s Jul 7 00:06:13.007847 kernel: raid6: avx2x2 gen() 25876 MB/s Jul 7 00:06:13.024894 kernel: raid6: avx2x1 gen() 22343 MB/s Jul 7 00:06:13.024968 kernel: raid6: using algorithm avx2x4 gen() 26730 MB/s Jul 7 00:06:13.042922 kernel: raid6: .... xor() 6707 MB/s, rmw enabled Jul 7 00:06:13.043002 kernel: raid6: using avx2x2 recovery algorithm Jul 7 00:06:13.065785 kernel: xor: automatically using best checksumming function avx Jul 7 00:06:13.246799 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 00:06:13.258667 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:06:13.262125 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:06:13.306053 systemd-udevd[474]: Using default interface naming scheme 'v255'. Jul 7 00:06:13.314046 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:06:13.317551 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 00:06:13.353508 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Jul 7 00:06:13.385951 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:06:13.390235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:06:13.473474 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:06:13.477848 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 7 00:06:13.511783 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 7 00:06:13.515071 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 7 00:06:13.522526 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 00:06:13.522554 kernel: GPT:9289727 != 19775487 Jul 7 00:06:13.522565 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 00:06:13.522576 kernel: GPT:9289727 != 19775487 Jul 7 00:06:13.522586 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 00:06:13.522605 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 00:06:13.538772 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 00:06:13.539768 kernel: libata version 3.00 loaded. Jul 7 00:06:13.548763 kernel: AES CTR mode by8 optimization enabled Jul 7 00:06:13.553768 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 7 00:06:13.558767 kernel: ahci 0000:00:1f.2: version 3.0 Jul 7 00:06:13.562054 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 7 00:06:13.562164 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 7 00:06:13.562863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:06:13.566107 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 7 00:06:13.566353 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 7 00:06:13.563223 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:06:13.570995 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:06:13.578361 kernel: scsi host0: ahci Jul 7 00:06:13.578586 kernel: scsi host1: ahci Jul 7 00:06:13.575005 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:06:13.584829 kernel: scsi host2: ahci Jul 7 00:06:13.581728 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jul 7 00:06:13.599823 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 7 00:06:13.644780 kernel: scsi host3: ahci Jul 7 00:06:13.646296 kernel: scsi host4: ahci Jul 7 00:06:13.648771 kernel: scsi host5: ahci Jul 7 00:06:13.658423 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 Jul 7 00:06:13.658470 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 Jul 7 00:06:13.658487 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 Jul 7 00:06:13.660618 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 Jul 7 00:06:13.660684 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 Jul 7 00:06:13.662597 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 Jul 7 00:06:13.686253 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 7 00:06:13.695078 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 7 00:06:13.695158 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 7 00:06:13.706352 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 00:06:13.707720 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 00:06:13.861629 disk-uuid[635]: Primary Header is updated. Jul 7 00:06:13.861629 disk-uuid[635]: Secondary Entries is updated. Jul 7 00:06:13.861629 disk-uuid[635]: Secondary Header is updated. Jul 7 00:06:13.885329 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 00:06:13.888606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 7 00:06:13.969799 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 00:06:13.973440 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 00:06:13.973476 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 00:06:13.973506 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 7 00:06:13.973523 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 7 00:06:13.973536 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 7 00:06:13.974786 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 7 00:06:13.976168 kernel: ata3.00: applying bridge limits Jul 7 00:06:13.976238 kernel: ata3.00: configured for UDMA/100 Jul 7 00:06:13.976775 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 7 00:06:14.043837 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 7 00:06:14.044260 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 7 00:06:14.067784 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 7 00:06:14.539178 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 00:06:14.539905 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:06:14.542577 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:06:14.542957 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:06:14.544429 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 00:06:14.571108 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:06:14.874667 disk-uuid[636]: The operation has completed successfully. Jul 7 00:06:14.876237 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 00:06:14.916473 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 00:06:14.916615 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jul 7 00:06:14.948844 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 00:06:14.979211 sh[666]: Success Jul 7 00:06:15.000664 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 00:06:15.000707 kernel: device-mapper: uevent: version 1.0.3 Jul 7 00:06:15.000729 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 00:06:15.011785 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 7 00:06:15.049381 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 00:06:15.051615 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 00:06:15.069668 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 00:06:15.077569 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 00:06:15.077607 kernel: BTRFS: device fsid 9d729180-1373-4e9f-840c-4db0e9220239 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (678) Jul 7 00:06:15.079804 kernel: BTRFS info (device dm-0): first mount of filesystem 9d729180-1373-4e9f-840c-4db0e9220239 Jul 7 00:06:15.079880 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:06:15.079896 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 00:06:15.085736 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 00:06:15.087121 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:06:15.088674 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 00:06:15.089577 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 00:06:15.091425 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 7 00:06:15.120793 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (713) Jul 7 00:06:15.122971 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:06:15.123000 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:06:15.123016 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 00:06:15.131757 kernel: BTRFS info (device vda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:06:15.132520 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 00:06:15.135925 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 00:06:15.522614 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:06:15.525416 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:06:15.525448 ignition[752]: Ignition 2.21.0 Jul 7 00:06:15.525458 ignition[752]: Stage: fetch-offline Jul 7 00:06:15.527343 ignition[752]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:06:15.527357 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:06:15.528590 ignition[752]: parsed url from cmdline: "" Jul 7 00:06:15.528594 ignition[752]: no config URL provided Jul 7 00:06:15.528600 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 00:06:15.528616 ignition[752]: no config at "/usr/lib/ignition/user.ign" Jul 7 00:06:15.529445 ignition[752]: op(1): [started] loading QEMU firmware config module Jul 7 00:06:15.529456 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 7 00:06:15.540388 ignition[752]: op(1): [finished] loading QEMU firmware config module Jul 7 00:06:15.576463 systemd-networkd[854]: lo: Link UP Jul 7 00:06:15.576474 systemd-networkd[854]: lo: Gained carrier Jul 7 00:06:15.578161 systemd-networkd[854]: Enumeration completed
Jul 7 00:06:15.578301 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:06:15.578656 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:06:15.578661 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:06:15.579394 systemd-networkd[854]: eth0: Link UP Jul 7 00:06:15.579399 systemd-networkd[854]: eth0: Gained carrier Jul 7 00:06:15.579417 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:06:15.581234 systemd[1]: Reached target network.target - Network. Jul 7 00:06:15.591116 ignition[752]: parsing config with SHA512: 55ae28b4c0605c03ce0de70fe52765281a15394d520f956431ae9a639ee4d687136f950b9fd75f52cc1bab88c025fd3606228963556db25e4f5a764b140383a4 Jul 7 00:06:15.594979 unknown[752]: fetched base config from "system" Jul 7 00:06:15.594992 unknown[752]: fetched user config from "qemu" Jul 7 00:06:15.595344 ignition[752]: fetch-offline: fetch-offline passed Jul 7 00:06:15.595804 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 00:06:15.595416 ignition[752]: Ignition finished successfully Jul 7 00:06:15.598349 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:06:15.599561 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 7 00:06:15.600703 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:06:15.652947 ignition[861]: Ignition 2.21.0 Jul 7 00:06:15.652959 ignition[861]: Stage: kargs Jul 7 00:06:15.653086 ignition[861]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:06:15.653096 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:06:15.655483 ignition[861]: kargs: kargs passed Jul 7 00:06:15.655536 ignition[861]: Ignition finished successfully Jul 7 00:06:15.660622 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 00:06:15.662792 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 00:06:15.708821 ignition[869]: Ignition 2.21.0 Jul 7 00:06:15.708835 ignition[869]: Stage: disks Jul 7 00:06:15.708962 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:06:15.708973 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:06:15.712171 ignition[869]: disks: disks passed Jul 7 00:06:15.712352 ignition[869]: Ignition finished successfully Jul 7 00:06:15.714785 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 00:06:15.717829 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 00:06:15.720125 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 00:06:15.720232 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:06:15.724670 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:06:15.724783 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:06:15.726429 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 00:06:15.765224 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 7 00:06:15.873850 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 00:06:15.878344 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 7 00:06:16.023778 kernel: EXT4-fs (vda9): mounted filesystem 98c55dfc-aac4-4fdd-8ec0-1f5587b3aa36 r/w with ordered data mode. Quota mode: none. Jul 7 00:06:16.024665 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 00:06:16.025552 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 00:06:16.028670 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:06:16.030436 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 00:06:16.031666 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 00:06:16.031707 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 00:06:16.031731 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:06:16.042008 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 00:06:16.043513 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 00:06:16.051474 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887) Jul 7 00:06:16.051510 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:06:16.051522 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:06:16.053436 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 00:06:16.058117 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 00:06:16.084207 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 00:06:16.091677 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory Jul 7 00:06:16.096604 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 00:06:16.101813 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 00:06:16.201198 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 00:06:16.204341 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 00:06:16.206403 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 00:06:16.229603 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 00:06:16.230899 kernel: BTRFS info (device vda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:06:16.245688 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 00:06:16.262594 ignition[1002]: INFO : Ignition 2.21.0 Jul 7 00:06:16.262594 ignition[1002]: INFO : Stage: mount Jul 7 00:06:16.264392 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:06:16.264392 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:06:16.269457 ignition[1002]: INFO : mount: mount passed Jul 7 00:06:16.270345 ignition[1002]: INFO : Ignition finished successfully Jul 7 00:06:16.272603 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 00:06:16.275502 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 00:06:16.294761 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 7 00:06:16.333771 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014) Jul 7 00:06:16.336448 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:06:16.336473 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:06:16.336485 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 00:06:16.341626 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:06:16.385316 ignition[1031]: INFO : Ignition 2.21.0 Jul 7 00:06:16.385316 ignition[1031]: INFO : Stage: files Jul 7 00:06:16.387243 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:06:16.387243 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:06:16.389681 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Jul 7 00:06:16.390940 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 00:06:16.390940 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 00:06:16.394308 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 00:06:16.394308 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 00:06:16.394308 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 00:06:16.394102 unknown[1031]: wrote ssh authorized keys file for user: core Jul 7 00:06:16.400087 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 00:06:16.400087 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 7 00:06:16.439765 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 00:06:16.515377 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 00:06:16.515377 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 00:06:16.519402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 00:06:16.519402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:06:16.519402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:06:16.519402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:06:16.519402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:06:16.519402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:06:16.519402 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:06:16.704756 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:06:16.713764 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:06:16.715793 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 00:06:16.753044 systemd-networkd[854]: eth0: Gained IPv6LL
Jul 7 00:06:16.844058 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 00:06:16.846808 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 00:06:16.846808 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 7 00:06:17.431516 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 00:06:18.157804 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 00:06:18.157804 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 00:06:18.161657 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:06:18.360166 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:06:18.360166 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 00:06:18.360166 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 7 00:06:18.360166 ignition[1031]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 00:06:18.368077 ignition[1031]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 00:06:18.368077 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 7 00:06:18.368077 ignition[1031]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 7 00:06:18.401316 ignition[1031]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 00:06:18.405930 ignition[1031]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 00:06:18.408024 ignition[1031]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 7 00:06:18.408024 ignition[1031]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 7 00:06:18.411275 ignition[1031]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 00:06:18.411275 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:06:18.411275 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:06:18.411275 ignition[1031]: INFO : files: files passed Jul 7 00:06:18.411275 ignition[1031]: INFO : Ignition finished successfully Jul 7 00:06:18.413684 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 00:06:18.416918 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 00:06:18.423280 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 00:06:18.440221 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:06:18.440383 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 00:06:18.443810 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory Jul 7 00:06:18.448309 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:06:18.448309 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:06:18.451713 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:06:18.454710 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:06:18.456274 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:06:18.459487 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:06:18.526599 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 00:06:18.526762 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:06:18.529786 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 00:06:18.532096 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:06:18.534179 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:06:18.537845 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:06:18.574567 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:06:18.576331 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:06:18.612582 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:06:18.615074 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:06:18.615252 systemd[1]: Stopped target timers.target - Timer Units. 
Jul 7 00:06:18.618587 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 00:06:18.618759 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:06:18.620107 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 00:06:18.620495 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 00:06:18.621040 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 00:06:18.621390 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:06:18.621760 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 00:06:18.622283 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 00:06:18.622640 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 00:06:18.623173 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:06:18.623548 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 00:06:18.624081 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 00:06:18.624434 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 00:06:18.624775 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 00:06:18.624892 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:06:18.625682 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:06:18.626245 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:06:18.626565 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 00:06:18.626710 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:06:18.652582 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 00:06:18.652705 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:06:18.655674 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 00:06:18.655812 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:06:18.656850 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 00:06:18.657220 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 00:06:18.664844 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:06:18.665010 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 00:06:18.667554 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 00:06:18.670101 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 00:06:18.670201 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:06:18.671041 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 00:06:18.671127 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:06:18.672721 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 00:06:18.672858 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:06:18.673188 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 00:06:18.673307 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 00:06:18.680866 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 00:06:18.682618 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 00:06:18.686187 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 00:06:18.687581 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:06:18.690331 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 00:06:18.691562 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:06:18.698512 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 00:06:18.698673 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 00:06:18.721488 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 00:06:18.726762 ignition[1086]: INFO : Ignition 2.21.0
Jul 7 00:06:18.726762 ignition[1086]: INFO : Stage: umount
Jul 7 00:06:18.729315 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:06:18.729315 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 00:06:18.729315 ignition[1086]: INFO : umount: umount passed
Jul 7 00:06:18.729315 ignition[1086]: INFO : Ignition finished successfully
Jul 7 00:06:18.731866 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 00:06:18.732021 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 00:06:18.733416 systemd[1]: Stopped target network.target - Network.
Jul 7 00:06:18.736063 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 00:06:18.736121 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 00:06:18.738318 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 00:06:18.738375 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 00:06:18.739363 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 00:06:18.739422 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 00:06:18.742471 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 00:06:18.742517 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 00:06:18.743697 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 00:06:18.744048 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 00:06:18.758040 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 00:06:18.758196 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 00:06:18.763764 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 00:06:18.764125 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 00:06:18.764175 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:06:18.767935 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:06:18.768178 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 00:06:18.768310 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 00:06:18.772549 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 00:06:18.773059 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 00:06:18.775145 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 00:06:18.775192 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:06:18.779653 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 00:06:18.781507 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 00:06:18.781561 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:06:18.782107 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 00:06:18.782152 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:06:18.786574 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 00:06:18.786622 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:06:18.787674 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:06:18.788969 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 00:06:18.814118 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 00:06:18.814261 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 00:06:18.816446 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 00:06:18.816617 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:06:18.817692 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 00:06:18.817780 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:06:18.820163 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 00:06:18.820206 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:06:18.821413 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 00:06:18.821465 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:06:18.822195 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 00:06:18.822255 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:06:18.823034 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 00:06:18.823091 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:06:18.824565 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 00:06:18.832276 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 00:06:18.832334 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:06:18.835292 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 00:06:18.835346 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:06:18.838901 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 7 00:06:18.838949 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:06:18.842367 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 00:06:18.842419 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:06:18.843390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:06:18.843436 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:06:18.856707 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 00:06:18.856845 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 00:06:19.290355 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 00:06:19.290513 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 00:06:19.292855 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 00:06:19.294689 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 00:06:19.294758 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 00:06:19.295917 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 00:06:19.326115 systemd[1]: Switching root.
Jul 7 00:06:19.365480 systemd-journald[219]: Journal stopped
Jul 7 00:06:21.095890 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Jul 7 00:06:21.095956 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 00:06:21.095979 kernel: SELinux: policy capability open_perms=1
Jul 7 00:06:21.096001 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 00:06:21.096019 kernel: SELinux: policy capability always_check_network=0
Jul 7 00:06:21.096030 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 00:06:21.096045 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 00:06:21.096060 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 00:06:21.096074 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 00:06:21.096087 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 00:06:21.096098 kernel: audit: type=1403 audit(1751846780.200:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 00:06:21.096116 systemd[1]: Successfully loaded SELinux policy in 52.350ms.
Jul 7 00:06:21.096140 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16ms.
Jul 7 00:06:21.096166 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 00:06:21.096179 systemd[1]: Detected virtualization kvm.
Jul 7 00:06:21.096191 systemd[1]: Detected architecture x86-64.
Jul 7 00:06:21.096203 systemd[1]: Detected first boot.
Jul 7 00:06:21.096220 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:06:21.096232 zram_generator::config[1131]: No configuration found.
Jul 7 00:06:21.096245 kernel: Guest personality initialized and is inactive
Jul 7 00:06:21.096257 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 7 00:06:21.096273 kernel: Initialized host personality
Jul 7 00:06:21.096285 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 00:06:21.096296 systemd[1]: Populated /etc with preset unit settings.
Jul 7 00:06:21.096311 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 00:06:21.096323 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 00:06:21.096335 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 00:06:21.096347 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 00:06:21.096360 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 00:06:21.096372 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 00:06:21.096389 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 00:06:21.096401 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 00:06:21.096413 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 00:06:21.096429 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 00:06:21.096441 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 00:06:21.096453 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 00:06:21.096466 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:06:21.096478 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:06:21.096490 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 00:06:21.096507 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 00:06:21.096520 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 00:06:21.096532 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:06:21.096545 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 00:06:21.096557 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:06:21.096569 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:06:21.096582 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 00:06:21.096598 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 00:06:21.096611 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:06:21.096623 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 00:06:21.096635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:06:21.096647 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:06:21.096659 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:06:21.096671 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:06:21.096683 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 00:06:21.096695 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 00:06:21.096707 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 00:06:21.096724 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:06:21.096757 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:06:21.096772 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:06:21.096783 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 00:06:21.096796 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 00:06:21.096808 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 00:06:21.096820 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 00:06:21.096833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:06:21.096845 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 00:06:21.096862 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 00:06:21.096874 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 00:06:21.096887 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 00:06:21.096899 systemd[1]: Reached target machines.target - Containers.
Jul 7 00:06:21.096911 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 00:06:21.096923 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:06:21.096935 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:06:21.096947 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 00:06:21.096964 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:06:21.096976 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 00:06:21.096988 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:06:21.097000 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 00:06:21.097012 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:06:21.097030 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 00:06:21.097047 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 00:06:21.097060 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 00:06:21.097078 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 00:06:21.097092 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 00:06:21.097108 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 00:06:21.097124 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:06:21.097137 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:06:21.097148 kernel: loop: module loaded
Jul 7 00:06:21.097167 kernel: fuse: init (API version 7.41)
Jul 7 00:06:21.097179 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:06:21.097191 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 00:06:21.097210 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 00:06:21.097226 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:06:21.097241 kernel: ACPI: bus type drm_connector registered
Jul 7 00:06:21.097255 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 00:06:21.097269 systemd[1]: Stopped verity-setup.service.
Jul 7 00:06:21.097293 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:06:21.097310 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 00:06:21.097327 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 00:06:21.097344 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 00:06:21.097363 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 00:06:21.097418 systemd-journald[1195]: Collecting audit messages is disabled.
Jul 7 00:06:21.097448 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 00:06:21.097465 systemd-journald[1195]: Journal started
Jul 7 00:06:21.097496 systemd-journald[1195]: Runtime Journal (/run/log/journal/7250120b3dab4205bc47f624438d892c) is 6M, max 48.6M, 42.5M free.
Jul 7 00:06:20.827792 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 00:06:20.854965 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 00:06:20.855545 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 00:06:21.100271 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:06:21.101112 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 00:06:21.102383 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:06:21.103968 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 00:06:21.104203 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 00:06:21.105804 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:06:21.106102 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:06:21.107546 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 00:06:21.107976 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 00:06:21.109358 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:06:21.109573 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:06:21.111395 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 00:06:21.111729 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 00:06:21.113333 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:06:21.113662 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:06:21.115427 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:06:21.117054 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:06:21.118795 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 00:06:21.120515 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 00:06:21.141987 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:06:21.145513 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 00:06:21.148306 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 00:06:21.149752 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 00:06:21.149791 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:06:21.152206 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 00:06:21.165855 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 00:06:21.167176 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:06:21.327384 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 00:06:21.329775 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 00:06:21.331039 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:06:21.333217 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 00:06:21.334558 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:06:21.340044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:06:21.345291 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 00:06:21.346179 systemd-journald[1195]: Time spent on flushing to /var/log/journal/7250120b3dab4205bc47f624438d892c is 20.393ms for 973 entries.
Jul 7 00:06:21.346179 systemd-journald[1195]: System Journal (/var/log/journal/7250120b3dab4205bc47f624438d892c) is 8M, max 195.6M, 187.6M free.
Jul 7 00:06:21.788217 systemd-journald[1195]: Received client request to flush runtime journal.
Jul 7 00:06:21.788271 kernel: loop0: detected capacity change from 0 to 221472
Jul 7 00:06:21.788290 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 00:06:21.788307 kernel: loop1: detected capacity change from 0 to 113872
Jul 7 00:06:21.788323 kernel: loop2: detected capacity change from 0 to 146240
Jul 7 00:06:21.788340 kernel: loop3: detected capacity change from 0 to 221472
Jul 7 00:06:21.788356 kernel: loop4: detected capacity change from 0 to 113872
Jul 7 00:06:21.788374 kernel: loop5: detected capacity change from 0 to 146240
Jul 7 00:06:21.349406 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:06:21.355031 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:06:21.357481 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 00:06:21.359973 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 00:06:21.383471 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:06:21.397220 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Jul 7 00:06:21.397238 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Jul 7 00:06:21.402843 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:06:21.759947 (sd-merge)[1248]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 7 00:06:21.760547 (sd-merge)[1248]: Merged extensions into '/usr'.
Jul 7 00:06:21.764024 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 00:06:21.766438 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 00:06:21.770579 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 00:06:21.775952 systemd[1]: Reload requested from client PID 1228 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 00:06:21.775966 systemd[1]: Reloading...
Jul 7 00:06:21.845192 zram_generator::config[1287]: No configuration found.
Jul 7 00:06:22.206446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:06:22.290192 systemd[1]: Reloading finished in 513 ms.
Jul 7 00:06:22.320697 ldconfig[1223]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 00:06:22.322114 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 00:06:22.323981 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 00:06:22.341588 systemd[1]: Starting ensure-sysext.service...
Jul 7 00:06:22.361725 systemd[1]: Reload requested from client PID 1331 ('systemctl') (unit ensure-sysext.service)...
Jul 7 00:06:22.361763 systemd[1]: Reloading...
Jul 7 00:06:22.431943 zram_generator::config[1357]: No configuration found.
Jul 7 00:06:22.576338 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:06:22.658123 systemd[1]: Reloading finished in 295 ms.
Jul 7 00:06:22.675056 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 00:06:22.676809 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 00:06:22.703938 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 00:06:22.716899 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:06:22.717107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:06:22.724019 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:06:22.728920 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:06:22.731100 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:06:22.732262 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:06:22.732483 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 00:06:22.732629 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:06:22.733857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:06:22.734359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:06:22.738430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:06:22.738680 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:06:22.747450 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:06:22.747702 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:06:22.755147 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:06:22.755360 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:06:22.757008 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:06:22.759336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:06:22.767877 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:06:22.768970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:06:22.769186 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 00:06:22.769384 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:06:22.770800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:06:22.771053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:06:22.772715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:06:22.772992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:06:22.777695 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:06:22.777970 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:06:22.783755 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:06:22.784070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:06:22.785595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:06:22.840607 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:06:22.855241 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:06:22.857640 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:06:22.858764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:06:22.858911 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 7 00:06:22.859134 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:06:22.860460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:06:22.860683 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:06:22.862238 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:06:22.862458 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:06:22.871025 systemd[1]: Finished ensure-sysext.service. Jul 7 00:06:22.874146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:06:22.874418 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:06:22.876151 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:06:22.876386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:06:22.881329 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:06:22.881422 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:06:23.032117 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:06:23.033213 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 00:06:23.035293 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:06:23.040627 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:06:23.043124 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:06:23.075397 systemd-tmpfiles[1424]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jul 7 00:06:23.075439 systemd-tmpfiles[1424]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:06:23.075686 systemd-tmpfiles[1424]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 00:06:23.075955 systemd-tmpfiles[1424]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:06:23.076626 systemd-tmpfiles[1423]: ACLs are not supported, ignoring. Jul 7 00:06:23.076643 systemd-tmpfiles[1423]: ACLs are not supported, ignoring. Jul 7 00:06:23.076768 systemd-tmpfiles[1424]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:06:23.077007 systemd-tmpfiles[1424]: ACLs are not supported, ignoring. Jul 7 00:06:23.077080 systemd-tmpfiles[1424]: ACLs are not supported, ignoring. Jul 7 00:06:23.081443 systemd-tmpfiles[1424]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:06:23.081460 systemd-tmpfiles[1424]: Skipping /boot Jul 7 00:06:23.084033 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:06:23.099151 systemd-tmpfiles[1424]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:06:23.099168 systemd-tmpfiles[1424]: Skipping /boot Jul 7 00:06:23.295322 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:06:23.298561 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:06:23.301478 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:06:23.312221 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:06:23.317661 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:06:23.322949 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jul 7 00:06:23.327613 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:06:23.332298 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:06:23.337147 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:06:23.399778 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:06:23.401991 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:06:23.421157 augenrules[1458]: No rules Jul 7 00:06:23.423583 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:06:23.424055 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:06:23.537592 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:06:23.546890 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:06:23.558268 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:06:23.562455 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:06:23.578520 systemd-udevd[1443]: Using default interface naming scheme 'v255'. Jul 7 00:06:23.594424 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:06:23.598586 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:06:23.601642 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:06:23.614726 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:06:23.699854 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 00:06:23.834249 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jul 7 00:06:23.836238 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:06:23.859852 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:06:23.862782 systemd-networkd[1493]: lo: Link UP Jul 7 00:06:23.862795 systemd-networkd[1493]: lo: Gained carrier Jul 7 00:06:23.865451 systemd-networkd[1493]: Enumeration completed Jul 7 00:06:23.865545 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:06:23.868916 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:06:23.868928 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:06:23.869589 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:06:23.869819 systemd-networkd[1493]: eth0: Link UP Jul 7 00:06:23.870123 systemd-networkd[1493]: eth0: Gained carrier Jul 7 00:06:23.870142 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:06:23.874014 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:06:23.881809 systemd-networkd[1493]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 00:06:23.884825 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection. Jul 7 00:06:25.343838 systemd-timesyncd[1434]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 7 00:06:25.343879 systemd-timesyncd[1434]: Initial clock synchronization to Mon 2025-07-07 00:06:25.343756 UTC. Jul 7 00:06:25.346567 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 00:06:25.366927 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jul 7 00:06:25.367873 systemd-resolved[1433]: Positive Trust Anchors: Jul 7 00:06:25.367890 systemd-resolved[1433]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:06:25.367922 systemd-resolved[1433]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:06:25.371609 systemd-resolved[1433]: Defaulting to hostname 'linux'. Jul 7 00:06:25.375318 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 00:06:25.377262 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:06:25.385168 systemd[1]: Reached target network.target - Network. Jul 7 00:06:25.391055 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 00:06:25.391962 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:06:25.393684 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:06:25.395120 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:06:25.397799 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 7 00:06:25.399757 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 00:06:25.400442 kernel: ACPI: button: Power Button [PWRF] Jul 7 00:06:25.398767 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jul 7 00:06:25.400390 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 00:06:25.402098 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:06:25.404097 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:06:25.405728 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:06:25.407590 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:06:25.407623 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:06:25.408870 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:06:25.412310 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:06:25.415836 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:06:25.426460 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:06:25.429233 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 00:06:25.430875 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 00:06:25.442248 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:06:25.444582 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:06:25.447137 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 00:06:25.449818 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:06:25.452825 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:06:25.455321 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:06:25.456534 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 7 00:06:25.456652 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:06:25.460088 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:06:25.465317 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:06:25.469301 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:06:25.482237 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:06:25.486209 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:06:25.487577 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:06:25.489248 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 00:06:25.493821 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:06:25.496401 jq[1542]: false Jul 7 00:06:25.498391 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:06:25.508237 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:06:25.512539 extend-filesystems[1543]: Found /dev/vda6 Jul 7 00:06:25.513771 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:06:25.520229 extend-filesystems[1543]: Found /dev/vda9 Jul 7 00:06:25.523885 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Refreshing passwd entry cache Jul 7 00:06:25.522942 oslogin_cache_refresh[1544]: Refreshing passwd entry cache Jul 7 00:06:25.527203 extend-filesystems[1543]: Checking size of /dev/vda9 Jul 7 00:06:25.526445 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:06:25.560570 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 7 00:06:25.567757 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Failure getting users, quitting Jul 7 00:06:25.567757 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:06:25.567757 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Refreshing group entry cache Jul 7 00:06:25.567757 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Failure getting groups, quitting Jul 7 00:06:25.567757 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:06:25.561255 oslogin_cache_refresh[1544]: Failure getting users, quitting Jul 7 00:06:25.561282 oslogin_cache_refresh[1544]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:06:25.561338 oslogin_cache_refresh[1544]: Refreshing group entry cache Jul 7 00:06:25.567006 oslogin_cache_refresh[1544]: Failure getting groups, quitting Jul 7 00:06:25.567034 oslogin_cache_refresh[1544]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:06:25.568495 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:06:25.572204 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:06:25.578910 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:06:25.583651 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:06:25.585636 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:06:25.585881 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:06:25.586211 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Jul 7 00:06:25.586464 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 00:06:25.588106 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:06:25.588403 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:06:25.593321 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:06:25.593583 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:06:25.602985 jq[1565]: true Jul 7 00:06:25.615744 update_engine[1563]: I20250707 00:06:25.615655 1563 main.cc:92] Flatcar Update Engine starting Jul 7 00:06:25.626152 extend-filesystems[1543]: Resized partition /dev/vda9 Jul 7 00:06:25.640053 jq[1577]: true Jul 7 00:06:25.648546 (ntainerd)[1575]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:06:25.650636 tar[1569]: linux-amd64/helm Jul 7 00:06:25.658827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:06:25.712381 extend-filesystems[1582]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 00:06:25.751431 kernel: kvm_amd: TSC scaling supported Jul 7 00:06:25.751497 kernel: kvm_amd: Nested Virtualization enabled Jul 7 00:06:25.751511 kernel: kvm_amd: Nested Paging enabled Jul 7 00:06:25.752483 kernel: kvm_amd: LBR virtualization supported Jul 7 00:06:25.752508 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 7 00:06:25.753081 kernel: kvm_amd: Virtual GIF supported Jul 7 00:06:25.755730 systemd-logind[1560]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 00:06:25.756319 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:06:25.759565 systemd-logind[1560]: New seat seat0. Jul 7 00:06:25.765149 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 7 00:06:25.831043 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 00:06:25.951368 dbus-daemon[1540]: [system] SELinux support is enabled Jul 7 00:06:25.951582 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:06:25.957943 kernel: EDAC MC: Ver: 3.0.0 Jul 7 00:06:25.957351 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:06:25.957379 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:06:25.962517 dbus-daemon[1540]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 00:06:25.978629 update_engine[1563]: I20250707 00:06:25.963415 1563 update_check_scheduler.cc:74] Next update check in 4m53s Jul 7 00:06:25.960493 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:06:25.960515 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:06:25.963885 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:06:25.970369 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:06:25.997064 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 00:06:26.009133 locksmithd[1607]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:06:26.020618 extend-filesystems[1582]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 00:06:26.020618 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 00:06:26.020618 extend-filesystems[1582]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jul 7 00:06:26.024835 extend-filesystems[1543]: Resized filesystem in /dev/vda9 Jul 7 00:06:26.026459 bash[1603]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:06:26.028540 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:06:26.030112 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:06:26.154966 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:06:26.157898 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 00:06:26.188430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:06:26.203836 containerd[1575]: time="2025-07-07T00:06:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:06:26.207277 containerd[1575]: time="2025-07-07T00:06:26.207234152Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:06:26.221034 containerd[1575]: time="2025-07-07T00:06:26.220674677Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.353µs" Jul 7 00:06:26.221034 containerd[1575]: time="2025-07-07T00:06:26.220718409Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:06:26.221034 containerd[1575]: time="2025-07-07T00:06:26.220743246Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 00:06:26.221034 containerd[1575]: time="2025-07-07T00:06:26.220971254Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:06:26.221034 containerd[1575]: time="2025-07-07T00:06:26.220986603Z" level=info msg="loading plugin" 
id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:06:26.221225 containerd[1575]: time="2025-07-07T00:06:26.221209080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:06:26.221369 containerd[1575]: time="2025-07-07T00:06:26.221351327Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:06:26.221433 containerd[1575]: time="2025-07-07T00:06:26.221419715Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:06:26.221795 containerd[1575]: time="2025-07-07T00:06:26.221774280Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:06:26.221853 containerd[1575]: time="2025-07-07T00:06:26.221840234Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:06:26.221908 containerd[1575]: time="2025-07-07T00:06:26.221894796Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:06:26.221953 containerd[1575]: time="2025-07-07T00:06:26.221942235Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:06:26.222141 containerd[1575]: time="2025-07-07T00:06:26.222119959Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:06:26.222484 containerd[1575]: time="2025-07-07T00:06:26.222464916Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 
00:06:26.222564 containerd[1575]: time="2025-07-07T00:06:26.222548914Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:06:26.222612 containerd[1575]: time="2025-07-07T00:06:26.222600340Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:06:26.222741 containerd[1575]: time="2025-07-07T00:06:26.222718251Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 00:06:26.223150 containerd[1575]: time="2025-07-07T00:06:26.223089528Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:06:26.223298 containerd[1575]: time="2025-07-07T00:06:26.223266319Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:06:26.232090 containerd[1575]: time="2025-07-07T00:06:26.231987982Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:06:26.232090 containerd[1575]: time="2025-07-07T00:06:26.232057783Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:06:26.232090 containerd[1575]: time="2025-07-07T00:06:26.232072581Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:06:26.232090 containerd[1575]: time="2025-07-07T00:06:26.232086497Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:06:26.232199 containerd[1575]: time="2025-07-07T00:06:26.232098599Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:06:26.232199 containerd[1575]: time="2025-07-07T00:06:26.232108939Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service 
type=io.containerd.service.v1 Jul 7 00:06:26.232199 containerd[1575]: time="2025-07-07T00:06:26.232121142Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 00:06:26.232199 containerd[1575]: time="2025-07-07T00:06:26.232132323Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:06:26.232199 containerd[1575]: time="2025-07-07T00:06:26.232143023Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:06:26.232199 containerd[1575]: time="2025-07-07T00:06:26.232152330Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 00:06:26.232199 containerd[1575]: time="2025-07-07T00:06:26.232160936Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:06:26.232199 containerd[1575]: time="2025-07-07T00:06:26.232173199Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:06:26.232353 containerd[1575]: time="2025-07-07T00:06:26.232307241Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:06:26.232353 containerd[1575]: time="2025-07-07T00:06:26.232337027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:06:26.232353 containerd[1575]: time="2025-07-07T00:06:26.232350081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:06:26.232413 containerd[1575]: time="2025-07-07T00:06:26.232373104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 00:06:26.232413 containerd[1575]: time="2025-07-07T00:06:26.232384716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 
00:06:26.232413 containerd[1575]: time="2025-07-07T00:06:26.232394955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:06:26.232413 containerd[1575]: time="2025-07-07T00:06:26.232405676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:06:26.232413 containerd[1575]: time="2025-07-07T00:06:26.232415484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 00:06:26.232543 containerd[1575]: time="2025-07-07T00:06:26.232426996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 00:06:26.232543 containerd[1575]: time="2025-07-07T00:06:26.232436834Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:06:26.232543 containerd[1575]: time="2025-07-07T00:06:26.232446312Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:06:26.232543 containerd[1575]: time="2025-07-07T00:06:26.232517736Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:06:26.232543 containerd[1575]: time="2025-07-07T00:06:26.232531852Z" level=info msg="Start snapshots syncer" Jul 7 00:06:26.232642 containerd[1575]: time="2025-07-07T00:06:26.232559354Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:06:26.232846 containerd[1575]: time="2025-07-07T00:06:26.232790177Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 00:06:26.233093 containerd[1575]: time="2025-07-07T00:06:26.232865438Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:06:26.234175 containerd[1575]: time="2025-07-07T00:06:26.234141943Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:06:26.234294 containerd[1575]: time="2025-07-07T00:06:26.234262609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:06:26.234348 containerd[1575]: time="2025-07-07T00:06:26.234310920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:06:26.234348 containerd[1575]: time="2025-07-07T00:06:26.234336618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:06:26.234387 containerd[1575]: time="2025-07-07T00:06:26.234368268Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:06:26.234387 containerd[1575]: time="2025-07-07T00:06:26.234384488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:06:26.234437 containerd[1575]: time="2025-07-07T00:06:26.234395569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:06:26.234437 containerd[1575]: time="2025-07-07T00:06:26.234406529Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:06:26.234437 containerd[1575]: time="2025-07-07T00:06:26.234429973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:06:26.234506 containerd[1575]: time="2025-07-07T00:06:26.234440383Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:06:26.234506 containerd[1575]: time="2025-07-07T00:06:26.234451404Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:06:26.235541 containerd[1575]: time="2025-07-07T00:06:26.235512254Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:06:26.235575 containerd[1575]: time="2025-07-07T00:06:26.235546227Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:06:26.235575 containerd[1575]: time="2025-07-07T00:06:26.235556948Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:06:26.235575 containerd[1575]: time="2025-07-07T00:06:26.235566185Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:06:26.235575 containerd[1575]: time="2025-07-07T00:06:26.235573759Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 00:06:26.235661 containerd[1575]: time="2025-07-07T00:06:26.235592965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:06:26.235661 containerd[1575]: time="2025-07-07T00:06:26.235605869Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:06:26.235661 containerd[1575]: time="2025-07-07T00:06:26.235628612Z" level=info msg="runtime interface created" Jul 7 00:06:26.235661 containerd[1575]: time="2025-07-07T00:06:26.235634383Z" level=info msg="created NRI interface" Jul 7 00:06:26.235661 containerd[1575]: time="2025-07-07T00:06:26.235641877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:06:26.235661 containerd[1575]: time="2025-07-07T00:06:26.235651916Z" level=info msg="Connect containerd service" Jul 7 00:06:26.235984 containerd[1575]: time="2025-07-07T00:06:26.235673817Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:06:26.238254 containerd[1575]: 
time="2025-07-07T00:06:26.236625041Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:06:26.392729 tar[1569]: linux-amd64/LICENSE Jul 7 00:06:26.393054 tar[1569]: linux-amd64/README.md Jul 7 00:06:26.438345 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:06:26.482177 containerd[1575]: time="2025-07-07T00:06:26.481495616Z" level=info msg="Start subscribing containerd event" Jul 7 00:06:26.482292 containerd[1575]: time="2025-07-07T00:06:26.481597427Z" level=info msg="Start recovering state" Jul 7 00:06:26.482406 containerd[1575]: time="2025-07-07T00:06:26.482378553Z" level=info msg="Start event monitor" Jul 7 00:06:26.482523 containerd[1575]: time="2025-07-07T00:06:26.482478891Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:06:26.482523 containerd[1575]: time="2025-07-07T00:06:26.482493548Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:06:26.482523 containerd[1575]: time="2025-07-07T00:06:26.482500251Z" level=info msg="Start streaming server" Jul 7 00:06:26.482523 containerd[1575]: time="2025-07-07T00:06:26.482542931Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 00:06:26.482523 containerd[1575]: time="2025-07-07T00:06:26.482554052Z" level=info msg="runtime interface starting up..." Jul 7 00:06:26.482523 containerd[1575]: time="2025-07-07T00:06:26.482568038Z" level=info msg="starting plugins..." Jul 7 00:06:26.482879 containerd[1575]: time="2025-07-07T00:06:26.482575462Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 7 00:06:26.482879 containerd[1575]: time="2025-07-07T00:06:26.482590991Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:06:26.482879 containerd[1575]: time="2025-07-07T00:06:26.482768895Z" level=info msg="containerd successfully booted in 0.281515s" Jul 7 00:06:26.483161 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:06:26.531266 systemd-networkd[1493]: eth0: Gained IPv6LL Jul 7 00:06:26.535039 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:06:26.537039 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:06:26.540040 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 00:06:26.542849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:06:26.548245 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:06:26.594747 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:06:26.600133 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:06:26.613091 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 00:06:26.613504 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 00:06:26.629809 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:06:26.634898 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:06:26.635990 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:06:26.663179 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:06:26.663548 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:06:26.667000 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jul 7 00:06:26.707324 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:06:26.710555 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:06:26.712904 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:06:26.714519 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:06:27.304916 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:06:27.308120 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:48090.service - OpenSSH per-connection server daemon (10.0.0.1:48090). Jul 7 00:06:27.382648 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 48090 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:06:27.384830 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:06:27.392351 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:06:27.394742 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:06:27.403861 systemd-logind[1560]: New session 1 of user core. Jul 7 00:06:27.461709 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:06:27.465731 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:06:27.486118 (systemd)[1682]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:06:27.489109 systemd-logind[1560]: New session c1 of user core. Jul 7 00:06:27.840210 systemd[1682]: Queued start job for default target default.target. Jul 7 00:06:27.855742 systemd[1682]: Created slice app.slice - User Application Slice. Jul 7 00:06:27.855778 systemd[1682]: Reached target paths.target - Paths. Jul 7 00:06:27.855834 systemd[1682]: Reached target timers.target - Timers. Jul 7 00:06:27.857907 systemd[1682]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jul 7 00:06:27.873926 systemd[1682]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:06:27.874110 systemd[1682]: Reached target sockets.target - Sockets. Jul 7 00:06:27.874165 systemd[1682]: Reached target basic.target - Basic System. Jul 7 00:06:27.874212 systemd[1682]: Reached target default.target - Main User Target. Jul 7 00:06:27.874258 systemd[1682]: Startup finished in 374ms. Jul 7 00:06:27.874791 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:06:27.885234 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:06:27.957064 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:48096.service - OpenSSH per-connection server daemon (10.0.0.1:48096). Jul 7 00:06:28.088443 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 48096 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:06:28.090269 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:06:28.096358 systemd-logind[1560]: New session 2 of user core. Jul 7 00:06:28.108162 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:06:28.162388 sshd[1695]: Connection closed by 10.0.0.1 port 48096 Jul 7 00:06:28.162795 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Jul 7 00:06:28.167793 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:48096.service: Deactivated successfully. Jul 7 00:06:28.170101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:06:28.171711 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 00:06:28.173516 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit. Jul 7 00:06:28.175458 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 7 00:06:28.194438 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:06:28.195554 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:48112.service - OpenSSH per-connection server daemon (10.0.0.1:48112). Jul 7 00:06:28.221875 systemd[1]: Startup finished in 3.021s (kernel) + 8.576s (initrd) + 6.612s (userspace) = 18.210s. Jul 7 00:06:28.225249 systemd-logind[1560]: Removed session 2. Jul 7 00:06:28.270852 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 48112 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:06:28.272451 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:06:28.279538 systemd-logind[1560]: New session 3 of user core. Jul 7 00:06:28.280034 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:06:28.340040 sshd[1711]: Connection closed by 10.0.0.1 port 48112 Jul 7 00:06:28.338664 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Jul 7 00:06:28.344478 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:48112.service: Deactivated successfully. Jul 7 00:06:28.348146 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 00:06:28.349066 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. Jul 7 00:06:28.351941 systemd-logind[1560]: Removed session 3. Jul 7 00:06:29.100749 kubelet[1703]: E0707 00:06:29.100657 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:06:29.105928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:06:29.106177 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 7 00:06:29.106665 systemd[1]: kubelet.service: Consumed 2.256s CPU time, 265.3M memory peak. Jul 7 00:06:38.363235 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:35730.service - OpenSSH per-connection server daemon (10.0.0.1:35730). Jul 7 00:06:38.412877 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 35730 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:06:38.414504 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:06:38.419028 systemd-logind[1560]: New session 4 of user core. Jul 7 00:06:38.430144 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:06:38.484956 sshd[1728]: Connection closed by 10.0.0.1 port 35730 Jul 7 00:06:38.485495 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Jul 7 00:06:38.498169 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:35730.service: Deactivated successfully. Jul 7 00:06:38.500536 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:06:38.501448 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:06:38.504771 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:35740.service - OpenSSH per-connection server daemon (10.0.0.1:35740). Jul 7 00:06:38.505416 systemd-logind[1560]: Removed session 4. Jul 7 00:06:38.549996 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 35740 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:06:38.551517 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:06:38.556319 systemd-logind[1560]: New session 5 of user core. Jul 7 00:06:38.573345 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 7 00:06:38.624118 sshd[1736]: Connection closed by 10.0.0.1 port 35740 Jul 7 00:06:38.624394 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Jul 7 00:06:38.634490 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:35740.service: Deactivated successfully. Jul 7 00:06:38.636880 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:06:38.637728 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:06:38.641146 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:35748.service - OpenSSH per-connection server daemon (10.0.0.1:35748). Jul 7 00:06:38.641763 systemd-logind[1560]: Removed session 5. Jul 7 00:06:38.702401 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 35748 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:06:38.704390 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:06:38.709615 systemd-logind[1560]: New session 6 of user core. Jul 7 00:06:38.723157 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:06:38.779574 sshd[1744]: Connection closed by 10.0.0.1 port 35748 Jul 7 00:06:38.780104 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Jul 7 00:06:38.799393 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:35748.service: Deactivated successfully. Jul 7 00:06:38.801649 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:06:38.802448 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:06:38.806417 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:35754.service - OpenSSH per-connection server daemon (10.0.0.1:35754). Jul 7 00:06:38.806989 systemd-logind[1560]: Removed session 6. 
Jul 7 00:06:38.864940 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 35754 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:06:38.866843 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:06:38.871912 systemd-logind[1560]: New session 7 of user core. Jul 7 00:06:38.889192 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:06:38.951254 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:06:38.951630 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:06:38.968195 sudo[1753]: pam_unix(sudo:session): session closed for user root Jul 7 00:06:38.970209 sshd[1752]: Connection closed by 10.0.0.1 port 35754 Jul 7 00:06:38.970636 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Jul 7 00:06:38.984407 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:35754.service: Deactivated successfully. Jul 7 00:06:38.986688 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:06:38.987572 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:06:38.991377 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:35770.service - OpenSSH per-connection server daemon (10.0.0.1:35770). Jul 7 00:06:38.992173 systemd-logind[1560]: Removed session 7. Jul 7 00:06:39.053867 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 35770 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:06:39.055910 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:06:39.061113 systemd-logind[1560]: New session 8 of user core. Jul 7 00:06:39.071282 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 7 00:06:39.127294 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:06:39.127637 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:06:39.128768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:06:39.130507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:06:39.274428 sudo[1763]: pam_unix(sudo:session): session closed for user root Jul 7 00:06:39.282809 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 00:06:39.283190 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:06:39.294334 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:06:39.351394 augenrules[1788]: No rules Jul 7 00:06:39.353384 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:06:39.353694 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:06:39.354903 sudo[1762]: pam_unix(sudo:session): session closed for user root Jul 7 00:06:39.356514 sshd[1761]: Connection closed by 10.0.0.1 port 35770 Jul 7 00:06:39.356875 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jul 7 00:06:39.372148 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:35770.service: Deactivated successfully. Jul 7 00:06:39.374223 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:06:39.374988 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:06:39.378146 systemd[1]: Started sshd@8-10.0.0.49:22-10.0.0.1:35786.service - OpenSSH per-connection server daemon (10.0.0.1:35786). Jul 7 00:06:39.378739 systemd-logind[1560]: Removed session 8. 
Jul 7 00:06:39.430809 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 35786 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:06:39.432495 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:06:39.437455 systemd-logind[1560]: New session 9 of user core. Jul 7 00:06:39.445273 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:06:39.500058 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:06:39.500427 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:06:39.555974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:06:39.567530 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:06:39.628203 kubelet[1815]: E0707 00:06:39.628102 1815 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:06:39.636319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:06:39.636801 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:06:39.637415 systemd[1]: kubelet.service: Consumed 318ms CPU time, 108M memory peak. Jul 7 00:06:40.083296 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 7 00:06:40.103652 (dockerd)[1834]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:06:40.635109 dockerd[1834]: time="2025-07-07T00:06:40.634973952Z" level=info msg="Starting up" Jul 7 00:06:40.636799 dockerd[1834]: time="2025-07-07T00:06:40.636506387Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 00:06:41.208189 dockerd[1834]: time="2025-07-07T00:06:41.208099306Z" level=info msg="Loading containers: start." Jul 7 00:06:41.221046 kernel: Initializing XFRM netlink socket Jul 7 00:06:41.571808 systemd-networkd[1493]: docker0: Link UP Jul 7 00:06:41.576528 dockerd[1834]: time="2025-07-07T00:06:41.576475255Z" level=info msg="Loading containers: done." Jul 7 00:06:41.600701 dockerd[1834]: time="2025-07-07T00:06:41.600624591Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:06:41.600891 dockerd[1834]: time="2025-07-07T00:06:41.600746420Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 00:06:41.600891 dockerd[1834]: time="2025-07-07T00:06:41.600871505Z" level=info msg="Initializing buildkit" Jul 7 00:06:41.637901 dockerd[1834]: time="2025-07-07T00:06:41.637814783Z" level=info msg="Completed buildkit initialization" Jul 7 00:06:41.646258 dockerd[1834]: time="2025-07-07T00:06:41.646185938Z" level=info msg="Daemon has completed initialization" Jul 7 00:06:41.646958 dockerd[1834]: time="2025-07-07T00:06:41.646327613Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:06:41.646528 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jul 7 00:06:42.516539 containerd[1575]: time="2025-07-07T00:06:42.516491994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 7 00:06:43.412131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740076218.mount: Deactivated successfully. Jul 7 00:06:44.577209 containerd[1575]: time="2025-07-07T00:06:44.577150456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:44.577890 containerd[1575]: time="2025-07-07T00:06:44.577854207Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 7 00:06:44.578967 containerd[1575]: time="2025-07-07T00:06:44.578914055Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:44.581345 containerd[1575]: time="2025-07-07T00:06:44.581296545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:44.582247 containerd[1575]: time="2025-07-07T00:06:44.582217853Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.065685654s" Jul 7 00:06:44.582309 containerd[1575]: time="2025-07-07T00:06:44.582257077Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 7 00:06:44.583008 containerd[1575]: 
time="2025-07-07T00:06:44.582971397Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 7 00:06:46.115039 containerd[1575]: time="2025-07-07T00:06:46.114952603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:46.122583 containerd[1575]: time="2025-07-07T00:06:46.122524168Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 7 00:06:46.127752 containerd[1575]: time="2025-07-07T00:06:46.127684689Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:46.147730 containerd[1575]: time="2025-07-07T00:06:46.147667899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:46.148662 containerd[1575]: time="2025-07-07T00:06:46.148606059Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.565557457s" Jul 7 00:06:46.148662 containerd[1575]: time="2025-07-07T00:06:46.148651074Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 7 00:06:46.149229 containerd[1575]: time="2025-07-07T00:06:46.149183873Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 7 
00:06:47.467382 containerd[1575]: time="2025-07-07T00:06:47.467316594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:47.468158 containerd[1575]: time="2025-07-07T00:06:47.468101467Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 7 00:06:47.469264 containerd[1575]: time="2025-07-07T00:06:47.469227840Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:47.471826 containerd[1575]: time="2025-07-07T00:06:47.471789636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:47.472784 containerd[1575]: time="2025-07-07T00:06:47.472746812Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.323518735s" Jul 7 00:06:47.472832 containerd[1575]: time="2025-07-07T00:06:47.472784833Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 7 00:06:47.473297 containerd[1575]: time="2025-07-07T00:06:47.473259173Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 00:06:48.873279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount195099798.mount: Deactivated successfully. 
Jul 7 00:06:49.507970 containerd[1575]: time="2025-07-07T00:06:49.507893833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:49.509732 containerd[1575]: time="2025-07-07T00:06:49.509660567Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 7 00:06:49.511126 containerd[1575]: time="2025-07-07T00:06:49.511086673Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:49.512815 containerd[1575]: time="2025-07-07T00:06:49.512770452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:49.513283 containerd[1575]: time="2025-07-07T00:06:49.513248108Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.039956223s" Jul 7 00:06:49.513283 containerd[1575]: time="2025-07-07T00:06:49.513278906Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 7 00:06:49.513923 containerd[1575]: time="2025-07-07T00:06:49.513862390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:06:49.804108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 00:06:49.805906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 00:06:50.069416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:06:50.075347 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:06:50.179385 kubelet[2125]: E0707 00:06:50.179293 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:06:50.184061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:06:50.184266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:06:50.184670 systemd[1]: kubelet.service: Consumed 270ms CPU time, 108.9M memory peak. Jul 7 00:06:50.313777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969528787.mount: Deactivated successfully. 
Jul 7 00:06:51.134460 containerd[1575]: time="2025-07-07T00:06:51.134395216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:51.135169 containerd[1575]: time="2025-07-07T00:06:51.135109676Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 7 00:06:51.136445 containerd[1575]: time="2025-07-07T00:06:51.136406429Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:51.138943 containerd[1575]: time="2025-07-07T00:06:51.138903423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:51.140102 containerd[1575]: time="2025-07-07T00:06:51.140057889Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.62616414s" Jul 7 00:06:51.140102 containerd[1575]: time="2025-07-07T00:06:51.140096932Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 00:06:51.140693 containerd[1575]: time="2025-07-07T00:06:51.140667232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:06:51.807590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440988083.mount: Deactivated successfully. 
Jul 7 00:06:51.814415 containerd[1575]: time="2025-07-07T00:06:51.814363084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:06:51.815215 containerd[1575]: time="2025-07-07T00:06:51.815162143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 00:06:51.816500 containerd[1575]: time="2025-07-07T00:06:51.816471219Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:06:51.818590 containerd[1575]: time="2025-07-07T00:06:51.818532135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:06:51.819129 containerd[1575]: time="2025-07-07T00:06:51.819097285Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 678.40197ms" Jul 7 00:06:51.819197 containerd[1575]: time="2025-07-07T00:06:51.819130167Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:06:51.819667 containerd[1575]: time="2025-07-07T00:06:51.819637238Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 00:06:52.415946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount407692501.mount: Deactivated 
successfully. Jul 7 00:06:54.546248 containerd[1575]: time="2025-07-07T00:06:54.546162504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:54.546902 containerd[1575]: time="2025-07-07T00:06:54.546857058Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 7 00:06:54.548202 containerd[1575]: time="2025-07-07T00:06:54.548172024Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:54.550934 containerd[1575]: time="2025-07-07T00:06:54.550865047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:06:54.552072 containerd[1575]: time="2025-07-07T00:06:54.552032267Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.732344313s" Jul 7 00:06:54.552134 containerd[1575]: time="2025-07-07T00:06:54.552074806Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 7 00:06:56.740464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:06:56.740677 systemd[1]: kubelet.service: Consumed 270ms CPU time, 108.9M memory peak. Jul 7 00:06:56.743170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:06:56.772749 systemd[1]: Reload requested from client PID 2274 ('systemctl') (unit session-9.scope)... 
Jul 7 00:06:56.772765 systemd[1]: Reloading... Jul 7 00:06:56.876052 zram_generator::config[2319]: No configuration found. Jul 7 00:06:57.082893 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:06:57.213948 systemd[1]: Reloading finished in 440 ms. Jul 7 00:06:57.279867 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:06:57.279969 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:06:57.280300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:06:57.280344 systemd[1]: kubelet.service: Consumed 180ms CPU time, 98.3M memory peak. Jul 7 00:06:57.282231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:06:57.480026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:06:57.484172 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:06:57.526943 kubelet[2364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:06:57.526943 kubelet[2364]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 00:06:57.526943 kubelet[2364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 00:06:57.527469 kubelet[2364]: I0707 00:06:57.527041 2364 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:06:57.749667 kubelet[2364]: I0707 00:06:57.749511 2364 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 00:06:57.749667 kubelet[2364]: I0707 00:06:57.749549 2364 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:06:57.749900 kubelet[2364]: I0707 00:06:57.749801 2364 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 00:06:57.777950 kubelet[2364]: E0707 00:06:57.777903 2364 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:06:57.778771 kubelet[2364]: I0707 00:06:57.778738 2364 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:06:57.792477 kubelet[2364]: I0707 00:06:57.792438 2364 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:06:57.801222 kubelet[2364]: I0707 00:06:57.801182 2364 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:06:57.802079 kubelet[2364]: I0707 00:06:57.802049 2364 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 00:06:57.802273 kubelet[2364]: I0707 00:06:57.802237 2364 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:06:57.802459 kubelet[2364]: I0707 00:06:57.802264 2364 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Jul 7 00:06:57.802579 kubelet[2364]: I0707 00:06:57.802481 2364 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:06:57.802579 kubelet[2364]: I0707 00:06:57.802494 2364 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 00:06:57.802687 kubelet[2364]: I0707 00:06:57.802670 2364 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:06:57.806143 kubelet[2364]: I0707 00:06:57.806115 2364 kubelet.go:408] "Attempting to sync node with API server" Jul 7 00:06:57.806143 kubelet[2364]: I0707 00:06:57.806140 2364 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:06:57.806217 kubelet[2364]: I0707 00:06:57.806195 2364 kubelet.go:314] "Adding apiserver pod source" Jul 7 00:06:57.806245 kubelet[2364]: I0707 00:06:57.806222 2364 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:06:57.808625 kubelet[2364]: I0707 00:06:57.808567 2364 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:06:57.809037 kubelet[2364]: W0707 00:06:57.808953 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jul 7 00:06:57.809149 kubelet[2364]: E0707 00:06:57.809127 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:06:57.809208 kubelet[2364]: W0707 00:06:57.809125 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jul 7 00:06:57.809291 kubelet[2364]: E0707 00:06:57.809275 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:06:57.809387 kubelet[2364]: I0707 00:06:57.809223 2364 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:06:57.809556 kubelet[2364]: W0707 00:06:57.809448 2364 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:06:57.811973 kubelet[2364]: I0707 00:06:57.811945 2364 server.go:1274] "Started kubelet" Jul 7 00:06:57.812136 kubelet[2364]: I0707 00:06:57.812101 2364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:06:57.812500 kubelet[2364]: I0707 00:06:57.812364 2364 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:06:57.812640 kubelet[2364]: I0707 00:06:57.812618 2364 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:06:57.813789 kubelet[2364]: I0707 00:06:57.813761 2364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:06:57.813848 kubelet[2364]: I0707 00:06:57.813814 2364 server.go:449] "Adding debug handlers to kubelet server" Jul 7 00:06:57.815590 kubelet[2364]: I0707 00:06:57.815570 2364 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:06:57.817583 kubelet[2364]: E0707 00:06:57.817563 2364 
kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:06:57.817712 kubelet[2364]: E0707 00:06:57.817699 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:06:57.817806 kubelet[2364]: I0707 00:06:57.817793 2364 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 00:06:57.818058 kubelet[2364]: I0707 00:06:57.818043 2364 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 00:06:57.818161 kubelet[2364]: I0707 00:06:57.818138 2364 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:06:57.820664 kubelet[2364]: E0707 00:06:57.818606 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcf662c93c7af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 00:06:57.811908527 +0000 UTC m=+0.323662224,LastTimestamp:2025-07-07 00:06:57.811908527 +0000 UTC m=+0.323662224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 00:06:57.820992 kubelet[2364]: I0707 00:06:57.820946 2364 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:06:57.821222 kubelet[2364]: I0707 00:06:57.821056 2364 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:06:57.821766 
kubelet[2364]: E0707 00:06:57.821699 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="200ms" Jul 7 00:06:57.822034 kubelet[2364]: W0707 00:06:57.819498 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jul 7 00:06:57.822034 kubelet[2364]: E0707 00:06:57.822024 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:06:57.822594 kubelet[2364]: I0707 00:06:57.822556 2364 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:06:57.839856 kubelet[2364]: I0707 00:06:57.839763 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:06:57.842603 kubelet[2364]: I0707 00:06:57.842293 2364 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 00:06:57.842603 kubelet[2364]: I0707 00:06:57.842336 2364 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 00:06:57.842603 kubelet[2364]: I0707 00:06:57.842373 2364 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 00:06:57.842603 kubelet[2364]: E0707 00:06:57.842424 2364 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:06:57.844916 kubelet[2364]: W0707 00:06:57.844867 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jul 7 00:06:57.844972 kubelet[2364]: E0707 00:06:57.844914 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:06:57.846585 kubelet[2364]: I0707 00:06:57.846553 2364 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 00:06:57.846585 kubelet[2364]: I0707 00:06:57.846573 2364 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 00:06:57.846655 kubelet[2364]: I0707 00:06:57.846596 2364 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:06:57.918894 kubelet[2364]: E0707 00:06:57.918819 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:06:57.943145 kubelet[2364]: E0707 00:06:57.943075 2364 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:06:58.019796 kubelet[2364]: E0707 00:06:58.019649 2364 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:06:58.023334 kubelet[2364]: E0707 00:06:58.023295 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="400ms" Jul 7 00:06:58.120763 kubelet[2364]: E0707 00:06:58.120699 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:06:58.143975 kubelet[2364]: E0707 00:06:58.143905 2364 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:06:58.221590 kubelet[2364]: E0707 00:06:58.221504 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:06:58.322458 kubelet[2364]: E0707 00:06:58.322391 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:06:58.423201 kubelet[2364]: E0707 00:06:58.423119 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:06:58.424536 kubelet[2364]: E0707 00:06:58.424494 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="800ms" Jul 7 00:06:58.453110 kubelet[2364]: I0707 00:06:58.453006 2364 policy_none.go:49] "None policy: Start" Jul 7 00:06:58.454083 kubelet[2364]: I0707 00:06:58.454043 2364 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 00:06:58.454083 kubelet[2364]: I0707 00:06:58.454078 2364 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:06:58.461749 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Jul 7 00:06:58.482143 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 00:06:58.486543 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:06:58.500970 kubelet[2364]: I0707 00:06:58.500407 2364 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:06:58.500970 kubelet[2364]: I0707 00:06:58.500735 2364 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:06:58.500970 kubelet[2364]: I0707 00:06:58.500751 2364 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:06:58.501208 kubelet[2364]: I0707 00:06:58.501182 2364 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:06:58.502963 kubelet[2364]: E0707 00:06:58.502896 2364 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 00:06:58.554286 systemd[1]: Created slice kubepods-burstable-podcf38fa416a11620bd3718eefb2913bed.slice - libcontainer container kubepods-burstable-podcf38fa416a11620bd3718eefb2913bed.slice. Jul 7 00:06:58.593746 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. 
Jul 7 00:06:58.602906 kubelet[2364]: I0707 00:06:58.602829 2364 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 00:06:58.603331 kubelet[2364]: E0707 00:06:58.603263 2364 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Jul 7 00:06:58.608703 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 7 00:06:58.625233 kubelet[2364]: I0707 00:06:58.625189 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:06:58.625233 kubelet[2364]: I0707 00:06:58.625232 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:06:58.625360 kubelet[2364]: I0707 00:06:58.625267 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 00:06:58.625360 kubelet[2364]: I0707 00:06:58.625294 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:06:58.625360 kubelet[2364]: I0707 00:06:58.625319 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:06:58.625360 kubelet[2364]: I0707 00:06:58.625342 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf38fa416a11620bd3718eefb2913bed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cf38fa416a11620bd3718eefb2913bed\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:06:58.625509 kubelet[2364]: I0707 00:06:58.625367 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:06:58.625509 kubelet[2364]: I0707 00:06:58.625389 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf38fa416a11620bd3718eefb2913bed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf38fa416a11620bd3718eefb2913bed\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:06:58.625509 kubelet[2364]: I0707 00:06:58.625408 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/cf38fa416a11620bd3718eefb2913bed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf38fa416a11620bd3718eefb2913bed\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 00:06:58.682781 kubelet[2364]: W0707 00:06:58.682720 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Jul 7 00:06:58.682853 kubelet[2364]: E0707 00:06:58.682784 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:06:58.787623 kubelet[2364]: W0707 00:06:58.787501 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Jul 7 00:06:58.787623 kubelet[2364]: E0707 00:06:58.787599 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:06:58.805324 kubelet[2364]: I0707 00:06:58.805271 2364 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 7 00:06:58.805744 kubelet[2364]: E0707 00:06:58.805698 2364 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost"
Jul 7 00:06:58.892107 kubelet[2364]: E0707 00:06:58.891973 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:06:58.892686 containerd[1575]: time="2025-07-07T00:06:58.892648976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cf38fa416a11620bd3718eefb2913bed,Namespace:kube-system,Attempt:0,}"
Jul 7 00:06:58.907025 kubelet[2364]: E0707 00:06:58.906983 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:06:58.907487 containerd[1575]: time="2025-07-07T00:06:58.907441232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}"
Jul 7 00:06:58.911737 kubelet[2364]: E0707 00:06:58.911693 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:06:58.912084 containerd[1575]: time="2025-07-07T00:06:58.912043123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}"
Jul 7 00:06:58.924077 containerd[1575]: time="2025-07-07T00:06:58.923993249Z" level=info msg="connecting to shim 2cd76e75a78529067f939056132751bb7ce8d68684c903eee2183b8d44f7ff64" address="unix:///run/containerd/s/633a7c3747e5ffb7168c2da7bc9ca01dd1cda617a997fed0d28efd6c14beca3b" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:06:58.953368 containerd[1575]: time="2025-07-07T00:06:58.953309112Z" level=info msg="connecting to shim 36d42eb0bdfc3ad474bbf1bd0420af9b6ac155ecc5fe1e43e2a7de22412ccbd4" address="unix:///run/containerd/s/6113422337c1d3086718283edb86a0a05fa4ea2a330be54db2e2fe7bed439874" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:06:58.985596 containerd[1575]: time="2025-07-07T00:06:58.984647273Z" level=info msg="connecting to shim 623ab8c8f590a8e7d67d15b00f209b61d014bcdbdfab014b3d52cb6a36ab5e33" address="unix:///run/containerd/s/7320f725bc6bce5d9adb155db1f58be3488df49cd002ccfe668392257f135910" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:06:58.985186 systemd[1]: Started cri-containerd-2cd76e75a78529067f939056132751bb7ce8d68684c903eee2183b8d44f7ff64.scope - libcontainer container 2cd76e75a78529067f939056132751bb7ce8d68684c903eee2183b8d44f7ff64.
Jul 7 00:06:59.014258 systemd[1]: Started cri-containerd-36d42eb0bdfc3ad474bbf1bd0420af9b6ac155ecc5fe1e43e2a7de22412ccbd4.scope - libcontainer container 36d42eb0bdfc3ad474bbf1bd0420af9b6ac155ecc5fe1e43e2a7de22412ccbd4.
Jul 7 00:06:59.021300 systemd[1]: Started cri-containerd-623ab8c8f590a8e7d67d15b00f209b61d014bcdbdfab014b3d52cb6a36ab5e33.scope - libcontainer container 623ab8c8f590a8e7d67d15b00f209b61d014bcdbdfab014b3d52cb6a36ab5e33.
Jul 7 00:06:59.050566 containerd[1575]: time="2025-07-07T00:06:59.050524809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cf38fa416a11620bd3718eefb2913bed,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cd76e75a78529067f939056132751bb7ce8d68684c903eee2183b8d44f7ff64\""
Jul 7 00:06:59.052157 kubelet[2364]: E0707 00:06:59.052063 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:06:59.055731 containerd[1575]: time="2025-07-07T00:06:59.055700070Z" level=info msg="CreateContainer within sandbox \"2cd76e75a78529067f939056132751bb7ce8d68684c903eee2183b8d44f7ff64\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 7 00:06:59.068411 containerd[1575]: time="2025-07-07T00:06:59.068332761Z" level=info msg="Container f0cdb617a57a8c5a9a7eb59b41b7822c071b9b54c9d124be20c8aa6b24a3654e: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:06:59.070005 containerd[1575]: time="2025-07-07T00:06:59.069973038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"36d42eb0bdfc3ad474bbf1bd0420af9b6ac155ecc5fe1e43e2a7de22412ccbd4\""
Jul 7 00:06:59.071004 kubelet[2364]: E0707 00:06:59.070962 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:06:59.073159 containerd[1575]: time="2025-07-07T00:06:59.073132198Z" level=info msg="CreateContainer within sandbox \"36d42eb0bdfc3ad474bbf1bd0420af9b6ac155ecc5fe1e43e2a7de22412ccbd4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 7 00:06:59.076082 containerd[1575]: time="2025-07-07T00:06:59.076043821Z" level=info msg="CreateContainer within sandbox \"2cd76e75a78529067f939056132751bb7ce8d68684c903eee2183b8d44f7ff64\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f0cdb617a57a8c5a9a7eb59b41b7822c071b9b54c9d124be20c8aa6b24a3654e\""
Jul 7 00:06:59.076946 containerd[1575]: time="2025-07-07T00:06:59.076890731Z" level=info msg="StartContainer for \"f0cdb617a57a8c5a9a7eb59b41b7822c071b9b54c9d124be20c8aa6b24a3654e\""
Jul 7 00:06:59.078296 containerd[1575]: time="2025-07-07T00:06:59.078248885Z" level=info msg="connecting to shim f0cdb617a57a8c5a9a7eb59b41b7822c071b9b54c9d124be20c8aa6b24a3654e" address="unix:///run/containerd/s/633a7c3747e5ffb7168c2da7bc9ca01dd1cda617a997fed0d28efd6c14beca3b" protocol=ttrpc version=3
Jul 7 00:06:59.100763 containerd[1575]: time="2025-07-07T00:06:59.100691377Z" level=info msg="Container 3e892dfbca3622ddbbcf49770cf398d978c867c2d7b0805ab677a05ce4337e11: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:06:59.109260 containerd[1575]: time="2025-07-07T00:06:59.109219900Z" level=info msg="CreateContainer within sandbox \"36d42eb0bdfc3ad474bbf1bd0420af9b6ac155ecc5fe1e43e2a7de22412ccbd4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3e892dfbca3622ddbbcf49770cf398d978c867c2d7b0805ab677a05ce4337e11\""
Jul 7 00:06:59.109916 containerd[1575]: time="2025-07-07T00:06:59.109869810Z" level=info msg="StartContainer for \"3e892dfbca3622ddbbcf49770cf398d978c867c2d7b0805ab677a05ce4337e11\""
Jul 7 00:06:59.111201 containerd[1575]: time="2025-07-07T00:06:59.111168369Z" level=info msg="connecting to shim 3e892dfbca3622ddbbcf49770cf398d978c867c2d7b0805ab677a05ce4337e11" address="unix:///run/containerd/s/6113422337c1d3086718283edb86a0a05fa4ea2a330be54db2e2fe7bed439874" protocol=ttrpc version=3
Jul 7 00:06:59.111843 containerd[1575]: time="2025-07-07T00:06:59.111817739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"623ab8c8f590a8e7d67d15b00f209b61d014bcdbdfab014b3d52cb6a36ab5e33\""
Jul 7 00:06:59.112457 kubelet[2364]: E0707 00:06:59.112432 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:06:59.113364 systemd[1]: Started cri-containerd-f0cdb617a57a8c5a9a7eb59b41b7822c071b9b54c9d124be20c8aa6b24a3654e.scope - libcontainer container f0cdb617a57a8c5a9a7eb59b41b7822c071b9b54c9d124be20c8aa6b24a3654e.
Jul 7 00:06:59.114809 containerd[1575]: time="2025-07-07T00:06:59.114769360Z" level=info msg="CreateContainer within sandbox \"623ab8c8f590a8e7d67d15b00f209b61d014bcdbdfab014b3d52cb6a36ab5e33\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 7 00:06:59.124183 containerd[1575]: time="2025-07-07T00:06:59.124111168Z" level=info msg="Container 7ec9158d286558749daf76d0680f5177cddf89722cdf6093773d7c1bf70c0c59: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:06:59.134147 containerd[1575]: time="2025-07-07T00:06:59.133989860Z" level=info msg="CreateContainer within sandbox \"623ab8c8f590a8e7d67d15b00f209b61d014bcdbdfab014b3d52cb6a36ab5e33\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7ec9158d286558749daf76d0680f5177cddf89722cdf6093773d7c1bf70c0c59\""
Jul 7 00:06:59.134691 containerd[1575]: time="2025-07-07T00:06:59.134657144Z" level=info msg="StartContainer for \"7ec9158d286558749daf76d0680f5177cddf89722cdf6093773d7c1bf70c0c59\""
Jul 7 00:06:59.136212 containerd[1575]: time="2025-07-07T00:06:59.136133235Z" level=info msg="connecting to shim 7ec9158d286558749daf76d0680f5177cddf89722cdf6093773d7c1bf70c0c59" address="unix:///run/containerd/s/7320f725bc6bce5d9adb155db1f58be3488df49cd002ccfe668392257f135910" protocol=ttrpc version=3
Jul 7 00:06:59.139194 systemd[1]: Started cri-containerd-3e892dfbca3622ddbbcf49770cf398d978c867c2d7b0805ab677a05ce4337e11.scope - libcontainer container 3e892dfbca3622ddbbcf49770cf398d978c867c2d7b0805ab677a05ce4337e11.
Jul 7 00:06:59.160357 systemd[1]: Started cri-containerd-7ec9158d286558749daf76d0680f5177cddf89722cdf6093773d7c1bf70c0c59.scope - libcontainer container 7ec9158d286558749daf76d0680f5177cddf89722cdf6093773d7c1bf70c0c59.
Jul 7 00:06:59.193054 kubelet[2364]: W0707 00:06:59.192260 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused
Jul 7 00:06:59.193054 kubelet[2364]: E0707 00:06:59.192364 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:06:59.207896 kubelet[2364]: I0707 00:06:59.207851 2364 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 7 00:06:59.209761 kubelet[2364]: E0707 00:06:59.208986 2364 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost"
Jul 7 00:06:59.225281 kubelet[2364]: E0707 00:06:59.225193 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="1.6s"
Jul 7 00:06:59.292366 containerd[1575]: time="2025-07-07T00:06:59.292272489Z" level=info msg="StartContainer for \"3e892dfbca3622ddbbcf49770cf398d978c867c2d7b0805ab677a05ce4337e11\" returns successfully"
Jul 7 00:06:59.293650 containerd[1575]: time="2025-07-07T00:06:59.293547762Z" level=info msg="StartContainer for \"f0cdb617a57a8c5a9a7eb59b41b7822c071b9b54c9d124be20c8aa6b24a3654e\" returns successfully"
Jul 7 00:06:59.293945 containerd[1575]: time="2025-07-07T00:06:59.293913516Z" level=info msg="StartContainer for \"7ec9158d286558749daf76d0680f5177cddf89722cdf6093773d7c1bf70c0c59\" returns successfully"
Jul 7 00:06:59.862069 kubelet[2364]: E0707 00:06:59.861862 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:06:59.865734 kubelet[2364]: E0707 00:06:59.865681 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:06:59.871962 kubelet[2364]: E0707 00:06:59.871896 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:00.092635 kubelet[2364]: I0707 00:07:00.092590 2364 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 7 00:07:00.494088 kubelet[2364]: I0707 00:07:00.494026 2364 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 7 00:07:00.494088 kubelet[2364]: E0707 00:07:00.494080 2364 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 7 00:07:00.507366 kubelet[2364]: E0707 00:07:00.507307 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:00.607612 kubelet[2364]: E0707 00:07:00.607562 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:00.708249 kubelet[2364]: E0707 00:07:00.708204 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:00.809130 kubelet[2364]: E0707 00:07:00.809075 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:00.872738 kubelet[2364]: E0707 00:07:00.872698 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:00.909248 kubelet[2364]: E0707 00:07:00.909179 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:01.009874 kubelet[2364]: E0707 00:07:01.009803 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:01.110718 kubelet[2364]: E0707 00:07:01.110532 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:01.211393 kubelet[2364]: E0707 00:07:01.211326 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:01.311677 kubelet[2364]: E0707 00:07:01.311613 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:01.412361 kubelet[2364]: E0707 00:07:01.412213 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:01.513037 kubelet[2364]: E0707 00:07:01.512946 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:01.613666 kubelet[2364]: E0707 00:07:01.613598 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:01.714219 kubelet[2364]: E0707 00:07:01.714085 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:01.810821 kubelet[2364]: I0707 00:07:01.810779 2364 apiserver.go:52] "Watching apiserver"
Jul 7 00:07:01.819167 kubelet[2364]: I0707 00:07:01.819142 2364 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 7 00:07:02.926128 systemd[1]: Reload requested from client PID 2637 ('systemctl') (unit session-9.scope)...
Jul 7 00:07:02.926146 systemd[1]: Reloading...
Jul 7 00:07:03.026056 zram_generator::config[2683]: No configuration found.
Jul 7 00:07:03.115787 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:07:03.250094 systemd[1]: Reloading finished in 323 ms.
Jul 7 00:07:03.277404 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:07:03.299467 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 00:07:03.299830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:07:03.299886 systemd[1]: kubelet.service: Consumed 817ms CPU time, 132.7M memory peak.
Jul 7 00:07:03.302129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:07:03.530714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:07:03.541503 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 00:07:03.589297 kubelet[2725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:07:03.589297 kubelet[2725]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 7 00:07:03.589297 kubelet[2725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:07:03.589732 kubelet[2725]: I0707 00:07:03.589344 2725 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 00:07:03.595227 kubelet[2725]: I0707 00:07:03.595198 2725 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 7 00:07:03.595227 kubelet[2725]: I0707 00:07:03.595217 2725 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 00:07:03.595400 kubelet[2725]: I0707 00:07:03.595388 2725 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 7 00:07:03.596494 kubelet[2725]: I0707 00:07:03.596478 2725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 7 00:07:03.598382 kubelet[2725]: I0707 00:07:03.598354 2725 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 00:07:03.603214 kubelet[2725]: I0707 00:07:03.603179 2725 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 00:07:03.608028 kubelet[2725]: I0707 00:07:03.607989 2725 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 00:07:03.608168 kubelet[2725]: I0707 00:07:03.608146 2725 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 7 00:07:03.608333 kubelet[2725]: I0707 00:07:03.608297 2725 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 00:07:03.608499 kubelet[2725]: I0707 00:07:03.608329 2725 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 00:07:03.608602 kubelet[2725]: I0707 00:07:03.608506 2725 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 00:07:03.608602 kubelet[2725]: I0707 00:07:03.608515 2725 container_manager_linux.go:300] "Creating device plugin manager"
Jul 7 00:07:03.608602 kubelet[2725]: I0707 00:07:03.608549 2725 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:07:03.608667 kubelet[2725]: I0707 00:07:03.608650 2725 kubelet.go:408] "Attempting to sync node with API server"
Jul 7 00:07:03.608667 kubelet[2725]: I0707 00:07:03.608662 2725 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 00:07:03.608709 kubelet[2725]: I0707 00:07:03.608687 2725 kubelet.go:314] "Adding apiserver pod source"
Jul 7 00:07:03.608709 kubelet[2725]: I0707 00:07:03.608697 2725 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 00:07:03.609692 kubelet[2725]: I0707 00:07:03.609671 2725 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 00:07:03.610160 kubelet[2725]: I0707 00:07:03.610141 2725 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 00:07:03.612055 kubelet[2725]: I0707 00:07:03.610572 2725 server.go:1274] "Started kubelet"
Jul 7 00:07:03.612055 kubelet[2725]: I0707 00:07:03.610817 2725 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 00:07:03.612055 kubelet[2725]: I0707 00:07:03.610833 2725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 00:07:03.612055 kubelet[2725]: I0707 00:07:03.611147 2725 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 00:07:03.612055 kubelet[2725]: I0707 00:07:03.611881 2725 server.go:449] "Adding debug handlers to kubelet server"
Jul 7 00:07:03.614493 kubelet[2725]: I0707 00:07:03.614473 2725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 00:07:03.621710 kubelet[2725]: I0707 00:07:03.615227 2725 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 00:07:03.622400 kubelet[2725]: I0707 00:07:03.622375 2725 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 7 00:07:03.623637 kubelet[2725]: E0707 00:07:03.623152 2725 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:07:03.623637 kubelet[2725]: I0707 00:07:03.623385 2725 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 7 00:07:03.623637 kubelet[2725]: I0707 00:07:03.623570 2725 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 00:07:03.625193 kubelet[2725]: I0707 00:07:03.625168 2725 factory.go:221] Registration of the systemd container factory successfully
Jul 7 00:07:03.625283 kubelet[2725]: I0707 00:07:03.625253 2725 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 00:07:03.627724 kubelet[2725]: E0707 00:07:03.627672 2725 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 00:07:03.628186 kubelet[2725]: I0707 00:07:03.628158 2725 factory.go:221] Registration of the containerd container factory successfully
Jul 7 00:07:03.633237 kubelet[2725]: I0707 00:07:03.633167 2725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 00:07:03.634755 kubelet[2725]: I0707 00:07:03.634679 2725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 00:07:03.634755 kubelet[2725]: I0707 00:07:03.634714 2725 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 7 00:07:03.634755 kubelet[2725]: I0707 00:07:03.634753 2725 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 7 00:07:03.634875 kubelet[2725]: E0707 00:07:03.634801 2725 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 00:07:03.662928 kubelet[2725]: I0707 00:07:03.662902 2725 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 7 00:07:03.662928 kubelet[2725]: I0707 00:07:03.662919 2725 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 7 00:07:03.662928 kubelet[2725]: I0707 00:07:03.662938 2725 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:07:03.663129 kubelet[2725]: I0707 00:07:03.663090 2725 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 00:07:03.663129 kubelet[2725]: I0707 00:07:03.663100 2725 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 00:07:03.663129 kubelet[2725]: I0707 00:07:03.663117 2725 policy_none.go:49] "None policy: Start"
Jul 7 00:07:03.663688 kubelet[2725]: I0707 00:07:03.663666 2725 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 7 00:07:03.663688 kubelet[2725]: I0707 00:07:03.663690 2725 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 00:07:03.663846 kubelet[2725]: I0707 00:07:03.663830 2725 state_mem.go:75] "Updated machine memory state"
Jul 7 00:07:03.670579 kubelet[2725]: I0707 00:07:03.670544 2725 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 00:07:03.670781 kubelet[2725]: I0707 00:07:03.670758 2725 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 00:07:03.670836 kubelet[2725]: I0707 00:07:03.670777 2725 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 00:07:03.670994 kubelet[2725]: I0707 00:07:03.670973 2725 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 00:07:03.776003 kubelet[2725]: I0707 00:07:03.775958 2725 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 7 00:07:03.783724 kubelet[2725]: I0707 00:07:03.783578 2725 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 7 00:07:03.783724 kubelet[2725]: I0707 00:07:03.783664 2725 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 7 00:07:03.824831 kubelet[2725]: I0707 00:07:03.824782 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf38fa416a11620bd3718eefb2913bed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cf38fa416a11620bd3718eefb2913bed\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 00:07:03.824831 kubelet[2725]: I0707 00:07:03.824816 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 00:07:03.824831 kubelet[2725]: I0707 00:07:03.824835 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 00:07:03.825081 kubelet[2725]: I0707 00:07:03.824877 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 00:07:03.825081 kubelet[2725]: I0707 00:07:03.824962 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf38fa416a11620bd3718eefb2913bed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf38fa416a11620bd3718eefb2913bed\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 00:07:03.825081 kubelet[2725]: I0707 00:07:03.824996 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf38fa416a11620bd3718eefb2913bed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf38fa416a11620bd3718eefb2913bed\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 00:07:03.825185 kubelet[2725]: I0707 00:07:03.825099 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 7 00:07:03.825185 kubelet[2725]: I0707 00:07:03.825145 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 00:07:03.825185 kubelet[2725]: I0707 00:07:03.825171 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 00:07:04.044126 kubelet[2725]: E0707 00:07:04.043967 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:04.044126 kubelet[2725]: E0707 00:07:04.044056 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:04.044985 kubelet[2725]: E0707 00:07:04.044952 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:04.609035 kubelet[2725]: I0707 00:07:04.608978 2725 apiserver.go:52] "Watching apiserver"
Jul 7 00:07:04.624124 kubelet[2725]: I0707 00:07:04.624083 2725 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 7 00:07:04.648400 kubelet[2725]: E0707 00:07:04.648347 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:04.721435 kubelet[2725]: I0707 00:07:04.721323 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.721279767 podStartE2EDuration="1.721279767s" podCreationTimestamp="2025-07-07 00:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:07:04.720839325 +0000 UTC m=+1.170906977" watchObservedRunningTime="2025-07-07 00:07:04.721279767 +0000 UTC m=+1.171347429"
Jul 7 00:07:04.722114 kubelet[2725]: E0707 00:07:04.722091 2725 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 7 00:07:04.722206 kubelet[2725]: E0707 00:07:04.722167 2725 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 7 00:07:04.722407 kubelet[2725]: E0707 00:07:04.722376 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:04.722961 kubelet[2725]: E0707 00:07:04.722653 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:04.735928 kubelet[2725]: I0707 00:07:04.735851 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.735830886 podStartE2EDuration="1.735830886s" podCreationTimestamp="2025-07-07 00:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:07:04.729170157 +0000 UTC m=+1.179237809" watchObservedRunningTime="2025-07-07 00:07:04.735830886 +0000 UTC m=+1.185898538"
Jul 7 00:07:04.745178 kubelet[2725]: I0707 00:07:04.745119 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.745102756 podStartE2EDuration="1.745102756s" podCreationTimestamp="2025-07-07 00:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:07:04.736148363 +0000 UTC m=+1.186216015" watchObservedRunningTime="2025-07-07 00:07:04.745102756 +0000 UTC m=+1.195170398"
Jul 7 00:07:05.649625 kubelet[2725]: E0707 00:07:05.649585 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:05.650105 kubelet[2725]: E0707 00:07:05.649684 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:06.650772 kubelet[2725]: E0707 00:07:06.650724 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:06.698527 kubelet[2725]: E0707 00:07:06.698470 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:08.027874 kubelet[2725]: E0707 00:07:08.027816 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:09.118399 kubelet[2725]: I0707 00:07:09.118355 2725 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 00:07:09.119002 containerd[1575]: time="2025-07-07T00:07:09.118849267Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 7 00:07:09.119406 kubelet[2725]: I0707 00:07:09.119066 2725 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 00:07:09.969331 systemd[1]: Created slice kubepods-besteffort-pode4359c07_1304_4697_bf5c_e0a0cfed3466.slice - libcontainer container kubepods-besteffort-pode4359c07_1304_4697_bf5c_e0a0cfed3466.slice.
Jul 7 00:07:10.063839 kubelet[2725]: I0707 00:07:10.063732 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4359c07-1304-4697-bf5c-e0a0cfed3466-xtables-lock\") pod \"kube-proxy-dnqb8\" (UID: \"e4359c07-1304-4697-bf5c-e0a0cfed3466\") " pod="kube-system/kube-proxy-dnqb8" Jul 7 00:07:10.064103 kubelet[2725]: I0707 00:07:10.063788 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4359c07-1304-4697-bf5c-e0a0cfed3466-lib-modules\") pod \"kube-proxy-dnqb8\" (UID: \"e4359c07-1304-4697-bf5c-e0a0cfed3466\") " pod="kube-system/kube-proxy-dnqb8" Jul 7 00:07:10.064103 kubelet[2725]: I0707 00:07:10.064123 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvqg7\" (UniqueName: \"kubernetes.io/projected/e4359c07-1304-4697-bf5c-e0a0cfed3466-kube-api-access-dvqg7\") pod \"kube-proxy-dnqb8\" (UID: \"e4359c07-1304-4697-bf5c-e0a0cfed3466\") " pod="kube-system/kube-proxy-dnqb8" Jul 7 00:07:10.064445 kubelet[2725]: I0707 00:07:10.064152 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e4359c07-1304-4697-bf5c-e0a0cfed3466-kube-proxy\") pod \"kube-proxy-dnqb8\" (UID: \"e4359c07-1304-4697-bf5c-e0a0cfed3466\") " pod="kube-system/kube-proxy-dnqb8" Jul 7 00:07:10.238140 systemd[1]: Created slice kubepods-besteffort-pod722a377c_e722_445a_9dec_6e970b628c04.slice - libcontainer container kubepods-besteffort-pod722a377c_e722_445a_9dec_6e970b628c04.slice. 
Jul 7 00:07:10.266265 kubelet[2725]: I0707 00:07:10.266190 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/722a377c-e722-445a-9dec-6e970b628c04-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-hbvqs\" (UID: \"722a377c-e722-445a-9dec-6e970b628c04\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-hbvqs" Jul 7 00:07:10.266770 kubelet[2725]: I0707 00:07:10.266306 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hw4l\" (UniqueName: \"kubernetes.io/projected/722a377c-e722-445a-9dec-6e970b628c04-kube-api-access-6hw4l\") pod \"tigera-operator-5bf8dfcb4-hbvqs\" (UID: \"722a377c-e722-445a-9dec-6e970b628c04\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-hbvqs" Jul 7 00:07:10.287529 kubelet[2725]: E0707 00:07:10.287483 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:10.288220 containerd[1575]: time="2025-07-07T00:07:10.288155969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dnqb8,Uid:e4359c07-1304-4697-bf5c-e0a0cfed3466,Namespace:kube-system,Attempt:0,}" Jul 7 00:07:10.313720 containerd[1575]: time="2025-07-07T00:07:10.313664332Z" level=info msg="connecting to shim c01df512a21210b41be31682f531414a4150f0dfb0490253bd1d070e51618810" address="unix:///run/containerd/s/eef5733f3ff16a2c6f05a0a67a3124e3641e0796440f6492c56f209833d5d34d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:07:10.352193 systemd[1]: Started cri-containerd-c01df512a21210b41be31682f531414a4150f0dfb0490253bd1d070e51618810.scope - libcontainer container c01df512a21210b41be31682f531414a4150f0dfb0490253bd1d070e51618810. 
Jul 7 00:07:10.386139 containerd[1575]: time="2025-07-07T00:07:10.386093700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dnqb8,Uid:e4359c07-1304-4697-bf5c-e0a0cfed3466,Namespace:kube-system,Attempt:0,} returns sandbox id \"c01df512a21210b41be31682f531414a4150f0dfb0490253bd1d070e51618810\"" Jul 7 00:07:10.386907 kubelet[2725]: E0707 00:07:10.386878 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:10.389618 containerd[1575]: time="2025-07-07T00:07:10.389527314Z" level=info msg="CreateContainer within sandbox \"c01df512a21210b41be31682f531414a4150f0dfb0490253bd1d070e51618810\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:07:10.401394 containerd[1575]: time="2025-07-07T00:07:10.401337772Z" level=info msg="Container 1cd5184002c8fa44b485cf2acbade268f064ca5a73dc341cf4ce833322b42575: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:07:10.413391 containerd[1575]: time="2025-07-07T00:07:10.413337961Z" level=info msg="CreateContainer within sandbox \"c01df512a21210b41be31682f531414a4150f0dfb0490253bd1d070e51618810\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1cd5184002c8fa44b485cf2acbade268f064ca5a73dc341cf4ce833322b42575\"" Jul 7 00:07:10.414126 containerd[1575]: time="2025-07-07T00:07:10.414007663Z" level=info msg="StartContainer for \"1cd5184002c8fa44b485cf2acbade268f064ca5a73dc341cf4ce833322b42575\"" Jul 7 00:07:10.415481 containerd[1575]: time="2025-07-07T00:07:10.415454923Z" level=info msg="connecting to shim 1cd5184002c8fa44b485cf2acbade268f064ca5a73dc341cf4ce833322b42575" address="unix:///run/containerd/s/eef5733f3ff16a2c6f05a0a67a3124e3641e0796440f6492c56f209833d5d34d" protocol=ttrpc version=3 Jul 7 00:07:10.443254 systemd[1]: Started cri-containerd-1cd5184002c8fa44b485cf2acbade268f064ca5a73dc341cf4ce833322b42575.scope - libcontainer container 
1cd5184002c8fa44b485cf2acbade268f064ca5a73dc341cf4ce833322b42575. Jul 7 00:07:10.490435 containerd[1575]: time="2025-07-07T00:07:10.490288329Z" level=info msg="StartContainer for \"1cd5184002c8fa44b485cf2acbade268f064ca5a73dc341cf4ce833322b42575\" returns successfully" Jul 7 00:07:10.542340 containerd[1575]: time="2025-07-07T00:07:10.542271161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-hbvqs,Uid:722a377c-e722-445a-9dec-6e970b628c04,Namespace:tigera-operator,Attempt:0,}" Jul 7 00:07:10.567806 containerd[1575]: time="2025-07-07T00:07:10.567670477Z" level=info msg="connecting to shim 2b95a550a18c7d766c35b635a08a852ce146207364662f870687bd9eb9c8eec6" address="unix:///run/containerd/s/a5af3be5cecfb768b135290191f1c711d67222c30bb073094141bd981d671cde" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:07:10.596213 systemd[1]: Started cri-containerd-2b95a550a18c7d766c35b635a08a852ce146207364662f870687bd9eb9c8eec6.scope - libcontainer container 2b95a550a18c7d766c35b635a08a852ce146207364662f870687bd9eb9c8eec6. Jul 7 00:07:10.646326 containerd[1575]: time="2025-07-07T00:07:10.646273584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-hbvqs,Uid:722a377c-e722-445a-9dec-6e970b628c04,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2b95a550a18c7d766c35b635a08a852ce146207364662f870687bd9eb9c8eec6\"" Jul 7 00:07:10.648560 containerd[1575]: time="2025-07-07T00:07:10.648162343Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 00:07:10.660881 kubelet[2725]: E0707 00:07:10.660841 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:11.011345 update_engine[1563]: I20250707 00:07:11.011261 1563 update_attempter.cc:509] Updating boot flags... 
Jul 7 00:07:11.177971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316719581.mount: Deactivated successfully. Jul 7 00:07:12.309058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1648983593.mount: Deactivated successfully. Jul 7 00:07:12.917728 kubelet[2725]: E0707 00:07:12.917677 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:12.932672 kubelet[2725]: I0707 00:07:12.932615 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dnqb8" podStartSLOduration=3.93259071 podStartE2EDuration="3.93259071s" podCreationTimestamp="2025-07-07 00:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:07:10.66851445 +0000 UTC m=+7.118582102" watchObservedRunningTime="2025-07-07 00:07:12.93259071 +0000 UTC m=+9.382658362" Jul 7 00:07:13.039417 containerd[1575]: time="2025-07-07T00:07:13.039352207Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:13.040173 containerd[1575]: time="2025-07-07T00:07:13.040124521Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 00:07:13.041204 containerd[1575]: time="2025-07-07T00:07:13.041172278Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:13.043033 containerd[1575]: time="2025-07-07T00:07:13.042974534Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:13.043583 
containerd[1575]: time="2025-07-07T00:07:13.043544404Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.395327577s" Jul 7 00:07:13.043583 containerd[1575]: time="2025-07-07T00:07:13.043575183Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 00:07:13.045382 containerd[1575]: time="2025-07-07T00:07:13.045352752Z" level=info msg="CreateContainer within sandbox \"2b95a550a18c7d766c35b635a08a852ce146207364662f870687bd9eb9c8eec6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 00:07:13.053542 containerd[1575]: time="2025-07-07T00:07:13.053506449Z" level=info msg="Container cd18577f79559c625430e0cfb19184f1e10a56ae1d05954da6a8016ff715392e: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:07:13.059649 containerd[1575]: time="2025-07-07T00:07:13.059603517Z" level=info msg="CreateContainer within sandbox \"2b95a550a18c7d766c35b635a08a852ce146207364662f870687bd9eb9c8eec6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cd18577f79559c625430e0cfb19184f1e10a56ae1d05954da6a8016ff715392e\"" Jul 7 00:07:13.060202 containerd[1575]: time="2025-07-07T00:07:13.060169360Z" level=info msg="StartContainer for \"cd18577f79559c625430e0cfb19184f1e10a56ae1d05954da6a8016ff715392e\"" Jul 7 00:07:13.060966 containerd[1575]: time="2025-07-07T00:07:13.060937466Z" level=info msg="connecting to shim cd18577f79559c625430e0cfb19184f1e10a56ae1d05954da6a8016ff715392e" address="unix:///run/containerd/s/a5af3be5cecfb768b135290191f1c711d67222c30bb073094141bd981d671cde" protocol=ttrpc version=3 Jul 7 00:07:13.115174 systemd[1]: Started 
cri-containerd-cd18577f79559c625430e0cfb19184f1e10a56ae1d05954da6a8016ff715392e.scope - libcontainer container cd18577f79559c625430e0cfb19184f1e10a56ae1d05954da6a8016ff715392e. Jul 7 00:07:13.152555 containerd[1575]: time="2025-07-07T00:07:13.152492567Z" level=info msg="StartContainer for \"cd18577f79559c625430e0cfb19184f1e10a56ae1d05954da6a8016ff715392e\" returns successfully" Jul 7 00:07:13.667551 kubelet[2725]: E0707 00:07:13.667484 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:13.684584 kubelet[2725]: I0707 00:07:13.684488 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-hbvqs" podStartSLOduration=1.287717916 podStartE2EDuration="3.68447147s" podCreationTimestamp="2025-07-07 00:07:10 +0000 UTC" firstStartedPulling="2025-07-07 00:07:10.64756621 +0000 UTC m=+7.097633862" lastFinishedPulling="2025-07-07 00:07:13.044319764 +0000 UTC m=+9.494387416" observedRunningTime="2025-07-07 00:07:13.684301829 +0000 UTC m=+10.134369481" watchObservedRunningTime="2025-07-07 00:07:13.68447147 +0000 UTC m=+10.134539122" Jul 7 00:07:16.702422 kubelet[2725]: E0707 00:07:16.702382 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:18.033277 kubelet[2725]: E0707 00:07:18.033229 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:18.336265 sudo[1800]: pam_unix(sudo:session): session closed for user root Jul 7 00:07:18.338543 sshd[1799]: Connection closed by 10.0.0.1 port 35786 Jul 7 00:07:18.343605 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Jul 7 00:07:18.350115 systemd[1]: 
sshd@8-10.0.0.49:22-10.0.0.1:35786.service: Deactivated successfully. Jul 7 00:07:18.357233 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:07:18.357505 systemd[1]: session-9.scope: Consumed 4.741s CPU time, 228.2M memory peak. Jul 7 00:07:18.365411 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:07:18.366721 systemd-logind[1560]: Removed session 9. Jul 7 00:07:18.680442 kubelet[2725]: E0707 00:07:18.680210 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:21.628614 systemd[1]: Created slice kubepods-besteffort-pod2f691847_c705_4a65_b60a_c4c47f7bb135.slice - libcontainer container kubepods-besteffort-pod2f691847_c705_4a65_b60a_c4c47f7bb135.slice. Jul 7 00:07:21.644876 kubelet[2725]: I0707 00:07:21.644812 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f691847-c705-4a65-b60a-c4c47f7bb135-tigera-ca-bundle\") pod \"calico-typha-84465f67b6-mvs8n\" (UID: \"2f691847-c705-4a65-b60a-c4c47f7bb135\") " pod="calico-system/calico-typha-84465f67b6-mvs8n" Jul 7 00:07:21.644876 kubelet[2725]: I0707 00:07:21.644875 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2f691847-c705-4a65-b60a-c4c47f7bb135-typha-certs\") pod \"calico-typha-84465f67b6-mvs8n\" (UID: \"2f691847-c705-4a65-b60a-c4c47f7bb135\") " pod="calico-system/calico-typha-84465f67b6-mvs8n" Jul 7 00:07:21.646318 kubelet[2725]: I0707 00:07:21.644903 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5tx5\" (UniqueName: \"kubernetes.io/projected/2f691847-c705-4a65-b60a-c4c47f7bb135-kube-api-access-h5tx5\") pod \"calico-typha-84465f67b6-mvs8n\" (UID: 
\"2f691847-c705-4a65-b60a-c4c47f7bb135\") " pod="calico-system/calico-typha-84465f67b6-mvs8n" Jul 7 00:07:21.932872 kubelet[2725]: E0707 00:07:21.932694 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:21.934071 containerd[1575]: time="2025-07-07T00:07:21.933590028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84465f67b6-mvs8n,Uid:2f691847-c705-4a65-b60a-c4c47f7bb135,Namespace:calico-system,Attempt:0,}" Jul 7 00:07:22.055300 systemd[1]: Created slice kubepods-besteffort-poda2d464d1_7281_4db2_a574_8aaa4704417a.slice - libcontainer container kubepods-besteffort-poda2d464d1_7281_4db2_a574_8aaa4704417a.slice. Jul 7 00:07:22.064927 containerd[1575]: time="2025-07-07T00:07:22.064834031Z" level=info msg="connecting to shim 602444057de93eb48317061de994f708c85a793b47e59025f4ed212be08a9624" address="unix:///run/containerd/s/76698f5fd9a6207a4bbf7689c223dde9c5a985de803f58f80e13c62a92b4a41b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:07:22.101311 systemd[1]: Started cri-containerd-602444057de93eb48317061de994f708c85a793b47e59025f4ed212be08a9624.scope - libcontainer container 602444057de93eb48317061de994f708c85a793b47e59025f4ed212be08a9624. 
Jul 7 00:07:22.147715 kubelet[2725]: I0707 00:07:22.147637 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a2d464d1-7281-4db2-a574-8aaa4704417a-policysync\") pod \"calico-node-lblsh\" (UID: \"a2d464d1-7281-4db2-a574-8aaa4704417a\") " pod="calico-system/calico-node-lblsh" Jul 7 00:07:22.147715 kubelet[2725]: I0707 00:07:22.147693 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a2d464d1-7281-4db2-a574-8aaa4704417a-flexvol-driver-host\") pod \"calico-node-lblsh\" (UID: \"a2d464d1-7281-4db2-a574-8aaa4704417a\") " pod="calico-system/calico-node-lblsh" Jul 7 00:07:22.147715 kubelet[2725]: I0707 00:07:22.147716 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crcxn\" (UniqueName: \"kubernetes.io/projected/a2d464d1-7281-4db2-a574-8aaa4704417a-kube-api-access-crcxn\") pod \"calico-node-lblsh\" (UID: \"a2d464d1-7281-4db2-a574-8aaa4704417a\") " pod="calico-system/calico-node-lblsh" Jul 7 00:07:22.147950 kubelet[2725]: I0707 00:07:22.147735 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a2d464d1-7281-4db2-a574-8aaa4704417a-cni-bin-dir\") pod \"calico-node-lblsh\" (UID: \"a2d464d1-7281-4db2-a574-8aaa4704417a\") " pod="calico-system/calico-node-lblsh" Jul 7 00:07:22.147950 kubelet[2725]: I0707 00:07:22.147761 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a2d464d1-7281-4db2-a574-8aaa4704417a-cni-net-dir\") pod \"calico-node-lblsh\" (UID: \"a2d464d1-7281-4db2-a574-8aaa4704417a\") " pod="calico-system/calico-node-lblsh" Jul 7 00:07:22.147950 kubelet[2725]: I0707 00:07:22.147784 2725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2d464d1-7281-4db2-a574-8aaa4704417a-lib-modules\") pod \"calico-node-lblsh\" (UID: \"a2d464d1-7281-4db2-a574-8aaa4704417a\") " pod="calico-system/calico-node-lblsh" Jul 7 00:07:22.147950 kubelet[2725]: I0707 00:07:22.147801 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a2d464d1-7281-4db2-a574-8aaa4704417a-cni-log-dir\") pod \"calico-node-lblsh\" (UID: \"a2d464d1-7281-4db2-a574-8aaa4704417a\") " pod="calico-system/calico-node-lblsh" Jul 7 00:07:22.147950 kubelet[2725]: I0707 00:07:22.147819 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a2d464d1-7281-4db2-a574-8aaa4704417a-var-lib-calico\") pod \"calico-node-lblsh\" (UID: \"a2d464d1-7281-4db2-a574-8aaa4704417a\") " pod="calico-system/calico-node-lblsh" Jul 7 00:07:22.148208 kubelet[2725]: I0707 00:07:22.147839 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2d464d1-7281-4db2-a574-8aaa4704417a-tigera-ca-bundle\") pod \"calico-node-lblsh\" (UID: \"a2d464d1-7281-4db2-a574-8aaa4704417a\") " pod="calico-system/calico-node-lblsh" Jul 7 00:07:22.148208 kubelet[2725]: I0707 00:07:22.147857 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a2d464d1-7281-4db2-a574-8aaa4704417a-var-run-calico\") pod \"calico-node-lblsh\" (UID: \"a2d464d1-7281-4db2-a574-8aaa4704417a\") " pod="calico-system/calico-node-lblsh" Jul 7 00:07:22.148208 kubelet[2725]: I0707 00:07:22.147876 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a2d464d1-7281-4db2-a574-8aaa4704417a-node-certs\") pod \"calico-node-lblsh\" (UID: \"a2d464d1-7281-4db2-a574-8aaa4704417a\") " pod="calico-system/calico-node-lblsh" Jul 7 00:07:22.148208 kubelet[2725]: I0707 00:07:22.147893 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2d464d1-7281-4db2-a574-8aaa4704417a-xtables-lock\") pod \"calico-node-lblsh\" (UID: \"a2d464d1-7281-4db2-a574-8aaa4704417a\") " pod="calico-system/calico-node-lblsh" Jul 7 00:07:22.162115 containerd[1575]: time="2025-07-07T00:07:22.162066304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84465f67b6-mvs8n,Uid:2f691847-c705-4a65-b60a-c4c47f7bb135,Namespace:calico-system,Attempt:0,} returns sandbox id \"602444057de93eb48317061de994f708c85a793b47e59025f4ed212be08a9624\"" Jul 7 00:07:22.162932 kubelet[2725]: E0707 00:07:22.162899 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:22.164396 containerd[1575]: time="2025-07-07T00:07:22.164369921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:07:22.252098 kubelet[2725]: E0707 00:07:22.251816 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.252098 kubelet[2725]: W0707 00:07:22.251857 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.252098 kubelet[2725]: E0707 00:07:22.251940 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.257049 kubelet[2725]: E0707 00:07:22.255461 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.257049 kubelet[2725]: W0707 00:07:22.255485 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.257049 kubelet[2725]: E0707 00:07:22.255504 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.264215 kubelet[2725]: E0707 00:07:22.264180 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.264215 kubelet[2725]: W0707 00:07:22.264205 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.264403 kubelet[2725]: E0707 00:07:22.264232 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.331941 kubelet[2725]: E0707 00:07:22.331868 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gjph" podUID="e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2" Jul 7 00:07:22.363321 containerd[1575]: time="2025-07-07T00:07:22.363260417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lblsh,Uid:a2d464d1-7281-4db2-a574-8aaa4704417a,Namespace:calico-system,Attempt:0,}" Jul 7 00:07:22.388528 containerd[1575]: time="2025-07-07T00:07:22.388359501Z" level=info msg="connecting to shim 85975f2a24c7f0c5299f213bc35588f5c1939c9c7e00340b8e0ffeff74def10e" address="unix:///run/containerd/s/94dd3ff8415c68cc6dc61320d2e4378c00641bd8a17202593c67d5fd7cf3bb92" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:07:22.430060 kubelet[2725]: E0707 00:07:22.429956 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.430060 kubelet[2725]: W0707 00:07:22.429975 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.430060 kubelet[2725]: E0707 00:07:22.429995 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.430340 kubelet[2725]: E0707 00:07:22.430255 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.430340 kubelet[2725]: W0707 00:07:22.430266 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.430340 kubelet[2725]: E0707 00:07:22.430277 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.430488 kubelet[2725]: E0707 00:07:22.430464 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.430488 kubelet[2725]: W0707 00:07:22.430478 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.430488 kubelet[2725]: E0707 00:07:22.430488 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.430739 kubelet[2725]: E0707 00:07:22.430689 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.430739 kubelet[2725]: W0707 00:07:22.430699 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.430739 kubelet[2725]: E0707 00:07:22.430710 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.430911 kubelet[2725]: E0707 00:07:22.430879 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.430911 kubelet[2725]: W0707 00:07:22.430894 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.430911 kubelet[2725]: E0707 00:07:22.430903 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.431172 systemd[1]: Started cri-containerd-85975f2a24c7f0c5299f213bc35588f5c1939c9c7e00340b8e0ffeff74def10e.scope - libcontainer container 85975f2a24c7f0c5299f213bc35588f5c1939c9c7e00340b8e0ffeff74def10e. 
Jul 7 00:07:22.431400 kubelet[2725]: E0707 00:07:22.431225 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.431400 kubelet[2725]: W0707 00:07:22.431257 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.431400 kubelet[2725]: E0707 00:07:22.431293 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.431620 kubelet[2725]: E0707 00:07:22.431598 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.431620 kubelet[2725]: W0707 00:07:22.431613 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.431684 kubelet[2725]: E0707 00:07:22.431625 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.431897 kubelet[2725]: E0707 00:07:22.431870 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.431897 kubelet[2725]: W0707 00:07:22.431884 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.431897 kubelet[2725]: E0707 00:07:22.431894 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.432310 kubelet[2725]: E0707 00:07:22.432272 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.432310 kubelet[2725]: W0707 00:07:22.432290 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.432310 kubelet[2725]: E0707 00:07:22.432301 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.432579 kubelet[2725]: E0707 00:07:22.432557 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.432579 kubelet[2725]: W0707 00:07:22.432570 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.432579 kubelet[2725]: E0707 00:07:22.432581 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.432822 kubelet[2725]: E0707 00:07:22.432800 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.432822 kubelet[2725]: W0707 00:07:22.432813 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.432822 kubelet[2725]: E0707 00:07:22.432825 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.433126 kubelet[2725]: E0707 00:07:22.433108 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.433126 kubelet[2725]: W0707 00:07:22.433122 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.433185 kubelet[2725]: E0707 00:07:22.433134 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.433440 kubelet[2725]: E0707 00:07:22.433421 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.433440 kubelet[2725]: W0707 00:07:22.433435 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.433507 kubelet[2725]: E0707 00:07:22.433447 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.433683 kubelet[2725]: E0707 00:07:22.433656 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.433683 kubelet[2725]: W0707 00:07:22.433674 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.433683 kubelet[2725]: E0707 00:07:22.433684 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.433954 kubelet[2725]: E0707 00:07:22.433908 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.433954 kubelet[2725]: W0707 00:07:22.433923 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.433954 kubelet[2725]: E0707 00:07:22.433934 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.435038 kubelet[2725]: E0707 00:07:22.434628 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.435038 kubelet[2725]: W0707 00:07:22.434645 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.435038 kubelet[2725]: E0707 00:07:22.434657 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.435038 kubelet[2725]: E0707 00:07:22.434914 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.435038 kubelet[2725]: W0707 00:07:22.434923 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.435038 kubelet[2725]: E0707 00:07:22.434939 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.435199 kubelet[2725]: E0707 00:07:22.435154 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.435199 kubelet[2725]: W0707 00:07:22.435164 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.435199 kubelet[2725]: E0707 00:07:22.435177 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.435432 kubelet[2725]: E0707 00:07:22.435400 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.435432 kubelet[2725]: W0707 00:07:22.435426 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.435498 kubelet[2725]: E0707 00:07:22.435437 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.435660 kubelet[2725]: E0707 00:07:22.435639 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.435660 kubelet[2725]: W0707 00:07:22.435654 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.435734 kubelet[2725]: E0707 00:07:22.435663 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.451509 kubelet[2725]: E0707 00:07:22.451473 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.451509 kubelet[2725]: W0707 00:07:22.451502 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.451694 kubelet[2725]: E0707 00:07:22.451532 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.451694 kubelet[2725]: I0707 00:07:22.451574 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2-socket-dir\") pod \"csi-node-driver-7gjph\" (UID: \"e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2\") " pod="calico-system/csi-node-driver-7gjph" Jul 7 00:07:22.451882 kubelet[2725]: E0707 00:07:22.451809 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.451882 kubelet[2725]: W0707 00:07:22.451838 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.451882 kubelet[2725]: E0707 00:07:22.451850 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.451882 kubelet[2725]: I0707 00:07:22.451870 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2-varrun\") pod \"csi-node-driver-7gjph\" (UID: \"e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2\") " pod="calico-system/csi-node-driver-7gjph" Jul 7 00:07:22.452990 kubelet[2725]: E0707 00:07:22.452925 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.452990 kubelet[2725]: W0707 00:07:22.452942 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.453298 kubelet[2725]: E0707 00:07:22.453059 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.453559 kubelet[2725]: E0707 00:07:22.453436 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.453559 kubelet[2725]: W0707 00:07:22.453492 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.453559 kubelet[2725]: E0707 00:07:22.453532 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.454224 kubelet[2725]: E0707 00:07:22.454188 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.454224 kubelet[2725]: W0707 00:07:22.454205 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.454524 kubelet[2725]: E0707 00:07:22.454370 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.454665 kubelet[2725]: I0707 00:07:22.454405 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2-kubelet-dir\") pod \"csi-node-driver-7gjph\" (UID: \"e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2\") " pod="calico-system/csi-node-driver-7gjph" Jul 7 00:07:22.454665 kubelet[2725]: E0707 00:07:22.454584 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.454665 kubelet[2725]: W0707 00:07:22.454594 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.454665 kubelet[2725]: E0707 00:07:22.454605 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.455134 kubelet[2725]: E0707 00:07:22.455099 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.455240 kubelet[2725]: W0707 00:07:22.455161 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.455500 kubelet[2725]: E0707 00:07:22.455423 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.455837 kubelet[2725]: E0707 00:07:22.455789 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.456104 kubelet[2725]: W0707 00:07:22.455940 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.456254 kubelet[2725]: E0707 00:07:22.456179 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.456384 kubelet[2725]: I0707 00:07:22.456211 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2-registration-dir\") pod \"csi-node-driver-7gjph\" (UID: \"e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2\") " pod="calico-system/csi-node-driver-7gjph" Jul 7 00:07:22.456928 kubelet[2725]: E0707 00:07:22.456844 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.456928 kubelet[2725]: W0707 00:07:22.456860 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.456928 kubelet[2725]: E0707 00:07:22.456872 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.457867 kubelet[2725]: E0707 00:07:22.457846 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.457867 kubelet[2725]: W0707 00:07:22.457863 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.457867 kubelet[2725]: E0707 00:07:22.457987 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.460152 kubelet[2725]: E0707 00:07:22.460127 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.460354 kubelet[2725]: W0707 00:07:22.460256 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.460511 kubelet[2725]: E0707 00:07:22.460449 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.460711 kubelet[2725]: I0707 00:07:22.460636 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsnh7\" (UniqueName: \"kubernetes.io/projected/e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2-kube-api-access-hsnh7\") pod \"csi-node-driver-7gjph\" (UID: \"e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2\") " pod="calico-system/csi-node-driver-7gjph" Jul 7 00:07:22.460949 kubelet[2725]: E0707 00:07:22.460891 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.460949 kubelet[2725]: W0707 00:07:22.460905 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.461231 kubelet[2725]: E0707 00:07:22.461077 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.461813 kubelet[2725]: E0707 00:07:22.461779 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.461813 kubelet[2725]: W0707 00:07:22.461791 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.462819 kubelet[2725]: E0707 00:07:22.462738 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.464443 kubelet[2725]: E0707 00:07:22.463846 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.464443 kubelet[2725]: W0707 00:07:22.463889 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.464443 kubelet[2725]: E0707 00:07:22.463927 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.466174 kubelet[2725]: E0707 00:07:22.466130 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.466174 kubelet[2725]: W0707 00:07:22.466148 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.466174 kubelet[2725]: E0707 00:07:22.466160 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.472094 containerd[1575]: time="2025-07-07T00:07:22.471938769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lblsh,Uid:a2d464d1-7281-4db2-a574-8aaa4704417a,Namespace:calico-system,Attempt:0,} returns sandbox id \"85975f2a24c7f0c5299f213bc35588f5c1939c9c7e00340b8e0ffeff74def10e\"" Jul 7 00:07:22.561674 kubelet[2725]: E0707 00:07:22.561637 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.561674 kubelet[2725]: W0707 00:07:22.561663 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.561937 kubelet[2725]: E0707 00:07:22.561690 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.561937 kubelet[2725]: E0707 00:07:22.561920 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.561937 kubelet[2725]: W0707 00:07:22.561931 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.562047 kubelet[2725]: E0707 00:07:22.561947 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.562253 kubelet[2725]: E0707 00:07:22.562234 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.562253 kubelet[2725]: W0707 00:07:22.562251 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.562327 kubelet[2725]: E0707 00:07:22.562268 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.562499 kubelet[2725]: E0707 00:07:22.562475 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.562499 kubelet[2725]: W0707 00:07:22.562488 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.562552 kubelet[2725]: E0707 00:07:22.562503 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.562691 kubelet[2725]: E0707 00:07:22.562676 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.562691 kubelet[2725]: W0707 00:07:22.562688 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.562748 kubelet[2725]: E0707 00:07:22.562703 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.562946 kubelet[2725]: E0707 00:07:22.562933 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.562946 kubelet[2725]: W0707 00:07:22.562944 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.563025 kubelet[2725]: E0707 00:07:22.562961 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.563180 kubelet[2725]: E0707 00:07:22.563166 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.563180 kubelet[2725]: W0707 00:07:22.563178 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.563234 kubelet[2725]: E0707 00:07:22.563195 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.563432 kubelet[2725]: E0707 00:07:22.563417 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.563432 kubelet[2725]: W0707 00:07:22.563430 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.563495 kubelet[2725]: E0707 00:07:22.563472 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.563648 kubelet[2725]: E0707 00:07:22.563634 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.563648 kubelet[2725]: W0707 00:07:22.563646 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.563709 kubelet[2725]: E0707 00:07:22.563673 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.563835 kubelet[2725]: E0707 00:07:22.563818 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.563835 kubelet[2725]: W0707 00:07:22.563829 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.563905 kubelet[2725]: E0707 00:07:22.563843 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.564054 kubelet[2725]: E0707 00:07:22.564039 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.564054 kubelet[2725]: W0707 00:07:22.564050 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.564103 kubelet[2725]: E0707 00:07:22.564064 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.564254 kubelet[2725]: E0707 00:07:22.564238 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.564254 kubelet[2725]: W0707 00:07:22.564252 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.564312 kubelet[2725]: E0707 00:07:22.564267 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.564580 kubelet[2725]: E0707 00:07:22.564565 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.564580 kubelet[2725]: W0707 00:07:22.564578 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.564638 kubelet[2725]: E0707 00:07:22.564592 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.564814 kubelet[2725]: E0707 00:07:22.564800 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.564814 kubelet[2725]: W0707 00:07:22.564812 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.564896 kubelet[2725]: E0707 00:07:22.564870 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.565025 kubelet[2725]: E0707 00:07:22.564994 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.565025 kubelet[2725]: W0707 00:07:22.565006 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.565088 kubelet[2725]: E0707 00:07:22.565064 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.565227 kubelet[2725]: E0707 00:07:22.565212 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.565227 kubelet[2725]: W0707 00:07:22.565224 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.565285 kubelet[2725]: E0707 00:07:22.565256 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.565440 kubelet[2725]: E0707 00:07:22.565424 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.565440 kubelet[2725]: W0707 00:07:22.565437 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.565495 kubelet[2725]: E0707 00:07:22.565453 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.565634 kubelet[2725]: E0707 00:07:22.565620 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.565634 kubelet[2725]: W0707 00:07:22.565631 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.565695 kubelet[2725]: E0707 00:07:22.565644 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.565869 kubelet[2725]: E0707 00:07:22.565854 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.565869 kubelet[2725]: W0707 00:07:22.565867 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.565931 kubelet[2725]: E0707 00:07:22.565881 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.566095 kubelet[2725]: E0707 00:07:22.566077 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.566095 kubelet[2725]: W0707 00:07:22.566092 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.566174 kubelet[2725]: E0707 00:07:22.566106 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.566307 kubelet[2725]: E0707 00:07:22.566292 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.566307 kubelet[2725]: W0707 00:07:22.566304 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.566371 kubelet[2725]: E0707 00:07:22.566319 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.566559 kubelet[2725]: E0707 00:07:22.566543 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.566559 kubelet[2725]: W0707 00:07:22.566555 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.566632 kubelet[2725]: E0707 00:07:22.566581 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.566776 kubelet[2725]: E0707 00:07:22.566750 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.566776 kubelet[2725]: W0707 00:07:22.566762 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.566776 kubelet[2725]: E0707 00:07:22.566773 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.567021 kubelet[2725]: E0707 00:07:22.566995 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.567058 kubelet[2725]: W0707 00:07:22.567006 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.567058 kubelet[2725]: E0707 00:07:22.567034 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:22.573138 kubelet[2725]: E0707 00:07:22.573111 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.573138 kubelet[2725]: W0707 00:07:22.573126 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.573233 kubelet[2725]: E0707 00:07:22.573141 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:22.575810 kubelet[2725]: E0707 00:07:22.575792 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:22.575810 kubelet[2725]: W0707 00:07:22.575805 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:22.576193 kubelet[2725]: E0707 00:07:22.575817 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:23.615511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038176406.mount: Deactivated successfully. Jul 7 00:07:23.635913 kubelet[2725]: E0707 00:07:23.635833 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gjph" podUID="e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2" Jul 7 00:07:25.439199 containerd[1575]: time="2025-07-07T00:07:25.439148867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:25.440053 containerd[1575]: time="2025-07-07T00:07:25.440025440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 00:07:25.441163 containerd[1575]: time="2025-07-07T00:07:25.441128059Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:25.443250 containerd[1575]: time="2025-07-07T00:07:25.443199153Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:25.443815 containerd[1575]: time="2025-07-07T00:07:25.443784236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.279377595s" Jul 7 00:07:25.443815 containerd[1575]: time="2025-07-07T00:07:25.443814804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 00:07:25.444805 containerd[1575]: time="2025-07-07T00:07:25.444746921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 00:07:25.455148 containerd[1575]: time="2025-07-07T00:07:25.455098264Z" level=info msg="CreateContainer within sandbox \"602444057de93eb48317061de994f708c85a793b47e59025f4ed212be08a9624\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 00:07:25.464091 containerd[1575]: time="2025-07-07T00:07:25.464050190Z" level=info msg="Container e65b798ad497b4d39310694513ea199de9c806022afd42b7a060fc11eb275c4c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:07:25.474975 containerd[1575]: time="2025-07-07T00:07:25.474925682Z" level=info msg="CreateContainer within sandbox \"602444057de93eb48317061de994f708c85a793b47e59025f4ed212be08a9624\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e65b798ad497b4d39310694513ea199de9c806022afd42b7a060fc11eb275c4c\"" Jul 7 00:07:25.475598 containerd[1575]: time="2025-07-07T00:07:25.475556821Z" level=info msg="StartContainer for 
\"e65b798ad497b4d39310694513ea199de9c806022afd42b7a060fc11eb275c4c\"" Jul 7 00:07:25.476875 containerd[1575]: time="2025-07-07T00:07:25.476838207Z" level=info msg="connecting to shim e65b798ad497b4d39310694513ea199de9c806022afd42b7a060fc11eb275c4c" address="unix:///run/containerd/s/76698f5fd9a6207a4bbf7689c223dde9c5a985de803f58f80e13c62a92b4a41b" protocol=ttrpc version=3 Jul 7 00:07:25.505250 systemd[1]: Started cri-containerd-e65b798ad497b4d39310694513ea199de9c806022afd42b7a060fc11eb275c4c.scope - libcontainer container e65b798ad497b4d39310694513ea199de9c806022afd42b7a060fc11eb275c4c. Jul 7 00:07:25.557738 containerd[1575]: time="2025-07-07T00:07:25.557680905Z" level=info msg="StartContainer for \"e65b798ad497b4d39310694513ea199de9c806022afd42b7a060fc11eb275c4c\" returns successfully" Jul 7 00:07:25.636506 kubelet[2725]: E0707 00:07:25.636338 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gjph" podUID="e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2" Jul 7 00:07:25.699969 kubelet[2725]: E0707 00:07:25.699853 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:25.756568 kubelet[2725]: E0707 00:07:25.756516 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.756568 kubelet[2725]: W0707 00:07:25.756545 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.756568 kubelet[2725]: E0707 00:07:25.756571 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.756804 kubelet[2725]: E0707 00:07:25.756776 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.756804 kubelet[2725]: W0707 00:07:25.756784 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.756804 kubelet[2725]: E0707 00:07:25.756793 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.757070 kubelet[2725]: E0707 00:07:25.756969 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.757070 kubelet[2725]: W0707 00:07:25.756979 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.757070 kubelet[2725]: E0707 00:07:25.756987 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.757233 kubelet[2725]: E0707 00:07:25.757202 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.757233 kubelet[2725]: W0707 00:07:25.757210 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.757233 kubelet[2725]: E0707 00:07:25.757219 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.757418 kubelet[2725]: E0707 00:07:25.757401 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.757418 kubelet[2725]: W0707 00:07:25.757412 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.757493 kubelet[2725]: E0707 00:07:25.757421 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.757614 kubelet[2725]: E0707 00:07:25.757586 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.757614 kubelet[2725]: W0707 00:07:25.757597 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.757614 kubelet[2725]: E0707 00:07:25.757608 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.757796 kubelet[2725]: E0707 00:07:25.757763 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.757796 kubelet[2725]: W0707 00:07:25.757773 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.757796 kubelet[2725]: E0707 00:07:25.757781 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.757988 kubelet[2725]: E0707 00:07:25.757964 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.757988 kubelet[2725]: W0707 00:07:25.757973 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.757988 kubelet[2725]: E0707 00:07:25.757981 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.758236 kubelet[2725]: E0707 00:07:25.758211 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.758236 kubelet[2725]: W0707 00:07:25.758219 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.758236 kubelet[2725]: E0707 00:07:25.758227 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.758566 kubelet[2725]: E0707 00:07:25.758531 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.758566 kubelet[2725]: W0707 00:07:25.758545 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.758566 kubelet[2725]: E0707 00:07:25.758555 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.758800 kubelet[2725]: E0707 00:07:25.758773 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.758800 kubelet[2725]: W0707 00:07:25.758785 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.758800 kubelet[2725]: E0707 00:07:25.758794 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.758976 kubelet[2725]: E0707 00:07:25.758954 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.758976 kubelet[2725]: W0707 00:07:25.758965 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.758976 kubelet[2725]: E0707 00:07:25.758973 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.759184 kubelet[2725]: E0707 00:07:25.759161 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.759184 kubelet[2725]: W0707 00:07:25.759182 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.759250 kubelet[2725]: E0707 00:07:25.759190 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.759414 kubelet[2725]: E0707 00:07:25.759378 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.759414 kubelet[2725]: W0707 00:07:25.759406 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.759414 kubelet[2725]: E0707 00:07:25.759416 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.759641 kubelet[2725]: E0707 00:07:25.759618 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.759641 kubelet[2725]: W0707 00:07:25.759633 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.759724 kubelet[2725]: E0707 00:07:25.759657 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.786181 kubelet[2725]: E0707 00:07:25.786155 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.786181 kubelet[2725]: W0707 00:07:25.786171 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.786181 kubelet[2725]: E0707 00:07:25.786186 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.786429 kubelet[2725]: E0707 00:07:25.786413 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.786429 kubelet[2725]: W0707 00:07:25.786424 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.786491 kubelet[2725]: E0707 00:07:25.786438 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.786719 kubelet[2725]: E0707 00:07:25.786677 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.786719 kubelet[2725]: W0707 00:07:25.786710 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.786791 kubelet[2725]: E0707 00:07:25.786746 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.787114 kubelet[2725]: E0707 00:07:25.787088 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.787114 kubelet[2725]: W0707 00:07:25.787106 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.787187 kubelet[2725]: E0707 00:07:25.787123 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.787341 kubelet[2725]: E0707 00:07:25.787322 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.787341 kubelet[2725]: W0707 00:07:25.787335 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.787407 kubelet[2725]: E0707 00:07:25.787359 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.787559 kubelet[2725]: E0707 00:07:25.787543 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.787559 kubelet[2725]: W0707 00:07:25.787553 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.787610 kubelet[2725]: E0707 00:07:25.787566 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.787740 kubelet[2725]: E0707 00:07:25.787722 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.787740 kubelet[2725]: W0707 00:07:25.787732 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.787786 kubelet[2725]: E0707 00:07:25.787746 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.787970 kubelet[2725]: E0707 00:07:25.787950 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.787970 kubelet[2725]: W0707 00:07:25.787961 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.788073 kubelet[2725]: E0707 00:07:25.787976 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.788245 kubelet[2725]: E0707 00:07:25.788226 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.788245 kubelet[2725]: W0707 00:07:25.788239 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.788312 kubelet[2725]: E0707 00:07:25.788254 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.788456 kubelet[2725]: E0707 00:07:25.788439 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.788456 kubelet[2725]: W0707 00:07:25.788451 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.788510 kubelet[2725]: E0707 00:07:25.788465 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.788655 kubelet[2725]: E0707 00:07:25.788639 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.788655 kubelet[2725]: W0707 00:07:25.788649 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.788710 kubelet[2725]: E0707 00:07:25.788663 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.788865 kubelet[2725]: E0707 00:07:25.788849 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.788865 kubelet[2725]: W0707 00:07:25.788860 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.788918 kubelet[2725]: E0707 00:07:25.788873 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.789138 kubelet[2725]: E0707 00:07:25.789120 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.789138 kubelet[2725]: W0707 00:07:25.789133 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.789198 kubelet[2725]: E0707 00:07:25.789146 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.789344 kubelet[2725]: E0707 00:07:25.789328 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.789344 kubelet[2725]: W0707 00:07:25.789339 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.789413 kubelet[2725]: E0707 00:07:25.789363 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.789561 kubelet[2725]: E0707 00:07:25.789544 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.789561 kubelet[2725]: W0707 00:07:25.789555 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.789616 kubelet[2725]: E0707 00:07:25.789566 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.789755 kubelet[2725]: E0707 00:07:25.789737 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.789755 kubelet[2725]: W0707 00:07:25.789751 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.789816 kubelet[2725]: E0707 00:07:25.789765 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:25.789968 kubelet[2725]: E0707 00:07:25.789952 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.789968 kubelet[2725]: W0707 00:07:25.789963 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.790030 kubelet[2725]: E0707 00:07:25.789974 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:25.790400 kubelet[2725]: E0707 00:07:25.790373 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:25.790400 kubelet[2725]: W0707 00:07:25.790386 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:25.790400 kubelet[2725]: E0707 00:07:25.790395 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.700917 kubelet[2725]: I0707 00:07:26.700837 2725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:07:26.701496 kubelet[2725]: E0707 00:07:26.701223 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:26.766607 kubelet[2725]: E0707 00:07:26.766564 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.766607 kubelet[2725]: W0707 00:07:26.766587 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.766607 kubelet[2725]: E0707 00:07:26.766614 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.766918 kubelet[2725]: E0707 00:07:26.766893 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.766918 kubelet[2725]: W0707 00:07:26.766905 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.766999 kubelet[2725]: E0707 00:07:26.766917 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.767228 kubelet[2725]: E0707 00:07:26.767203 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.767228 kubelet[2725]: W0707 00:07:26.767216 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.767228 kubelet[2725]: E0707 00:07:26.767227 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.767491 kubelet[2725]: E0707 00:07:26.767466 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.767491 kubelet[2725]: W0707 00:07:26.767477 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.767491 kubelet[2725]: E0707 00:07:26.767489 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.767735 kubelet[2725]: E0707 00:07:26.767711 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.767735 kubelet[2725]: W0707 00:07:26.767722 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.767735 kubelet[2725]: E0707 00:07:26.767733 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.767948 kubelet[2725]: E0707 00:07:26.767927 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.767948 kubelet[2725]: W0707 00:07:26.767939 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.767948 kubelet[2725]: E0707 00:07:26.767950 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.768194 kubelet[2725]: E0707 00:07:26.768173 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.768194 kubelet[2725]: W0707 00:07:26.768185 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.768194 kubelet[2725]: E0707 00:07:26.768196 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.768434 kubelet[2725]: E0707 00:07:26.768413 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.768434 kubelet[2725]: W0707 00:07:26.768424 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.768434 kubelet[2725]: E0707 00:07:26.768436 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.768675 kubelet[2725]: E0707 00:07:26.768651 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.768675 kubelet[2725]: W0707 00:07:26.768662 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.768675 kubelet[2725]: E0707 00:07:26.768675 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.768896 kubelet[2725]: E0707 00:07:26.768876 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.768896 kubelet[2725]: W0707 00:07:26.768893 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.768967 kubelet[2725]: E0707 00:07:26.768905 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.769142 kubelet[2725]: E0707 00:07:26.769117 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.769142 kubelet[2725]: W0707 00:07:26.769128 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.769142 kubelet[2725]: E0707 00:07:26.769139 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.769378 kubelet[2725]: E0707 00:07:26.769355 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.769378 kubelet[2725]: W0707 00:07:26.769370 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.769462 kubelet[2725]: E0707 00:07:26.769381 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.769604 kubelet[2725]: E0707 00:07:26.769582 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.769604 kubelet[2725]: W0707 00:07:26.769594 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.769604 kubelet[2725]: E0707 00:07:26.769605 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.769819 kubelet[2725]: E0707 00:07:26.769795 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.769819 kubelet[2725]: W0707 00:07:26.769807 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.769819 kubelet[2725]: E0707 00:07:26.769818 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.770053 kubelet[2725]: E0707 00:07:26.770027 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.770053 kubelet[2725]: W0707 00:07:26.770039 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.770053 kubelet[2725]: E0707 00:07:26.770052 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.793420 kubelet[2725]: E0707 00:07:26.793397 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.793420 kubelet[2725]: W0707 00:07:26.793411 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.793520 kubelet[2725]: E0707 00:07:26.793425 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.793720 kubelet[2725]: E0707 00:07:26.793699 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.793720 kubelet[2725]: W0707 00:07:26.793713 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.793819 kubelet[2725]: E0707 00:07:26.793733 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.794002 kubelet[2725]: E0707 00:07:26.793984 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.794002 kubelet[2725]: W0707 00:07:26.793997 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.794115 kubelet[2725]: E0707 00:07:26.794042 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.794369 kubelet[2725]: E0707 00:07:26.794314 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.794369 kubelet[2725]: W0707 00:07:26.794351 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.794457 kubelet[2725]: E0707 00:07:26.794384 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.794570 kubelet[2725]: E0707 00:07:26.794551 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.794570 kubelet[2725]: W0707 00:07:26.794562 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.794636 kubelet[2725]: E0707 00:07:26.794575 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.794789 kubelet[2725]: E0707 00:07:26.794769 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.794789 kubelet[2725]: W0707 00:07:26.794780 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.794879 kubelet[2725]: E0707 00:07:26.794793 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.795044 kubelet[2725]: E0707 00:07:26.794996 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.795044 kubelet[2725]: W0707 00:07:26.795026 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.795044 kubelet[2725]: E0707 00:07:26.795040 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.795293 kubelet[2725]: E0707 00:07:26.795269 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.795293 kubelet[2725]: W0707 00:07:26.795286 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.795384 kubelet[2725]: E0707 00:07:26.795304 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.795529 kubelet[2725]: E0707 00:07:26.795509 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.795529 kubelet[2725]: W0707 00:07:26.795521 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.795591 kubelet[2725]: E0707 00:07:26.795536 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.795749 kubelet[2725]: E0707 00:07:26.795730 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.795749 kubelet[2725]: W0707 00:07:26.795742 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.795836 kubelet[2725]: E0707 00:07:26.795754 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.795994 kubelet[2725]: E0707 00:07:26.795971 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.795994 kubelet[2725]: W0707 00:07:26.795987 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.796113 kubelet[2725]: E0707 00:07:26.796005 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.796290 kubelet[2725]: E0707 00:07:26.796268 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.796290 kubelet[2725]: W0707 00:07:26.796283 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.796396 kubelet[2725]: E0707 00:07:26.796303 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.796541 kubelet[2725]: E0707 00:07:26.796523 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.796541 kubelet[2725]: W0707 00:07:26.796534 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.796612 kubelet[2725]: E0707 00:07:26.796553 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.796762 kubelet[2725]: E0707 00:07:26.796741 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.796762 kubelet[2725]: W0707 00:07:26.796754 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.796834 kubelet[2725]: E0707 00:07:26.796769 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.797076 kubelet[2725]: E0707 00:07:26.797055 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.797076 kubelet[2725]: W0707 00:07:26.797069 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.797165 kubelet[2725]: E0707 00:07:26.797090 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.797357 kubelet[2725]: E0707 00:07:26.797336 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.797357 kubelet[2725]: W0707 00:07:26.797349 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.797441 kubelet[2725]: E0707 00:07:26.797367 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:26.797545 kubelet[2725]: E0707 00:07:26.797523 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.797545 kubelet[2725]: W0707 00:07:26.797536 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.797545 kubelet[2725]: E0707 00:07:26.797545 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:07:26.797792 kubelet[2725]: E0707 00:07:26.797772 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:07:26.797792 kubelet[2725]: W0707 00:07:26.797784 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:07:26.797875 kubelet[2725]: E0707 00:07:26.797796 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:07:27.195845 containerd[1575]: time="2025-07-07T00:07:27.195783957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:27.196574 containerd[1575]: time="2025-07-07T00:07:27.196536715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 00:07:27.197568 containerd[1575]: time="2025-07-07T00:07:27.197527692Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:27.199550 containerd[1575]: time="2025-07-07T00:07:27.199504076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:27.200100 containerd[1575]: time="2025-07-07T00:07:27.200071244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.755293726s" Jul 7 00:07:27.200138 containerd[1575]: time="2025-07-07T00:07:27.200102603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 00:07:27.202246 containerd[1575]: time="2025-07-07T00:07:27.202208300Z" level=info msg="CreateContainer within sandbox \"85975f2a24c7f0c5299f213bc35588f5c1939c9c7e00340b8e0ffeff74def10e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 00:07:27.213655 containerd[1575]: time="2025-07-07T00:07:27.213064755Z" level=info msg="Container a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:07:27.222001 containerd[1575]: time="2025-07-07T00:07:27.221955474Z" level=info msg="CreateContainer within sandbox \"85975f2a24c7f0c5299f213bc35588f5c1939c9c7e00340b8e0ffeff74def10e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720\"" Jul 7 00:07:27.222475 containerd[1575]: time="2025-07-07T00:07:27.222449124Z" level=info msg="StartContainer for \"a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720\"" Jul 7 00:07:27.223802 containerd[1575]: time="2025-07-07T00:07:27.223771306Z" level=info msg="connecting to shim a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720" address="unix:///run/containerd/s/94dd3ff8415c68cc6dc61320d2e4378c00641bd8a17202593c67d5fd7cf3bb92" protocol=ttrpc version=3 Jul 7 00:07:27.258192 systemd[1]: Started cri-containerd-a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720.scope - libcontainer container a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720. 
Jul 7 00:07:27.305754 containerd[1575]: time="2025-07-07T00:07:27.305679894Z" level=info msg="StartContainer for \"a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720\" returns successfully" Jul 7 00:07:27.315398 systemd[1]: cri-containerd-a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720.scope: Deactivated successfully. Jul 7 00:07:27.316922 containerd[1575]: time="2025-07-07T00:07:27.316877220Z" level=info msg="received exit event container_id:\"a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720\" id:\"a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720\" pid:3459 exited_at:{seconds:1751846847 nanos:316463490}" Jul 7 00:07:27.317089 containerd[1575]: time="2025-07-07T00:07:27.317033794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720\" id:\"a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720\" pid:3459 exited_at:{seconds:1751846847 nanos:316463490}" Jul 7 00:07:27.347126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4915c6f426a1e14fef054eb87a8858820a308be4ffad64525b311d3b4212720-rootfs.mount: Deactivated successfully. 
Jul 7 00:07:27.635278 kubelet[2725]: E0707 00:07:27.635207 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gjph" podUID="e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2" Jul 7 00:07:27.799676 kubelet[2725]: I0707 00:07:27.799480 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84465f67b6-mvs8n" podStartSLOduration=3.518836853 podStartE2EDuration="6.799437876s" podCreationTimestamp="2025-07-07 00:07:21 +0000 UTC" firstStartedPulling="2025-07-07 00:07:22.163987249 +0000 UTC m=+18.614054911" lastFinishedPulling="2025-07-07 00:07:25.444588282 +0000 UTC m=+21.894655934" observedRunningTime="2025-07-07 00:07:25.709736728 +0000 UTC m=+22.159804380" watchObservedRunningTime="2025-07-07 00:07:27.799437876 +0000 UTC m=+24.249505528" Jul 7 00:07:28.790005 containerd[1575]: time="2025-07-07T00:07:28.789938591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 00:07:29.635454 kubelet[2725]: E0707 00:07:29.635341 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gjph" podUID="e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2" Jul 7 00:07:31.635374 kubelet[2725]: E0707 00:07:31.635289 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gjph" podUID="e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2" Jul 7 00:07:33.467142 kubelet[2725]: I0707 00:07:33.467074 2725 prober_manager.go:312] 
"Failed to trigger a manual run" probe="Readiness" Jul 7 00:07:33.468814 kubelet[2725]: E0707 00:07:33.467554 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:33.636240 kubelet[2725]: E0707 00:07:33.636193 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gjph" podUID="e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2" Jul 7 00:07:33.800727 kubelet[2725]: E0707 00:07:33.800609 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:33.964986 containerd[1575]: time="2025-07-07T00:07:33.964910922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:33.965710 containerd[1575]: time="2025-07-07T00:07:33.965651796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 00:07:33.967030 containerd[1575]: time="2025-07-07T00:07:33.966967060Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:33.969282 containerd[1575]: time="2025-07-07T00:07:33.969235578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:33.970007 containerd[1575]: time="2025-07-07T00:07:33.969968577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image 
id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 5.179979923s" Jul 7 00:07:33.970007 containerd[1575]: time="2025-07-07T00:07:33.970002933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 00:07:33.972571 containerd[1575]: time="2025-07-07T00:07:33.972535477Z" level=info msg="CreateContainer within sandbox \"85975f2a24c7f0c5299f213bc35588f5c1939c9c7e00340b8e0ffeff74def10e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 00:07:33.982026 containerd[1575]: time="2025-07-07T00:07:33.981959337Z" level=info msg="Container 75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:07:33.995476 containerd[1575]: time="2025-07-07T00:07:33.995418571Z" level=info msg="CreateContainer within sandbox \"85975f2a24c7f0c5299f213bc35588f5c1939c9c7e00340b8e0ffeff74def10e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d\"" Jul 7 00:07:33.996126 containerd[1575]: time="2025-07-07T00:07:33.996079855Z" level=info msg="StartContainer for \"75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d\"" Jul 7 00:07:33.997820 containerd[1575]: time="2025-07-07T00:07:33.997793930Z" level=info msg="connecting to shim 75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d" address="unix:///run/containerd/s/94dd3ff8415c68cc6dc61320d2e4378c00641bd8a17202593c67d5fd7cf3bb92" protocol=ttrpc version=3 Jul 7 00:07:34.027332 systemd[1]: Started cri-containerd-75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d.scope - libcontainer container 
75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d. Jul 7 00:07:34.083989 containerd[1575]: time="2025-07-07T00:07:34.083933752Z" level=info msg="StartContainer for \"75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d\" returns successfully" Jul 7 00:07:35.655383 kubelet[2725]: E0707 00:07:35.655259 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gjph" podUID="e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2" Jul 7 00:07:36.007139 containerd[1575]: time="2025-07-07T00:07:36.006917371Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Jul 7 00:07:36.010670 systemd[1]: cri-containerd-75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d.scope: Deactivated successfully. Jul 7 00:07:36.011446 systemd[1]: cri-containerd-75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d.scope: Consumed 677ms CPU time, 177.2M memory peak, 3.2M read from disk, 171.2M written to disk. 
Jul 7 00:07:36.012827 containerd[1575]: time="2025-07-07T00:07:36.012770888Z" level=info msg="received exit event container_id:\"75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d\" id:\"75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d\" pid:3523 exited_at:{seconds:1751846856 nanos:12542729}" Jul 7 00:07:36.013039 containerd[1575]: time="2025-07-07T00:07:36.012888128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d\" id:\"75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d\" pid:3523 exited_at:{seconds:1751846856 nanos:12542729}" Jul 7 00:07:36.040862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75345ac604dedaa9cc378f59ef7c68201d8f029a781199bba405169147cd631d-rootfs.mount: Deactivated successfully. Jul 7 00:07:36.090256 kubelet[2725]: I0707 00:07:36.090208 2725 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 00:07:36.230938 systemd[1]: Created slice kubepods-burstable-pod59a91234_bc75_4ad7_bf18_197cbb403576.slice - libcontainer container kubepods-burstable-pod59a91234_bc75_4ad7_bf18_197cbb403576.slice. Jul 7 00:07:36.237568 systemd[1]: Created slice kubepods-burstable-poddd46f789_c107_49d4_8d93_8d4643fff672.slice - libcontainer container kubepods-burstable-poddd46f789_c107_49d4_8d93_8d4643fff672.slice. Jul 7 00:07:36.242900 systemd[1]: Created slice kubepods-besteffort-podabf225d5_7bc4_4ca5_bceb_0815f17fd350.slice - libcontainer container kubepods-besteffort-podabf225d5_7bc4_4ca5_bceb_0815f17fd350.slice. Jul 7 00:07:36.249526 systemd[1]: Created slice kubepods-besteffort-pod5d8fa112_6a01_4e51_b931_c7ec59ec77a8.slice - libcontainer container kubepods-besteffort-pod5d8fa112_6a01_4e51_b931_c7ec59ec77a8.slice. 
Jul 7 00:07:36.255460 systemd[1]: Created slice kubepods-besteffort-pod3c397933_3c8f_4270_8b5a_23cb516115f4.slice - libcontainer container kubepods-besteffort-pod3c397933_3c8f_4270_8b5a_23cb516115f4.slice. Jul 7 00:07:36.261883 systemd[1]: Created slice kubepods-besteffort-pode1f9d356_e654_4c17_9394_6754ad117035.slice - libcontainer container kubepods-besteffort-pode1f9d356_e654_4c17_9394_6754ad117035.slice. Jul 7 00:07:36.266565 systemd[1]: Created slice kubepods-besteffort-podc458bbc8_ed83_4245_b6c3_225052b35912.slice - libcontainer container kubepods-besteffort-podc458bbc8_ed83_4245_b6c3_225052b35912.slice. Jul 7 00:07:36.375172 kubelet[2725]: I0707 00:07:36.375103 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1f9d356-e654-4c17-9394-6754ad117035-whisker-ca-bundle\") pod \"whisker-6c459f4946-l6s2s\" (UID: \"e1f9d356-e654-4c17-9394-6754ad117035\") " pod="calico-system/whisker-6c459f4946-l6s2s" Jul 7 00:07:36.375172 kubelet[2725]: I0707 00:07:36.375179 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3c397933-3c8f-4270-8b5a-23cb516115f4-goldmane-key-pair\") pod \"goldmane-58fd7646b9-rnpsd\" (UID: \"3c397933-3c8f-4270-8b5a-23cb516115f4\") " pod="calico-system/goldmane-58fd7646b9-rnpsd" Jul 7 00:07:36.375406 kubelet[2725]: I0707 00:07:36.375215 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c458bbc8-ed83-4245-b6c3-225052b35912-calico-apiserver-certs\") pod \"calico-apiserver-57c5b65fcc-f8m9l\" (UID: \"c458bbc8-ed83-4245-b6c3-225052b35912\") " pod="calico-apiserver/calico-apiserver-57c5b65fcc-f8m9l" Jul 7 00:07:36.375406 kubelet[2725]: I0707 00:07:36.375246 2725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd46f789-c107-49d4-8d93-8d4643fff672-config-volume\") pod \"coredns-7c65d6cfc9-6bb9f\" (UID: \"dd46f789-c107-49d4-8d93-8d4643fff672\") " pod="kube-system/coredns-7c65d6cfc9-6bb9f" Jul 7 00:07:36.375406 kubelet[2725]: I0707 00:07:36.375271 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8cdm\" (UniqueName: \"kubernetes.io/projected/dd46f789-c107-49d4-8d93-8d4643fff672-kube-api-access-k8cdm\") pod \"coredns-7c65d6cfc9-6bb9f\" (UID: \"dd46f789-c107-49d4-8d93-8d4643fff672\") " pod="kube-system/coredns-7c65d6cfc9-6bb9f" Jul 7 00:07:36.375406 kubelet[2725]: I0707 00:07:36.375294 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g5nd\" (UniqueName: \"kubernetes.io/projected/c458bbc8-ed83-4245-b6c3-225052b35912-kube-api-access-5g5nd\") pod \"calico-apiserver-57c5b65fcc-f8m9l\" (UID: \"c458bbc8-ed83-4245-b6c3-225052b35912\") " pod="calico-apiserver/calico-apiserver-57c5b65fcc-f8m9l" Jul 7 00:07:36.375406 kubelet[2725]: I0707 00:07:36.375320 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c397933-3c8f-4270-8b5a-23cb516115f4-config\") pod \"goldmane-58fd7646b9-rnpsd\" (UID: \"3c397933-3c8f-4270-8b5a-23cb516115f4\") " pod="calico-system/goldmane-58fd7646b9-rnpsd" Jul 7 00:07:36.375561 kubelet[2725]: I0707 00:07:36.375344 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rr7l\" (UniqueName: \"kubernetes.io/projected/abf225d5-7bc4-4ca5-bceb-0815f17fd350-kube-api-access-5rr7l\") pod \"calico-kube-controllers-7f4666f86c-r9hqc\" (UID: \"abf225d5-7bc4-4ca5-bceb-0815f17fd350\") " pod="calico-system/calico-kube-controllers-7f4666f86c-r9hqc" Jul 7 
00:07:36.375561 kubelet[2725]: I0707 00:07:36.375372 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d8fa112-6a01-4e51-b931-c7ec59ec77a8-calico-apiserver-certs\") pod \"calico-apiserver-57c5b65fcc-2jf6l\" (UID: \"5d8fa112-6a01-4e51-b931-c7ec59ec77a8\") " pod="calico-apiserver/calico-apiserver-57c5b65fcc-2jf6l" Jul 7 00:07:36.375561 kubelet[2725]: I0707 00:07:36.375395 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xfbh\" (UniqueName: \"kubernetes.io/projected/3c397933-3c8f-4270-8b5a-23cb516115f4-kube-api-access-5xfbh\") pod \"goldmane-58fd7646b9-rnpsd\" (UID: \"3c397933-3c8f-4270-8b5a-23cb516115f4\") " pod="calico-system/goldmane-58fd7646b9-rnpsd" Jul 7 00:07:36.375561 kubelet[2725]: I0707 00:07:36.375456 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq62v\" (UniqueName: \"kubernetes.io/projected/5d8fa112-6a01-4e51-b931-c7ec59ec77a8-kube-api-access-wq62v\") pod \"calico-apiserver-57c5b65fcc-2jf6l\" (UID: \"5d8fa112-6a01-4e51-b931-c7ec59ec77a8\") " pod="calico-apiserver/calico-apiserver-57c5b65fcc-2jf6l" Jul 7 00:07:36.375680 kubelet[2725]: I0707 00:07:36.375558 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvjlc\" (UniqueName: \"kubernetes.io/projected/e1f9d356-e654-4c17-9394-6754ad117035-kube-api-access-vvjlc\") pod \"whisker-6c459f4946-l6s2s\" (UID: \"e1f9d356-e654-4c17-9394-6754ad117035\") " pod="calico-system/whisker-6c459f4946-l6s2s" Jul 7 00:07:36.375680 kubelet[2725]: I0707 00:07:36.375618 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lll7\" (UniqueName: \"kubernetes.io/projected/59a91234-bc75-4ad7-bf18-197cbb403576-kube-api-access-7lll7\") pod 
\"coredns-7c65d6cfc9-ghpfm\" (UID: \"59a91234-bc75-4ad7-bf18-197cbb403576\") " pod="kube-system/coredns-7c65d6cfc9-ghpfm" Jul 7 00:07:36.375680 kubelet[2725]: I0707 00:07:36.375641 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abf225d5-7bc4-4ca5-bceb-0815f17fd350-tigera-ca-bundle\") pod \"calico-kube-controllers-7f4666f86c-r9hqc\" (UID: \"abf225d5-7bc4-4ca5-bceb-0815f17fd350\") " pod="calico-system/calico-kube-controllers-7f4666f86c-r9hqc" Jul 7 00:07:36.375838 kubelet[2725]: I0707 00:07:36.375706 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c397933-3c8f-4270-8b5a-23cb516115f4-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-rnpsd\" (UID: \"3c397933-3c8f-4270-8b5a-23cb516115f4\") " pod="calico-system/goldmane-58fd7646b9-rnpsd" Jul 7 00:07:36.375838 kubelet[2725]: I0707 00:07:36.375765 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59a91234-bc75-4ad7-bf18-197cbb403576-config-volume\") pod \"coredns-7c65d6cfc9-ghpfm\" (UID: \"59a91234-bc75-4ad7-bf18-197cbb403576\") " pod="kube-system/coredns-7c65d6cfc9-ghpfm" Jul 7 00:07:36.375838 kubelet[2725]: I0707 00:07:36.375795 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1f9d356-e654-4c17-9394-6754ad117035-whisker-backend-key-pair\") pod \"whisker-6c459f4946-l6s2s\" (UID: \"e1f9d356-e654-4c17-9394-6754ad117035\") " pod="calico-system/whisker-6c459f4946-l6s2s" Jul 7 00:07:36.535803 kubelet[2725]: E0707 00:07:36.535203 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jul 7 00:07:36.536139 containerd[1575]: time="2025-07-07T00:07:36.536074281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ghpfm,Uid:59a91234-bc75-4ad7-bf18-197cbb403576,Namespace:kube-system,Attempt:0,}" Jul 7 00:07:36.540984 kubelet[2725]: E0707 00:07:36.540663 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:36.542176 containerd[1575]: time="2025-07-07T00:07:36.542135369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6bb9f,Uid:dd46f789-c107-49d4-8d93-8d4643fff672,Namespace:kube-system,Attempt:0,}" Jul 7 00:07:36.547261 containerd[1575]: time="2025-07-07T00:07:36.547219550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4666f86c-r9hqc,Uid:abf225d5-7bc4-4ca5-bceb-0815f17fd350,Namespace:calico-system,Attempt:0,}" Jul 7 00:07:36.553786 containerd[1575]: time="2025-07-07T00:07:36.553651305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c5b65fcc-2jf6l,Uid:5d8fa112-6a01-4e51-b931-c7ec59ec77a8,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:07:36.561822 containerd[1575]: time="2025-07-07T00:07:36.561715129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-rnpsd,Uid:3c397933-3c8f-4270-8b5a-23cb516115f4,Namespace:calico-system,Attempt:0,}" Jul 7 00:07:36.568041 containerd[1575]: time="2025-07-07T00:07:36.567149769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c459f4946-l6s2s,Uid:e1f9d356-e654-4c17-9394-6754ad117035,Namespace:calico-system,Attempt:0,}" Jul 7 00:07:36.570246 containerd[1575]: time="2025-07-07T00:07:36.570214493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c5b65fcc-f8m9l,Uid:c458bbc8-ed83-4245-b6c3-225052b35912,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:07:36.695288 
containerd[1575]: time="2025-07-07T00:07:36.695222903Z" level=error msg="Failed to destroy network for sandbox \"5452377f55a4ce0f8709e423bbfc39973a8e11eaf4eb8e524e79df64d7f18be5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:36.700480 containerd[1575]: time="2025-07-07T00:07:36.700399097Z" level=error msg="Failed to destroy network for sandbox \"64ebc0120b3373650f2939a214f1258ad0e87cf208cb87c8efb0d56d61920c48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:36.711708 containerd[1575]: time="2025-07-07T00:07:36.711654213Z" level=error msg="Failed to destroy network for sandbox \"8ddf9b5e32aa1ffbf450e39d1b11fe9a5eb65438aeebd19e417f0fb1eff09d7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:36.719926 containerd[1575]: time="2025-07-07T00:07:36.719853592Z" level=error msg="Failed to destroy network for sandbox \"40c4624118fc96b504d3efaa3897bd45dc9bed2f89992a2efb858d66c2a80e60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:36.730140 containerd[1575]: time="2025-07-07T00:07:36.730065706Z" level=error msg="Failed to destroy network for sandbox \"75b4f5b8b8d7a64d9ea9c4e037ba760d77be68195c18214421983ad144ff9a29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:36.732132 containerd[1575]: 
time="2025-07-07T00:07:36.732072801Z" level=error msg="Failed to destroy network for sandbox \"dbc3f954af20976483cbb5871b79561aabc7ba92be1b437bbc9c8c946d36c81c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:36.734103 containerd[1575]: time="2025-07-07T00:07:36.734043246Z" level=error msg="Failed to destroy network for sandbox \"c616ed60b9ff5d934b46c1b1025394f18aaf4d5acb8cc0d138ff9461cf8696a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:36.817995 containerd[1575]: time="2025-07-07T00:07:36.817531106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:07:36.892538 containerd[1575]: time="2025-07-07T00:07:36.892450184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c5b65fcc-2jf6l,Uid:5d8fa112-6a01-4e51-b931-c7ec59ec77a8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5452377f55a4ce0f8709e423bbfc39973a8e11eaf4eb8e524e79df64d7f18be5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:36.953008 kubelet[2725]: E0707 00:07:36.952845 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5452377f55a4ce0f8709e423bbfc39973a8e11eaf4eb8e524e79df64d7f18be5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:36.953008 kubelet[2725]: E0707 00:07:36.953033 2725 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5452377f55a4ce0f8709e423bbfc39973a8e11eaf4eb8e524e79df64d7f18be5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c5b65fcc-2jf6l" Jul 7 00:07:36.953662 kubelet[2725]: E0707 00:07:36.953062 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5452377f55a4ce0f8709e423bbfc39973a8e11eaf4eb8e524e79df64d7f18be5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c5b65fcc-2jf6l" Jul 7 00:07:36.953662 kubelet[2725]: E0707 00:07:36.953135 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57c5b65fcc-2jf6l_calico-apiserver(5d8fa112-6a01-4e51-b931-c7ec59ec77a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57c5b65fcc-2jf6l_calico-apiserver(5d8fa112-6a01-4e51-b931-c7ec59ec77a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5452377f55a4ce0f8709e423bbfc39973a8e11eaf4eb8e524e79df64d7f18be5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57c5b65fcc-2jf6l" podUID="5d8fa112-6a01-4e51-b931-c7ec59ec77a8" Jul 7 00:07:37.045144 containerd[1575]: time="2025-07-07T00:07:37.045056322Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6bb9f,Uid:dd46f789-c107-49d4-8d93-8d4643fff672,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ebc0120b3373650f2939a214f1258ad0e87cf208cb87c8efb0d56d61920c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.045674 kubelet[2725]: E0707 00:07:37.045419 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ebc0120b3373650f2939a214f1258ad0e87cf208cb87c8efb0d56d61920c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.045674 kubelet[2725]: E0707 00:07:37.045507 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ebc0120b3373650f2939a214f1258ad0e87cf208cb87c8efb0d56d61920c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6bb9f" Jul 7 00:07:37.045674 kubelet[2725]: E0707 00:07:37.045536 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ebc0120b3373650f2939a214f1258ad0e87cf208cb87c8efb0d56d61920c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6bb9f" Jul 7 00:07:37.045993 kubelet[2725]: E0707 00:07:37.045953 2725 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6bb9f_kube-system(dd46f789-c107-49d4-8d93-8d4643fff672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6bb9f_kube-system(dd46f789-c107-49d4-8d93-8d4643fff672)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64ebc0120b3373650f2939a214f1258ad0e87cf208cb87c8efb0d56d61920c48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6bb9f" podUID="dd46f789-c107-49d4-8d93-8d4643fff672" Jul 7 00:07:37.109642 containerd[1575]: time="2025-07-07T00:07:37.109426943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ghpfm,Uid:59a91234-bc75-4ad7-bf18-197cbb403576,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ddf9b5e32aa1ffbf450e39d1b11fe9a5eb65438aeebd19e417f0fb1eff09d7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.109809 kubelet[2725]: E0707 00:07:37.109753 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ddf9b5e32aa1ffbf450e39d1b11fe9a5eb65438aeebd19e417f0fb1eff09d7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.109872 kubelet[2725]: E0707 00:07:37.109842 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ddf9b5e32aa1ffbf450e39d1b11fe9a5eb65438aeebd19e417f0fb1eff09d7a\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ghpfm" Jul 7 00:07:37.110083 kubelet[2725]: E0707 00:07:37.109870 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ddf9b5e32aa1ffbf450e39d1b11fe9a5eb65438aeebd19e417f0fb1eff09d7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ghpfm" Jul 7 00:07:37.110083 kubelet[2725]: E0707 00:07:37.109929 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ghpfm_kube-system(59a91234-bc75-4ad7-bf18-197cbb403576)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ghpfm_kube-system(59a91234-bc75-4ad7-bf18-197cbb403576)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ddf9b5e32aa1ffbf450e39d1b11fe9a5eb65438aeebd19e417f0fb1eff09d7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ghpfm" podUID="59a91234-bc75-4ad7-bf18-197cbb403576" Jul 7 00:07:37.126265 containerd[1575]: time="2025-07-07T00:07:37.126173190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4666f86c-r9hqc,Uid:abf225d5-7bc4-4ca5-bceb-0815f17fd350,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40c4624118fc96b504d3efaa3897bd45dc9bed2f89992a2efb858d66c2a80e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.126616 kubelet[2725]: E0707 00:07:37.126544 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40c4624118fc96b504d3efaa3897bd45dc9bed2f89992a2efb858d66c2a80e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.126793 kubelet[2725]: E0707 00:07:37.126631 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40c4624118fc96b504d3efaa3897bd45dc9bed2f89992a2efb858d66c2a80e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f4666f86c-r9hqc" Jul 7 00:07:37.126793 kubelet[2725]: E0707 00:07:37.126658 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40c4624118fc96b504d3efaa3897bd45dc9bed2f89992a2efb858d66c2a80e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f4666f86c-r9hqc" Jul 7 00:07:37.126793 kubelet[2725]: E0707 00:07:37.126714 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f4666f86c-r9hqc_calico-system(abf225d5-7bc4-4ca5-bceb-0815f17fd350)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f4666f86c-r9hqc_calico-system(abf225d5-7bc4-4ca5-bceb-0815f17fd350)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"40c4624118fc96b504d3efaa3897bd45dc9bed2f89992a2efb858d66c2a80e60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f4666f86c-r9hqc" podUID="abf225d5-7bc4-4ca5-bceb-0815f17fd350" Jul 7 00:07:37.142703 containerd[1575]: time="2025-07-07T00:07:37.142630354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-rnpsd,Uid:3c397933-3c8f-4270-8b5a-23cb516115f4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75b4f5b8b8d7a64d9ea9c4e037ba760d77be68195c18214421983ad144ff9a29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.143078 kubelet[2725]: E0707 00:07:37.143032 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75b4f5b8b8d7a64d9ea9c4e037ba760d77be68195c18214421983ad144ff9a29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.143156 kubelet[2725]: E0707 00:07:37.143141 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75b4f5b8b8d7a64d9ea9c4e037ba760d77be68195c18214421983ad144ff9a29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-rnpsd" Jul 7 00:07:37.143199 kubelet[2725]: E0707 00:07:37.143169 2725 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75b4f5b8b8d7a64d9ea9c4e037ba760d77be68195c18214421983ad144ff9a29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-rnpsd" Jul 7 00:07:37.143267 kubelet[2725]: E0707 00:07:37.143228 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-rnpsd_calico-system(3c397933-3c8f-4270-8b5a-23cb516115f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-rnpsd_calico-system(3c397933-3c8f-4270-8b5a-23cb516115f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75b4f5b8b8d7a64d9ea9c4e037ba760d77be68195c18214421983ad144ff9a29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-rnpsd" podUID="3c397933-3c8f-4270-8b5a-23cb516115f4" Jul 7 00:07:37.231524 containerd[1575]: time="2025-07-07T00:07:37.231441862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c459f4946-l6s2s,Uid:e1f9d356-e654-4c17-9394-6754ad117035,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc3f954af20976483cbb5871b79561aabc7ba92be1b437bbc9c8c946d36c81c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.231885 kubelet[2725]: E0707 00:07:37.231813 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dbc3f954af20976483cbb5871b79561aabc7ba92be1b437bbc9c8c946d36c81c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.231949 kubelet[2725]: E0707 00:07:37.231891 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc3f954af20976483cbb5871b79561aabc7ba92be1b437bbc9c8c946d36c81c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c459f4946-l6s2s" Jul 7 00:07:37.231949 kubelet[2725]: E0707 00:07:37.231914 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc3f954af20976483cbb5871b79561aabc7ba92be1b437bbc9c8c946d36c81c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c459f4946-l6s2s" Jul 7 00:07:37.232032 kubelet[2725]: E0707 00:07:37.231967 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c459f4946-l6s2s_calico-system(e1f9d356-e654-4c17-9394-6754ad117035)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c459f4946-l6s2s_calico-system(e1f9d356-e654-4c17-9394-6754ad117035)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbc3f954af20976483cbb5871b79561aabc7ba92be1b437bbc9c8c946d36c81c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c459f4946-l6s2s" 
podUID="e1f9d356-e654-4c17-9394-6754ad117035" Jul 7 00:07:37.315563 containerd[1575]: time="2025-07-07T00:07:37.315460837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c5b65fcc-f8m9l,Uid:c458bbc8-ed83-4245-b6c3-225052b35912,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c616ed60b9ff5d934b46c1b1025394f18aaf4d5acb8cc0d138ff9461cf8696a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.315944 kubelet[2725]: E0707 00:07:37.315867 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c616ed60b9ff5d934b46c1b1025394f18aaf4d5acb8cc0d138ff9461cf8696a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.316038 kubelet[2725]: E0707 00:07:37.315971 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c616ed60b9ff5d934b46c1b1025394f18aaf4d5acb8cc0d138ff9461cf8696a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c5b65fcc-f8m9l" Jul 7 00:07:37.316038 kubelet[2725]: E0707 00:07:37.316000 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c616ed60b9ff5d934b46c1b1025394f18aaf4d5acb8cc0d138ff9461cf8696a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c5b65fcc-f8m9l" Jul 7 00:07:37.316145 kubelet[2725]: E0707 00:07:37.316072 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57c5b65fcc-f8m9l_calico-apiserver(c458bbc8-ed83-4245-b6c3-225052b35912)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57c5b65fcc-f8m9l_calico-apiserver(c458bbc8-ed83-4245-b6c3-225052b35912)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c616ed60b9ff5d934b46c1b1025394f18aaf4d5acb8cc0d138ff9461cf8696a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57c5b65fcc-f8m9l" podUID="c458bbc8-ed83-4245-b6c3-225052b35912" Jul 7 00:07:37.643252 systemd[1]: Created slice kubepods-besteffort-pode9f3e3a2_bf82_4e74_a3d2_023b2f0edaa2.slice - libcontainer container kubepods-besteffort-pode9f3e3a2_bf82_4e74_a3d2_023b2f0edaa2.slice. 
Jul 7 00:07:37.645825 containerd[1575]: time="2025-07-07T00:07:37.645785338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gjph,Uid:e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2,Namespace:calico-system,Attempt:0,}" Jul 7 00:07:37.706886 containerd[1575]: time="2025-07-07T00:07:37.706806222Z" level=error msg="Failed to destroy network for sandbox \"0b07e32e73c423970d33fcdb7b9e5e5cf5b3c457994fdcddbaf9687df03e63ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.709302 containerd[1575]: time="2025-07-07T00:07:37.709247542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gjph,Uid:e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b07e32e73c423970d33fcdb7b9e5e5cf5b3c457994fdcddbaf9687df03e63ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.709765 kubelet[2725]: E0707 00:07:37.709697 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b07e32e73c423970d33fcdb7b9e5e5cf5b3c457994fdcddbaf9687df03e63ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:37.709847 kubelet[2725]: E0707 00:07:37.709796 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b07e32e73c423970d33fcdb7b9e5e5cf5b3c457994fdcddbaf9687df03e63ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7gjph" Jul 7 00:07:37.709847 kubelet[2725]: E0707 00:07:37.709828 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b07e32e73c423970d33fcdb7b9e5e5cf5b3c457994fdcddbaf9687df03e63ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7gjph" Jul 7 00:07:37.709929 kubelet[2725]: E0707 00:07:37.709889 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7gjph_calico-system(e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7gjph_calico-system(e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b07e32e73c423970d33fcdb7b9e5e5cf5b3c457994fdcddbaf9687df03e63ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7gjph" podUID="e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2" Jul 7 00:07:37.710039 systemd[1]: run-netns-cni\x2dc1c3ed61\x2d4280\x2de8b7\x2dda27\x2d3d29af8aca16.mount: Deactivated successfully. Jul 7 00:07:47.088544 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:51266.service - OpenSSH per-connection server daemon (10.0.0.1:51266). 
Jul 7 00:07:47.153831 sshd[3832]: Accepted publickey for core from 10.0.0.1 port 51266 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:07:47.155435 sshd-session[3832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:07:47.161201 systemd-logind[1560]: New session 10 of user core. Jul 7 00:07:47.169165 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:07:47.197715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689537038.mount: Deactivated successfully. Jul 7 00:07:47.637041 containerd[1575]: time="2025-07-07T00:07:47.636952997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c459f4946-l6s2s,Uid:e1f9d356-e654-4c17-9394-6754ad117035,Namespace:calico-system,Attempt:0,}" Jul 7 00:07:48.636808 containerd[1575]: time="2025-07-07T00:07:48.636736352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c5b65fcc-2jf6l,Uid:5d8fa112-6a01-4e51-b931-c7ec59ec77a8,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:07:48.637141 containerd[1575]: time="2025-07-07T00:07:48.637095396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gjph,Uid:e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2,Namespace:calico-system,Attempt:0,}" Jul 7 00:07:49.178705 sshd[3834]: Connection closed by 10.0.0.1 port 51266 Jul 7 00:07:49.179131 sshd-session[3832]: pam_unix(sshd:session): session closed for user core Jul 7 00:07:49.184228 systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:51266.service: Deactivated successfully. Jul 7 00:07:49.190487 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:07:49.191811 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:07:49.193340 systemd-logind[1560]: Removed session 10. 
Jul 7 00:07:49.301049 containerd[1575]: time="2025-07-07T00:07:49.300750574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:49.304984 containerd[1575]: time="2025-07-07T00:07:49.304954218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:07:49.307039 containerd[1575]: time="2025-07-07T00:07:49.306456028Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:49.316594 containerd[1575]: time="2025-07-07T00:07:49.316511081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:49.319367 containerd[1575]: time="2025-07-07T00:07:49.319290240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 12.501711554s" Jul 7 00:07:49.319995 containerd[1575]: time="2025-07-07T00:07:49.319961442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:07:49.341039 containerd[1575]: time="2025-07-07T00:07:49.340955157Z" level=info msg="CreateContainer within sandbox \"85975f2a24c7f0c5299f213bc35588f5c1939c9c7e00340b8e0ffeff74def10e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:07:49.345283 containerd[1575]: time="2025-07-07T00:07:49.345216259Z" level=error msg="Failed to destroy 
network for sandbox \"deff1dfb675dbb103a8b83064423d83c688d7f2beeec6b34af578056dd38009d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.377589 containerd[1575]: time="2025-07-07T00:07:49.377522541Z" level=error msg="Failed to destroy network for sandbox \"89b835bd02cd376030a123173f279b2a9c194291e1b94fc552621c93d7d85ce1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.380260 containerd[1575]: time="2025-07-07T00:07:49.380212092Z" level=error msg="Failed to destroy network for sandbox \"f79d10ca4ff6631cbeaca607b6db5de2fc0865890a986e8fed36c17046ad7878\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.459247 containerd[1575]: time="2025-07-07T00:07:49.458805189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c459f4946-l6s2s,Uid:e1f9d356-e654-4c17-9394-6754ad117035,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"deff1dfb675dbb103a8b83064423d83c688d7f2beeec6b34af578056dd38009d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.459425 kubelet[2725]: E0707 00:07:49.459271 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deff1dfb675dbb103a8b83064423d83c688d7f2beeec6b34af578056dd38009d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.459425 kubelet[2725]: E0707 00:07:49.459361 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deff1dfb675dbb103a8b83064423d83c688d7f2beeec6b34af578056dd38009d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c459f4946-l6s2s" Jul 7 00:07:49.459425 kubelet[2725]: E0707 00:07:49.459386 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deff1dfb675dbb103a8b83064423d83c688d7f2beeec6b34af578056dd38009d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c459f4946-l6s2s" Jul 7 00:07:49.460002 kubelet[2725]: E0707 00:07:49.459435 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c459f4946-l6s2s_calico-system(e1f9d356-e654-4c17-9394-6754ad117035)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c459f4946-l6s2s_calico-system(e1f9d356-e654-4c17-9394-6754ad117035)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"deff1dfb675dbb103a8b83064423d83c688d7f2beeec6b34af578056dd38009d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c459f4946-l6s2s" podUID="e1f9d356-e654-4c17-9394-6754ad117035" Jul 7 00:07:49.498253 containerd[1575]: time="2025-07-07T00:07:49.498163183Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57c5b65fcc-2jf6l,Uid:5d8fa112-6a01-4e51-b931-c7ec59ec77a8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89b835bd02cd376030a123173f279b2a9c194291e1b94fc552621c93d7d85ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.498588 kubelet[2725]: E0707 00:07:49.498536 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89b835bd02cd376030a123173f279b2a9c194291e1b94fc552621c93d7d85ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.498661 kubelet[2725]: E0707 00:07:49.498611 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89b835bd02cd376030a123173f279b2a9c194291e1b94fc552621c93d7d85ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c5b65fcc-2jf6l" Jul 7 00:07:49.498661 kubelet[2725]: E0707 00:07:49.498638 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89b835bd02cd376030a123173f279b2a9c194291e1b94fc552621c93d7d85ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c5b65fcc-2jf6l" Jul 7 00:07:49.498745 kubelet[2725]: E0707 00:07:49.498693 2725 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57c5b65fcc-2jf6l_calico-apiserver(5d8fa112-6a01-4e51-b931-c7ec59ec77a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57c5b65fcc-2jf6l_calico-apiserver(5d8fa112-6a01-4e51-b931-c7ec59ec77a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89b835bd02cd376030a123173f279b2a9c194291e1b94fc552621c93d7d85ce1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57c5b65fcc-2jf6l" podUID="5d8fa112-6a01-4e51-b931-c7ec59ec77a8" Jul 7 00:07:49.551208 containerd[1575]: time="2025-07-07T00:07:49.551109316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gjph,Uid:e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f79d10ca4ff6631cbeaca607b6db5de2fc0865890a986e8fed36c17046ad7878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.551518 kubelet[2725]: E0707 00:07:49.551460 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f79d10ca4ff6631cbeaca607b6db5de2fc0865890a986e8fed36c17046ad7878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.551593 kubelet[2725]: E0707 00:07:49.551544 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f79d10ca4ff6631cbeaca607b6db5de2fc0865890a986e8fed36c17046ad7878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7gjph" Jul 7 00:07:49.551593 kubelet[2725]: E0707 00:07:49.551568 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f79d10ca4ff6631cbeaca607b6db5de2fc0865890a986e8fed36c17046ad7878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7gjph" Jul 7 00:07:49.551669 kubelet[2725]: E0707 00:07:49.551623 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7gjph_calico-system(e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7gjph_calico-system(e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f79d10ca4ff6631cbeaca607b6db5de2fc0865890a986e8fed36c17046ad7878\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7gjph" podUID="e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2" Jul 7 00:07:49.635756 kubelet[2725]: E0707 00:07:49.635530 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:49.636153 containerd[1575]: time="2025-07-07T00:07:49.636098044Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6bb9f,Uid:dd46f789-c107-49d4-8d93-8d4643fff672,Namespace:kube-system,Attempt:0,}" Jul 7 00:07:49.636153 containerd[1575]: time="2025-07-07T00:07:49.636148148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-rnpsd,Uid:3c397933-3c8f-4270-8b5a-23cb516115f4,Namespace:calico-system,Attempt:0,}" Jul 7 00:07:49.652436 containerd[1575]: time="2025-07-07T00:07:49.652392475Z" level=info msg="Container 3a74b31093f024da34d303f5cbe671481a153a07b76bfc974e6ffa3df0e3b5c7: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:07:49.717045 containerd[1575]: time="2025-07-07T00:07:49.716773221Z" level=info msg="CreateContainer within sandbox \"85975f2a24c7f0c5299f213bc35588f5c1939c9c7e00340b8e0ffeff74def10e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3a74b31093f024da34d303f5cbe671481a153a07b76bfc974e6ffa3df0e3b5c7\"" Jul 7 00:07:49.718844 containerd[1575]: time="2025-07-07T00:07:49.718815306Z" level=info msg="StartContainer for \"3a74b31093f024da34d303f5cbe671481a153a07b76bfc974e6ffa3df0e3b5c7\"" Jul 7 00:07:49.721493 containerd[1575]: time="2025-07-07T00:07:49.721450575Z" level=info msg="connecting to shim 3a74b31093f024da34d303f5cbe671481a153a07b76bfc974e6ffa3df0e3b5c7" address="unix:///run/containerd/s/94dd3ff8415c68cc6dc61320d2e4378c00641bd8a17202593c67d5fd7cf3bb92" protocol=ttrpc version=3 Jul 7 00:07:49.775981 containerd[1575]: time="2025-07-07T00:07:49.775897056Z" level=error msg="Failed to destroy network for sandbox \"3e1eb2cd60f492ee6cf38dbae837ca263082d6f86771fea40e64b98460a9d081\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.777852 containerd[1575]: time="2025-07-07T00:07:49.777633607Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-58fd7646b9-rnpsd,Uid:3c397933-3c8f-4270-8b5a-23cb516115f4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e1eb2cd60f492ee6cf38dbae837ca263082d6f86771fea40e64b98460a9d081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.777953 kubelet[2725]: E0707 00:07:49.777863 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e1eb2cd60f492ee6cf38dbae837ca263082d6f86771fea40e64b98460a9d081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.777953 kubelet[2725]: E0707 00:07:49.777921 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e1eb2cd60f492ee6cf38dbae837ca263082d6f86771fea40e64b98460a9d081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-rnpsd" Jul 7 00:07:49.777953 kubelet[2725]: E0707 00:07:49.777949 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e1eb2cd60f492ee6cf38dbae837ca263082d6f86771fea40e64b98460a9d081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-rnpsd" Jul 7 00:07:49.778394 kubelet[2725]: E0707 00:07:49.777994 2725 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-rnpsd_calico-system(3c397933-3c8f-4270-8b5a-23cb516115f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-rnpsd_calico-system(3c397933-3c8f-4270-8b5a-23cb516115f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e1eb2cd60f492ee6cf38dbae837ca263082d6f86771fea40e64b98460a9d081\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-rnpsd" podUID="3c397933-3c8f-4270-8b5a-23cb516115f4" Jul 7 00:07:49.782046 containerd[1575]: time="2025-07-07T00:07:49.781893417Z" level=error msg="Failed to destroy network for sandbox \"b29ce48618daf0315da2a70ab0bbf24dafc75475087cb67a8a9b09c4747dc56f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.783278 containerd[1575]: time="2025-07-07T00:07:49.783224466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6bb9f,Uid:dd46f789-c107-49d4-8d93-8d4643fff672,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29ce48618daf0315da2a70ab0bbf24dafc75475087cb67a8a9b09c4747dc56f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.783506 kubelet[2725]: E0707 00:07:49.783424 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29ce48618daf0315da2a70ab0bbf24dafc75475087cb67a8a9b09c4747dc56f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:07:49.783565 kubelet[2725]: E0707 00:07:49.783519 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29ce48618daf0315da2a70ab0bbf24dafc75475087cb67a8a9b09c4747dc56f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6bb9f" Jul 7 00:07:49.783565 kubelet[2725]: E0707 00:07:49.783552 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29ce48618daf0315da2a70ab0bbf24dafc75475087cb67a8a9b09c4747dc56f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6bb9f" Jul 7 00:07:49.783636 kubelet[2725]: E0707 00:07:49.783600 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6bb9f_kube-system(dd46f789-c107-49d4-8d93-8d4643fff672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6bb9f_kube-system(dd46f789-c107-49d4-8d93-8d4643fff672)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b29ce48618daf0315da2a70ab0bbf24dafc75475087cb67a8a9b09c4747dc56f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6bb9f" podUID="dd46f789-c107-49d4-8d93-8d4643fff672" Jul 7 00:07:49.788248 systemd[1]: Started 
cri-containerd-3a74b31093f024da34d303f5cbe671481a153a07b76bfc974e6ffa3df0e3b5c7.scope - libcontainer container 3a74b31093f024da34d303f5cbe671481a153a07b76bfc974e6ffa3df0e3b5c7.
Jul 7 00:07:49.844096 containerd[1575]: time="2025-07-07T00:07:49.844046570Z" level=info msg="StartContainer for \"3a74b31093f024da34d303f5cbe671481a153a07b76bfc974e6ffa3df0e3b5c7\" returns successfully"
Jul 7 00:07:49.935111 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jul 7 00:07:49.935244 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jul 7 00:07:50.033365 kubelet[2725]: I0707 00:07:50.032446 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lblsh" podStartSLOduration=1.18416086 podStartE2EDuration="28.032424428s" podCreationTimestamp="2025-07-07 00:07:22 +0000 UTC" firstStartedPulling="2025-07-07 00:07:22.474876433 +0000 UTC m=+18.924944085" lastFinishedPulling="2025-07-07 00:07:49.323140001 +0000 UTC m=+45.773207653" observedRunningTime="2025-07-07 00:07:49.88757731 +0000 UTC m=+46.337644962" watchObservedRunningTime="2025-07-07 00:07:50.032424428 +0000 UTC m=+46.482492080"
Jul 7 00:07:50.094662 containerd[1575]: time="2025-07-07T00:07:50.094616708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a74b31093f024da34d303f5cbe671481a153a07b76bfc974e6ffa3df0e3b5c7\" id:\"dc049d84564fbfc8e4cf1a424160453bd7356c107a2cb85e330ea05e58e6f9e8\" pid:4078 exit_status:1 exited_at:{seconds:1751846870 nanos:93913146}"
Jul 7 00:07:50.162981 kubelet[2725]: I0707 00:07:50.162940 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1f9d356-e654-4c17-9394-6754ad117035-whisker-backend-key-pair\") pod \"e1f9d356-e654-4c17-9394-6754ad117035\" (UID: \"e1f9d356-e654-4c17-9394-6754ad117035\") "
Jul 7 00:07:50.162981 kubelet[2725]: I0707 00:07:50.162998 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvjlc\" (UniqueName: \"kubernetes.io/projected/e1f9d356-e654-4c17-9394-6754ad117035-kube-api-access-vvjlc\") pod \"e1f9d356-e654-4c17-9394-6754ad117035\" (UID: \"e1f9d356-e654-4c17-9394-6754ad117035\") "
Jul 7 00:07:50.163266 kubelet[2725]: I0707 00:07:50.163052 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1f9d356-e654-4c17-9394-6754ad117035-whisker-ca-bundle\") pod \"e1f9d356-e654-4c17-9394-6754ad117035\" (UID: \"e1f9d356-e654-4c17-9394-6754ad117035\") "
Jul 7 00:07:50.163584 kubelet[2725]: I0707 00:07:50.163562 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f9d356-e654-4c17-9394-6754ad117035-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e1f9d356-e654-4c17-9394-6754ad117035" (UID: "e1f9d356-e654-4c17-9394-6754ad117035"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 7 00:07:50.167433 kubelet[2725]: I0707 00:07:50.167402 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f9d356-e654-4c17-9394-6754ad117035-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e1f9d356-e654-4c17-9394-6754ad117035" (UID: "e1f9d356-e654-4c17-9394-6754ad117035"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 7 00:07:50.167667 kubelet[2725]: I0707 00:07:50.167632 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f9d356-e654-4c17-9394-6754ad117035-kube-api-access-vvjlc" (OuterVolumeSpecName: "kube-api-access-vvjlc") pod "e1f9d356-e654-4c17-9394-6754ad117035" (UID: "e1f9d356-e654-4c17-9394-6754ad117035"). InnerVolumeSpecName "kube-api-access-vvjlc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 00:07:50.175517 systemd[1]: run-netns-cni\x2da26fc975\x2d3ba5\x2d234a\x2dc298\x2db4481a1e8490.mount: Deactivated successfully.
Jul 7 00:07:50.175620 systemd[1]: run-netns-cni\x2df8e57efb\x2d3daf\x2d708a\x2d948d\x2dd1da2831a255.mount: Deactivated successfully.
Jul 7 00:07:50.175692 systemd[1]: run-netns-cni\x2d81b6a38e\x2de2a4\x2d289e\x2d074f\x2d984e855c3f6a.mount: Deactivated successfully.
Jul 7 00:07:50.175768 systemd[1]: var-lib-kubelet-pods-e1f9d356\x2de654\x2d4c17\x2d9394\x2d6754ad117035-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvvjlc.mount: Deactivated successfully.
Jul 7 00:07:50.175869 systemd[1]: var-lib-kubelet-pods-e1f9d356\x2de654\x2d4c17\x2d9394\x2d6754ad117035-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jul 7 00:07:50.263828 kubelet[2725]: I0707 00:07:50.263760 2725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvjlc\" (UniqueName: \"kubernetes.io/projected/e1f9d356-e654-4c17-9394-6754ad117035-kube-api-access-vvjlc\") on node \"localhost\" DevicePath \"\""
Jul 7 00:07:50.263828 kubelet[2725]: I0707 00:07:50.263810 2725 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1f9d356-e654-4c17-9394-6754ad117035-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Jul 7 00:07:50.263828 kubelet[2725]: I0707 00:07:50.263824 2725 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1f9d356-e654-4c17-9394-6754ad117035-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Jul 7 00:07:50.635849 containerd[1575]: time="2025-07-07T00:07:50.635798179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c5b65fcc-f8m9l,Uid:c458bbc8-ed83-4245-b6c3-225052b35912,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 00:07:50.815838 systemd-networkd[1493]: cali370566e16a1: Link UP
Jul 7 00:07:50.816664 systemd-networkd[1493]: cali370566e16a1: Gained carrier
Jul 7 00:07:50.835230 containerd[1575]: 2025-07-07 00:07:50.661 [INFO][4114] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 7 00:07:50.835230 containerd[1575]: 2025-07-07 00:07:50.680 [INFO][4114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0 calico-apiserver-57c5b65fcc- calico-apiserver c458bbc8-ed83-4245-b6c3-225052b35912 894 0 2025-07-07 00:07:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57c5b65fcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57c5b65fcc-f8m9l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali370566e16a1 [] [] }} ContainerID="cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-f8m9l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-"
Jul 7 00:07:50.835230 containerd[1575]: 2025-07-07 00:07:50.680 [INFO][4114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-f8m9l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0"
Jul 7 00:07:50.835230 containerd[1575]: 2025-07-07 00:07:50.755 [INFO][4130] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" HandleID="k8s-pod-network.cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" Workload="localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0"
Jul 7 00:07:50.835869 containerd[1575]: 2025-07-07 00:07:50.756 [INFO][4130] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" HandleID="k8s-pod-network.cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" Workload="localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004372b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57c5b65fcc-f8m9l", "timestamp":"2025-07-07 00:07:50.755380369 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 00:07:50.835869 containerd[1575]: 2025-07-07 00:07:50.756 [INFO][4130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:07:50.835869 containerd[1575]: 2025-07-07 00:07:50.756 [INFO][4130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:07:50.835869 containerd[1575]: 2025-07-07 00:07:50.756 [INFO][4130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 00:07:50.835869 containerd[1575]: 2025-07-07 00:07:50.765 [INFO][4130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" host="localhost"
Jul 7 00:07:50.835869 containerd[1575]: 2025-07-07 00:07:50.779 [INFO][4130] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 00:07:50.835869 containerd[1575]: 2025-07-07 00:07:50.784 [INFO][4130] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 00:07:50.835869 containerd[1575]: 2025-07-07 00:07:50.786 [INFO][4130] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 00:07:50.835869 containerd[1575]: 2025-07-07 00:07:50.788 [INFO][4130] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 00:07:50.835869 containerd[1575]: 2025-07-07 00:07:50.788 [INFO][4130] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" host="localhost"
Jul 7 00:07:50.836215 containerd[1575]: 2025-07-07 00:07:50.790 [INFO][4130] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe
Jul 7 00:07:50.836215 containerd[1575]: 2025-07-07 00:07:50.794 [INFO][4130] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" host="localhost"
Jul 7 00:07:50.836215 containerd[1575]: 2025-07-07 00:07:50.802 [INFO][4130] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" host="localhost"
Jul 7 00:07:50.836215 containerd[1575]: 2025-07-07 00:07:50.802 [INFO][4130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" host="localhost"
Jul 7 00:07:50.836215 containerd[1575]: 2025-07-07 00:07:50.802 [INFO][4130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:07:50.836215 containerd[1575]: 2025-07-07 00:07:50.802 [INFO][4130] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" HandleID="k8s-pod-network.cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" Workload="localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0"
Jul 7 00:07:50.836378 containerd[1575]: 2025-07-07 00:07:50.807 [INFO][4114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-f8m9l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0", GenerateName:"calico-apiserver-57c5b65fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"c458bbc8-ed83-4245-b6c3-225052b35912", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57c5b65fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57c5b65fcc-f8m9l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali370566e16a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:07:50.836452 containerd[1575]: 2025-07-07 00:07:50.807 [INFO][4114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-f8m9l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0"
Jul 7 00:07:50.836452 containerd[1575]: 2025-07-07 00:07:50.807 [INFO][4114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali370566e16a1 ContainerID="cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-f8m9l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0"
Jul 7 00:07:50.836452 containerd[1575]: 2025-07-07 00:07:50.816 [INFO][4114] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-f8m9l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0"
Jul 7 00:07:50.836524 containerd[1575]: 2025-07-07 00:07:50.819 [INFO][4114] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-f8m9l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0", GenerateName:"calico-apiserver-57c5b65fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"c458bbc8-ed83-4245-b6c3-225052b35912", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57c5b65fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe", Pod:"calico-apiserver-57c5b65fcc-f8m9l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali370566e16a1", MAC:"6e:05:62:59:ad:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:07:50.836598 containerd[1575]: 2025-07-07 00:07:50.831 [INFO][4114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-f8m9l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--f8m9l-eth0"
Jul 7 00:07:50.896414 systemd[1]: Removed slice kubepods-besteffort-pode1f9d356_e654_4c17_9394_6754ad117035.slice - libcontainer container kubepods-besteffort-pode1f9d356_e654_4c17_9394_6754ad117035.slice.
Jul 7 00:07:50.973522 systemd[1]: Created slice kubepods-besteffort-pod8deabd15_60a9_4008_92e4_c3ec0e1b6375.slice - libcontainer container kubepods-besteffort-pod8deabd15_60a9_4008_92e4_c3ec0e1b6375.slice.
Jul 7 00:07:50.980973 containerd[1575]: time="2025-07-07T00:07:50.980350905Z" level=info msg="connecting to shim cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe" address="unix:///run/containerd/s/6c465052bee535e50c0fb1bfe37b2e1e8c2a9a6bd3dad687bae0750cbb8cd384" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:07:50.994331 containerd[1575]: time="2025-07-07T00:07:50.994281125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a74b31093f024da34d303f5cbe671481a153a07b76bfc974e6ffa3df0e3b5c7\" id:\"eda907946a82265acbddd430ca1bc3da7e690f9592a0b11ff74934cc592a6130\" pid:4158 exit_status:1 exited_at:{seconds:1751846870 nanos:992646855}"
Jul 7 00:07:51.032168 systemd[1]: Started cri-containerd-cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe.scope - libcontainer container cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe.
Jul 7 00:07:51.045459 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 00:07:51.070391 kubelet[2725]: I0707 00:07:51.070353 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2x8w\" (UniqueName: \"kubernetes.io/projected/8deabd15-60a9-4008-92e4-c3ec0e1b6375-kube-api-access-f2x8w\") pod \"whisker-584998b6cd-prdh2\" (UID: \"8deabd15-60a9-4008-92e4-c3ec0e1b6375\") " pod="calico-system/whisker-584998b6cd-prdh2"
Jul 7 00:07:51.070783 kubelet[2725]: I0707 00:07:51.070401 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8deabd15-60a9-4008-92e4-c3ec0e1b6375-whisker-ca-bundle\") pod \"whisker-584998b6cd-prdh2\" (UID: \"8deabd15-60a9-4008-92e4-c3ec0e1b6375\") " pod="calico-system/whisker-584998b6cd-prdh2"
Jul 7 00:07:51.070783 kubelet[2725]: I0707 00:07:51.070431 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8deabd15-60a9-4008-92e4-c3ec0e1b6375-whisker-backend-key-pair\") pod \"whisker-584998b6cd-prdh2\" (UID: \"8deabd15-60a9-4008-92e4-c3ec0e1b6375\") " pod="calico-system/whisker-584998b6cd-prdh2"
Jul 7 00:07:51.079403 containerd[1575]: time="2025-07-07T00:07:51.079362638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c5b65fcc-f8m9l,Uid:c458bbc8-ed83-4245-b6c3-225052b35912,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe\""
Jul 7 00:07:51.085220 containerd[1575]: time="2025-07-07T00:07:51.085175844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 7 00:07:51.286784 containerd[1575]: time="2025-07-07T00:07:51.286632732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-584998b6cd-prdh2,Uid:8deabd15-60a9-4008-92e4-c3ec0e1b6375,Namespace:calico-system,Attempt:0,}"
Jul 7 00:07:51.541200 systemd-networkd[1493]: cali8a35882d2ec: Link UP
Jul 7 00:07:51.542042 systemd-networkd[1493]: cali8a35882d2ec: Gained carrier
Jul 7 00:07:51.556417 containerd[1575]: 2025-07-07 00:07:51.333 [INFO][4222] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 7 00:07:51.556417 containerd[1575]: 2025-07-07 00:07:51.349 [INFO][4222] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--584998b6cd--prdh2-eth0 whisker-584998b6cd- calico-system 8deabd15-60a9-4008-92e4-c3ec0e1b6375 1035 0 2025-07-07 00:07:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:584998b6cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-584998b6cd-prdh2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8a35882d2ec [] [] }} ContainerID="d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" Namespace="calico-system" Pod="whisker-584998b6cd-prdh2" WorkloadEndpoint="localhost-k8s-whisker--584998b6cd--prdh2-"
Jul 7 00:07:51.556417 containerd[1575]: 2025-07-07 00:07:51.350 [INFO][4222] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" Namespace="calico-system" Pod="whisker-584998b6cd-prdh2" WorkloadEndpoint="localhost-k8s-whisker--584998b6cd--prdh2-eth0"
Jul 7 00:07:51.556417 containerd[1575]: 2025-07-07 00:07:51.496 [INFO][4325] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" HandleID="k8s-pod-network.d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" Workload="localhost-k8s-whisker--584998b6cd--prdh2-eth0"
Jul 7 00:07:51.556783 containerd[1575]: 2025-07-07 00:07:51.496 [INFO][4325] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" HandleID="k8s-pod-network.d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" Workload="localhost-k8s-whisker--584998b6cd--prdh2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138980), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-584998b6cd-prdh2", "timestamp":"2025-07-07 00:07:51.496511754 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 00:07:51.556783 containerd[1575]: 2025-07-07 00:07:51.496 [INFO][4325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:07:51.556783 containerd[1575]: 2025-07-07 00:07:51.496 [INFO][4325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:07:51.556783 containerd[1575]: 2025-07-07 00:07:51.496 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 00:07:51.556783 containerd[1575]: 2025-07-07 00:07:51.504 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" host="localhost"
Jul 7 00:07:51.556783 containerd[1575]: 2025-07-07 00:07:51.509 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 00:07:51.556783 containerd[1575]: 2025-07-07 00:07:51.514 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 00:07:51.556783 containerd[1575]: 2025-07-07 00:07:51.516 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 00:07:51.556783 containerd[1575]: 2025-07-07 00:07:51.519 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 00:07:51.556783 containerd[1575]: 2025-07-07 00:07:51.519 [INFO][4325] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" host="localhost"
Jul 7 00:07:51.557106 containerd[1575]: 2025-07-07 00:07:51.521 [INFO][4325] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff
Jul 7 00:07:51.557106 containerd[1575]: 2025-07-07 00:07:51.528 [INFO][4325] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" host="localhost"
Jul 7 00:07:51.557106 containerd[1575]: 2025-07-07 00:07:51.533 [INFO][4325] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" host="localhost"
Jul 7 00:07:51.557106 containerd[1575]: 2025-07-07 00:07:51.533 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" host="localhost"
Jul 7 00:07:51.557106 containerd[1575]: 2025-07-07 00:07:51.533 [INFO][4325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:07:51.557106 containerd[1575]: 2025-07-07 00:07:51.533 [INFO][4325] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" HandleID="k8s-pod-network.d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" Workload="localhost-k8s-whisker--584998b6cd--prdh2-eth0"
Jul 7 00:07:51.557260 containerd[1575]: 2025-07-07 00:07:51.539 [INFO][4222] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" Namespace="calico-system" Pod="whisker-584998b6cd-prdh2" WorkloadEndpoint="localhost-k8s-whisker--584998b6cd--prdh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--584998b6cd--prdh2-eth0", GenerateName:"whisker-584998b6cd-", Namespace:"calico-system", SelfLink:"", UID:"8deabd15-60a9-4008-92e4-c3ec0e1b6375", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"584998b6cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-584998b6cd-prdh2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8a35882d2ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:07:51.557260 containerd[1575]: 2025-07-07 00:07:51.539 [INFO][4222] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" Namespace="calico-system" Pod="whisker-584998b6cd-prdh2" WorkloadEndpoint="localhost-k8s-whisker--584998b6cd--prdh2-eth0"
Jul 7 00:07:51.557355 containerd[1575]: 2025-07-07 00:07:51.539 [INFO][4222] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a35882d2ec ContainerID="d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" Namespace="calico-system" Pod="whisker-584998b6cd-prdh2" WorkloadEndpoint="localhost-k8s-whisker--584998b6cd--prdh2-eth0"
Jul 7 00:07:51.557355 containerd[1575]: 2025-07-07 00:07:51.542 [INFO][4222] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" Namespace="calico-system" Pod="whisker-584998b6cd-prdh2" WorkloadEndpoint="localhost-k8s-whisker--584998b6cd--prdh2-eth0"
Jul 7 00:07:51.557407 containerd[1575]: 2025-07-07 00:07:51.543 [INFO][4222] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" Namespace="calico-system" Pod="whisker-584998b6cd-prdh2" WorkloadEndpoint="localhost-k8s-whisker--584998b6cd--prdh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--584998b6cd--prdh2-eth0", GenerateName:"whisker-584998b6cd-", Namespace:"calico-system", SelfLink:"", UID:"8deabd15-60a9-4008-92e4-c3ec0e1b6375", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"584998b6cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff", Pod:"whisker-584998b6cd-prdh2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8a35882d2ec", MAC:"be:e0:2f:9e:17:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:07:51.557473 containerd[1575]: 2025-07-07 00:07:51.551 [INFO][4222] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" Namespace="calico-system" Pod="whisker-584998b6cd-prdh2" WorkloadEndpoint="localhost-k8s-whisker--584998b6cd--prdh2-eth0"
Jul 7 00:07:51.584235 containerd[1575]: time="2025-07-07T00:07:51.584165347Z" level=info msg="connecting to shim d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff" address="unix:///run/containerd/s/f56f7d121ff98434229fb8c3fdedd0adb20aa591e28e04b6d927e9af254ee016" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:07:51.617208 systemd[1]: Started cri-containerd-d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff.scope - libcontainer container d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff.
Jul 7 00:07:51.634453 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 00:07:51.640609 kubelet[2725]: I0707 00:07:51.640570 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f9d356-e654-4c17-9394-6754ad117035" path="/var/lib/kubelet/pods/e1f9d356-e654-4c17-9394-6754ad117035/volumes"
Jul 7 00:07:51.721290 containerd[1575]: time="2025-07-07T00:07:51.721239179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-584998b6cd-prdh2,Uid:8deabd15-60a9-4008-92e4-c3ec0e1b6375,Namespace:calico-system,Attempt:0,} returns sandbox id \"d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff\""
Jul 7 00:07:51.951373 systemd-networkd[1493]: vxlan.calico: Link UP
Jul 7 00:07:51.951382 systemd-networkd[1493]: vxlan.calico: Gained carrier
Jul 7 00:07:52.099235 systemd-networkd[1493]: cali370566e16a1: Gained IPv6LL
Jul 7 00:07:52.636758 kubelet[2725]: E0707 00:07:52.635780 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:07:52.637524 containerd[1575]: time="2025-07-07T00:07:52.636381961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ghpfm,Uid:59a91234-bc75-4ad7-bf18-197cbb403576,Namespace:kube-system,Attempt:0,}"
Jul 7 00:07:52.638979 containerd[1575]: time="2025-07-07T00:07:52.637973880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4666f86c-r9hqc,Uid:abf225d5-7bc4-4ca5-bceb-0815f17fd350,Namespace:calico-system,Attempt:0,}"
Jul 7 00:07:52.757853 systemd-networkd[1493]: cali830f9e70e8c: Link UP
Jul 7 00:07:52.758539 systemd-networkd[1493]: cali830f9e70e8c: Gained carrier
Jul 7 00:07:52.773650 containerd[1575]: 2025-07-07 00:07:52.687 [INFO][4493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0 coredns-7c65d6cfc9- kube-system 59a91234-bc75-4ad7-bf18-197cbb403576 887 0 2025-07-07 00:07:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-ghpfm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali830f9e70e8c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ghpfm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ghpfm-"
Jul 7 00:07:52.773650 containerd[1575]: 2025-07-07 00:07:52.687 [INFO][4493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ghpfm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0"
Jul 7 00:07:52.773650 containerd[1575]: 2025-07-07 00:07:52.717 [INFO][4520] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" HandleID="k8s-pod-network.b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" Workload="localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0"
Jul 7 00:07:52.774242 containerd[1575]: 2025-07-07 00:07:52.717 [INFO][4520] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" HandleID="k8s-pod-network.b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" Workload="localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-ghpfm", "timestamp":"2025-07-07 00:07:52.717536271 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 00:07:52.774242 containerd[1575]: 2025-07-07 00:07:52.717 [INFO][4520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:07:52.774242 containerd[1575]: 2025-07-07 00:07:52.718 [INFO][4520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:07:52.774242 containerd[1575]: 2025-07-07 00:07:52.718 [INFO][4520] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 00:07:52.774242 containerd[1575]: 2025-07-07 00:07:52.725 [INFO][4520] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" host="localhost"
Jul 7 00:07:52.774242 containerd[1575]: 2025-07-07 00:07:52.730 [INFO][4520] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 00:07:52.774242 containerd[1575]: 2025-07-07 00:07:52.734 [INFO][4520] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 00:07:52.774242 containerd[1575]: 2025-07-07 00:07:52.736 [INFO][4520] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 00:07:52.774242 containerd[1575]: 2025-07-07 00:07:52.739 [INFO][4520] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 00:07:52.774242 containerd[1575]: 2025-07-07 00:07:52.739 [INFO][4520] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" host="localhost"
Jul 7 00:07:52.774476 containerd[1575]: 2025-07-07 00:07:52.741 [INFO][4520] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559
Jul 7 00:07:52.774476 containerd[1575]: 2025-07-07 00:07:52.745 [INFO][4520] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" host="localhost"
Jul 7 00:07:52.774476 containerd[1575]: 2025-07-07 00:07:52.750 [INFO][4520] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" host="localhost"
Jul 7 00:07:52.774476 containerd[1575]: 2025-07-07 00:07:52.750 [INFO][4520] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" host="localhost"
Jul 7 00:07:52.774476 containerd[1575]: 2025-07-07 00:07:52.750 [INFO][4520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:07:52.774476 containerd[1575]: 2025-07-07 00:07:52.750 [INFO][4520] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" HandleID="k8s-pod-network.b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" Workload="localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0"
Jul 7 00:07:52.774595 containerd[1575]: 2025-07-07 00:07:52.755 [INFO][4493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ghpfm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"59a91234-bc75-4ad7-bf18-197cbb403576", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-ghpfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali830f9e70e8c", MAC:"",
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:07:52.774668 containerd[1575]: 2025-07-07 00:07:52.755 [INFO][4493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ghpfm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0" Jul 7 00:07:52.774668 containerd[1575]: 2025-07-07 00:07:52.755 [INFO][4493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali830f9e70e8c ContainerID="b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ghpfm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0" Jul 7 00:07:52.774668 containerd[1575]: 2025-07-07 00:07:52.759 [INFO][4493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ghpfm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0" Jul 7 00:07:52.774768 containerd[1575]: 2025-07-07 00:07:52.759 [INFO][4493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ghpfm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"59a91234-bc75-4ad7-bf18-197cbb403576", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559", Pod:"coredns-7c65d6cfc9-ghpfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali830f9e70e8c", MAC:"c6:03:0e:97:18:dc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:07:52.774768 containerd[1575]: 2025-07-07 00:07:52.770 [INFO][4493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ghpfm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ghpfm-eth0" Jul 7 00:07:52.811341 containerd[1575]: time="2025-07-07T00:07:52.811294878Z" level=info msg="connecting to shim b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559" address="unix:///run/containerd/s/375d919219326e07243985379b698f9ef7b89bbe5cfe89907328b059404a8bc3" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:07:52.837368 systemd[1]: Started cri-containerd-b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559.scope - libcontainer container b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559. Jul 7 00:07:52.854907 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:07:52.860884 systemd-networkd[1493]: calib52da26db64: Link UP Jul 7 00:07:52.861406 systemd-networkd[1493]: calib52da26db64: Gained carrier Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.693 [INFO][4503] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0 calico-kube-controllers-7f4666f86c- calico-system abf225d5-7bc4-4ca5-bceb-0815f17fd350 891 0 2025-07-07 00:07:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f4666f86c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f4666f86c-r9hqc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib52da26db64 [] [] }} ContainerID="4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" Namespace="calico-system" Pod="calico-kube-controllers-7f4666f86c-r9hqc" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-" Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.693 [INFO][4503] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" Namespace="calico-system" Pod="calico-kube-controllers-7f4666f86c-r9hqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0" Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.729 [INFO][4526] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" HandleID="k8s-pod-network.4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" Workload="localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0" Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.729 [INFO][4526] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" HandleID="k8s-pod-network.4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" Workload="localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f840), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f4666f86c-r9hqc", "timestamp":"2025-07-07 00:07:52.728982303 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.729 [INFO][4526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.750 [INFO][4526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.751 [INFO][4526] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.826 [INFO][4526] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" host="localhost" Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.831 [INFO][4526] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.836 [INFO][4526] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.837 [INFO][4526] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.839 [INFO][4526] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.839 [INFO][4526] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" host="localhost" Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.842 [INFO][4526] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31 Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.848 [INFO][4526] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" host="localhost" Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.853 [INFO][4526] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" host="localhost" Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.853 [INFO][4526] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" host="localhost" Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.853 [INFO][4526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:07:52.875513 containerd[1575]: 2025-07-07 00:07:52.853 [INFO][4526] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" HandleID="k8s-pod-network.4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" Workload="localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0" Jul 7 00:07:52.876301 containerd[1575]: 2025-07-07 00:07:52.858 [INFO][4503] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" Namespace="calico-system" Pod="calico-kube-controllers-7f4666f86c-r9hqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0", GenerateName:"calico-kube-controllers-7f4666f86c-", Namespace:"calico-system", SelfLink:"", UID:"abf225d5-7bc4-4ca5-bceb-0815f17fd350", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4666f86c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f4666f86c-r9hqc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib52da26db64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:07:52.876301 containerd[1575]: 2025-07-07 00:07:52.858 [INFO][4503] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" Namespace="calico-system" Pod="calico-kube-controllers-7f4666f86c-r9hqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0" Jul 7 00:07:52.876301 containerd[1575]: 2025-07-07 00:07:52.858 [INFO][4503] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib52da26db64 ContainerID="4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" Namespace="calico-system" Pod="calico-kube-controllers-7f4666f86c-r9hqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0" Jul 7 00:07:52.876301 containerd[1575]: 2025-07-07 00:07:52.861 [INFO][4503] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" Namespace="calico-system" Pod="calico-kube-controllers-7f4666f86c-r9hqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0" Jul 7 00:07:52.876301 containerd[1575]: 2025-07-07 
00:07:52.862 [INFO][4503] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" Namespace="calico-system" Pod="calico-kube-controllers-7f4666f86c-r9hqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0", GenerateName:"calico-kube-controllers-7f4666f86c-", Namespace:"calico-system", SelfLink:"", UID:"abf225d5-7bc4-4ca5-bceb-0815f17fd350", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4666f86c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31", Pod:"calico-kube-controllers-7f4666f86c-r9hqc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib52da26db64", MAC:"b2:63:c8:ac:ec:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:07:52.876301 containerd[1575]: 2025-07-07 
00:07:52.870 [INFO][4503] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" Namespace="calico-system" Pod="calico-kube-controllers-7f4666f86c-r9hqc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f4666f86c--r9hqc-eth0" Jul 7 00:07:52.900909 containerd[1575]: time="2025-07-07T00:07:52.900610611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ghpfm,Uid:59a91234-bc75-4ad7-bf18-197cbb403576,Namespace:kube-system,Attempt:0,} returns sandbox id \"b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559\"" Jul 7 00:07:52.901924 kubelet[2725]: E0707 00:07:52.901559 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:52.904824 containerd[1575]: time="2025-07-07T00:07:52.904781323Z" level=info msg="connecting to shim 4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31" address="unix:///run/containerd/s/1106da93ef391515c72b8c2e79f4746fe56cf89bbde1ff85481d1a72f9a234f1" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:07:52.906757 containerd[1575]: time="2025-07-07T00:07:52.906723990Z" level=info msg="CreateContainer within sandbox \"b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:07:52.926935 containerd[1575]: time="2025-07-07T00:07:52.926889735Z" level=info msg="Container 65dd17a98e322b79fe2f80536d63485566d08fdd947eb22d9a3f1829685a0ae9: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:07:52.932160 systemd[1]: Started cri-containerd-4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31.scope - libcontainer container 4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31. 
Jul 7 00:07:52.933431 containerd[1575]: time="2025-07-07T00:07:52.933384219Z" level=info msg="CreateContainer within sandbox \"b136d8f846f0e3daa755dfbbecbdc1f1f7ac5ef1a9415b930a028dbe3bccd559\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65dd17a98e322b79fe2f80536d63485566d08fdd947eb22d9a3f1829685a0ae9\"" Jul 7 00:07:52.934562 containerd[1575]: time="2025-07-07T00:07:52.934156580Z" level=info msg="StartContainer for \"65dd17a98e322b79fe2f80536d63485566d08fdd947eb22d9a3f1829685a0ae9\"" Jul 7 00:07:52.935285 containerd[1575]: time="2025-07-07T00:07:52.935261073Z" level=info msg="connecting to shim 65dd17a98e322b79fe2f80536d63485566d08fdd947eb22d9a3f1829685a0ae9" address="unix:///run/containerd/s/375d919219326e07243985379b698f9ef7b89bbe5cfe89907328b059404a8bc3" protocol=ttrpc version=3 Jul 7 00:07:52.950834 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:07:52.960242 systemd[1]: Started cri-containerd-65dd17a98e322b79fe2f80536d63485566d08fdd947eb22d9a3f1829685a0ae9.scope - libcontainer container 65dd17a98e322b79fe2f80536d63485566d08fdd947eb22d9a3f1829685a0ae9. 
Jul 7 00:07:52.988679 containerd[1575]: time="2025-07-07T00:07:52.988567585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4666f86c-r9hqc,Uid:abf225d5-7bc4-4ca5-bceb-0815f17fd350,Namespace:calico-system,Attempt:0,} returns sandbox id \"4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31\"" Jul 7 00:07:53.011123 containerd[1575]: time="2025-07-07T00:07:53.011045440Z" level=info msg="StartContainer for \"65dd17a98e322b79fe2f80536d63485566d08fdd947eb22d9a3f1829685a0ae9\" returns successfully" Jul 7 00:07:53.507789 systemd-networkd[1493]: vxlan.calico: Gained IPv6LL Jul 7 00:07:53.571337 systemd-networkd[1493]: cali8a35882d2ec: Gained IPv6LL Jul 7 00:07:53.890007 kubelet[2725]: E0707 00:07:53.889888 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:53.904730 kubelet[2725]: I0707 00:07:53.904237 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ghpfm" podStartSLOduration=43.90421778 podStartE2EDuration="43.90421778s" podCreationTimestamp="2025-07-07 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:07:53.903698345 +0000 UTC m=+50.353765997" watchObservedRunningTime="2025-07-07 00:07:53.90421778 +0000 UTC m=+50.354285432" Jul 7 00:07:54.195400 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:51278.service - OpenSSH per-connection server daemon (10.0.0.1:51278). Jul 7 00:07:54.255862 sshd[4697]: Accepted publickey for core from 10.0.0.1 port 51278 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:07:54.257614 sshd-session[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:07:54.262931 systemd-logind[1560]: New session 11 of user core. 
Jul 7 00:07:54.273133 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:07:54.404206 systemd-networkd[1493]: calib52da26db64: Gained IPv6LL Jul 7 00:07:54.423641 sshd[4699]: Connection closed by 10.0.0.1 port 51278 Jul 7 00:07:54.424073 sshd-session[4697]: pam_unix(sshd:session): session closed for user core Jul 7 00:07:54.429448 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:51278.service: Deactivated successfully. Jul 7 00:07:54.432007 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:07:54.433066 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:07:54.434897 systemd-logind[1560]: Removed session 11. Jul 7 00:07:54.531251 systemd-networkd[1493]: cali830f9e70e8c: Gained IPv6LL Jul 7 00:07:54.603713 containerd[1575]: time="2025-07-07T00:07:54.603632215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:54.604297 containerd[1575]: time="2025-07-07T00:07:54.604237481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:07:54.605335 containerd[1575]: time="2025-07-07T00:07:54.605277714Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:54.607343 containerd[1575]: time="2025-07-07T00:07:54.607265205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:54.613790 containerd[1575]: time="2025-07-07T00:07:54.613759017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.528530375s" Jul 7 00:07:54.613790 containerd[1575]: time="2025-07-07T00:07:54.613789334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:07:54.619932 containerd[1575]: time="2025-07-07T00:07:54.619897632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:07:54.628672 containerd[1575]: time="2025-07-07T00:07:54.628619818Z" level=info msg="CreateContainer within sandbox \"cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:07:54.636394 containerd[1575]: time="2025-07-07T00:07:54.636361192Z" level=info msg="Container a0a76e782cfea7cd3b3c988c69e140181d75a9d04072b7d90246ba001e37aa3d: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:07:54.646195 containerd[1575]: time="2025-07-07T00:07:54.646160249Z" level=info msg="CreateContainer within sandbox \"cb5cbc02eec665c195c8230279c68e275443f10016707fe747582a7f629ccebe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a0a76e782cfea7cd3b3c988c69e140181d75a9d04072b7d90246ba001e37aa3d\"" Jul 7 00:07:54.647097 containerd[1575]: time="2025-07-07T00:07:54.647067673Z" level=info msg="StartContainer for \"a0a76e782cfea7cd3b3c988c69e140181d75a9d04072b7d90246ba001e37aa3d\"" Jul 7 00:07:54.648523 containerd[1575]: time="2025-07-07T00:07:54.648466599Z" level=info msg="connecting to shim a0a76e782cfea7cd3b3c988c69e140181d75a9d04072b7d90246ba001e37aa3d" address="unix:///run/containerd/s/6c465052bee535e50c0fb1bfe37b2e1e8c2a9a6bd3dad687bae0750cbb8cd384" protocol=ttrpc version=3 Jul 7 00:07:54.679203 systemd[1]: Started 
cri-containerd-a0a76e782cfea7cd3b3c988c69e140181d75a9d04072b7d90246ba001e37aa3d.scope - libcontainer container a0a76e782cfea7cd3b3c988c69e140181d75a9d04072b7d90246ba001e37aa3d. Jul 7 00:07:54.749058 containerd[1575]: time="2025-07-07T00:07:54.748959021Z" level=info msg="StartContainer for \"a0a76e782cfea7cd3b3c988c69e140181d75a9d04072b7d90246ba001e37aa3d\" returns successfully" Jul 7 00:07:54.894952 kubelet[2725]: E0707 00:07:54.894047 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:54.903908 kubelet[2725]: I0707 00:07:54.903852 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57c5b65fcc-f8m9l" podStartSLOduration=33.368628435 podStartE2EDuration="36.903722823s" podCreationTimestamp="2025-07-07 00:07:18 +0000 UTC" firstStartedPulling="2025-07-07 00:07:51.084664944 +0000 UTC m=+47.534732596" lastFinishedPulling="2025-07-07 00:07:54.619759332 +0000 UTC m=+51.069826984" observedRunningTime="2025-07-07 00:07:54.903351807 +0000 UTC m=+51.353419459" watchObservedRunningTime="2025-07-07 00:07:54.903722823 +0000 UTC m=+51.353790475" Jul 7 00:07:55.895399 kubelet[2725]: I0707 00:07:55.895361 2725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:07:55.895925 kubelet[2725]: E0707 00:07:55.895758 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:07:56.050029 containerd[1575]: time="2025-07-07T00:07:56.049951832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:56.050801 containerd[1575]: time="2025-07-07T00:07:56.050765700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active 
requests=0, bytes read=4661207" Jul 7 00:07:56.051951 containerd[1575]: time="2025-07-07T00:07:56.051920167Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:56.054107 containerd[1575]: time="2025-07-07T00:07:56.054055786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:56.055929 containerd[1575]: time="2025-07-07T00:07:56.055894197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.435967841s" Jul 7 00:07:56.056052 containerd[1575]: time="2025-07-07T00:07:56.056033659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:07:56.058781 containerd[1575]: time="2025-07-07T00:07:56.058755439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:07:56.059677 containerd[1575]: time="2025-07-07T00:07:56.059642534Z" level=info msg="CreateContainer within sandbox \"d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:07:56.067784 containerd[1575]: time="2025-07-07T00:07:56.067736369Z" level=info msg="Container d25099796750b9ffb4ff787f59f539d695fd59853425b608656727b207d8985a: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:07:56.075647 containerd[1575]: time="2025-07-07T00:07:56.075588840Z" level=info msg="CreateContainer 
within sandbox \"d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d25099796750b9ffb4ff787f59f539d695fd59853425b608656727b207d8985a\"" Jul 7 00:07:56.076144 containerd[1575]: time="2025-07-07T00:07:56.076110409Z" level=info msg="StartContainer for \"d25099796750b9ffb4ff787f59f539d695fd59853425b608656727b207d8985a\"" Jul 7 00:07:56.077338 containerd[1575]: time="2025-07-07T00:07:56.077306755Z" level=info msg="connecting to shim d25099796750b9ffb4ff787f59f539d695fd59853425b608656727b207d8985a" address="unix:///run/containerd/s/f56f7d121ff98434229fb8c3fdedd0adb20aa591e28e04b6d927e9af254ee016" protocol=ttrpc version=3 Jul 7 00:07:56.105229 systemd[1]: Started cri-containerd-d25099796750b9ffb4ff787f59f539d695fd59853425b608656727b207d8985a.scope - libcontainer container d25099796750b9ffb4ff787f59f539d695fd59853425b608656727b207d8985a. Jul 7 00:07:56.285963 containerd[1575]: time="2025-07-07T00:07:56.285844873Z" level=info msg="StartContainer for \"d25099796750b9ffb4ff787f59f539d695fd59853425b608656727b207d8985a\" returns successfully" Jul 7 00:07:59.066314 containerd[1575]: time="2025-07-07T00:07:59.066229186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:59.067156 containerd[1575]: time="2025-07-07T00:07:59.067113726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:07:59.068321 containerd[1575]: time="2025-07-07T00:07:59.068269756Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:59.070391 containerd[1575]: time="2025-07-07T00:07:59.070350933Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:07:59.070823 containerd[1575]: time="2025-07-07T00:07:59.070781040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.011858056s" Jul 7 00:07:59.070823 containerd[1575]: time="2025-07-07T00:07:59.070812879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:07:59.071986 containerd[1575]: time="2025-07-07T00:07:59.071962237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:07:59.080409 containerd[1575]: time="2025-07-07T00:07:59.080360982Z" level=info msg="CreateContainer within sandbox \"4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:07:59.093655 containerd[1575]: time="2025-07-07T00:07:59.092714971Z" level=info msg="Container e1ff10fc037d0f12a642c24eb4f582a7d6eff9c2b281fab2e72c23aeba2e171a: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:07:59.101781 containerd[1575]: time="2025-07-07T00:07:59.101733089Z" level=info msg="CreateContainer within sandbox \"4525f141d66feecc5de3afc77429aa2bafcc13c44bc4f15acfacbdbf960d9d31\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e1ff10fc037d0f12a642c24eb4f582a7d6eff9c2b281fab2e72c23aeba2e171a\"" Jul 7 00:07:59.102406 containerd[1575]: time="2025-07-07T00:07:59.102374002Z" level=info 
msg="StartContainer for \"e1ff10fc037d0f12a642c24eb4f582a7d6eff9c2b281fab2e72c23aeba2e171a\"" Jul 7 00:07:59.103840 containerd[1575]: time="2025-07-07T00:07:59.103791093Z" level=info msg="connecting to shim e1ff10fc037d0f12a642c24eb4f582a7d6eff9c2b281fab2e72c23aeba2e171a" address="unix:///run/containerd/s/1106da93ef391515c72b8c2e79f4746fe56cf89bbde1ff85481d1a72f9a234f1" protocol=ttrpc version=3 Jul 7 00:07:59.157184 systemd[1]: Started cri-containerd-e1ff10fc037d0f12a642c24eb4f582a7d6eff9c2b281fab2e72c23aeba2e171a.scope - libcontainer container e1ff10fc037d0f12a642c24eb4f582a7d6eff9c2b281fab2e72c23aeba2e171a. Jul 7 00:07:59.214865 containerd[1575]: time="2025-07-07T00:07:59.214811437Z" level=info msg="StartContainer for \"e1ff10fc037d0f12a642c24eb4f582a7d6eff9c2b281fab2e72c23aeba2e171a\" returns successfully" Jul 7 00:07:59.443143 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:40584.service - OpenSSH per-connection server daemon (10.0.0.1:40584). Jul 7 00:07:59.591633 sshd[4853]: Accepted publickey for core from 10.0.0.1 port 40584 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:07:59.593278 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:07:59.598690 systemd-logind[1560]: New session 12 of user core. Jul 7 00:07:59.613234 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:07:59.780800 sshd[4855]: Connection closed by 10.0.0.1 port 40584 Jul 7 00:07:59.781144 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Jul 7 00:07:59.793811 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:40584.service: Deactivated successfully. Jul 7 00:07:59.796457 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:07:59.797353 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:07:59.801706 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:40598.service - OpenSSH per-connection server daemon (10.0.0.1:40598). 
Jul 7 00:07:59.802463 systemd-logind[1560]: Removed session 12. Jul 7 00:07:59.860151 sshd[4871]: Accepted publickey for core from 10.0.0.1 port 40598 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:07:59.862166 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:07:59.867355 systemd-logind[1560]: New session 13 of user core. Jul 7 00:07:59.882204 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:07:59.945748 kubelet[2725]: I0707 00:07:59.945134 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f4666f86c-r9hqc" podStartSLOduration=31.862948399 podStartE2EDuration="37.945000921s" podCreationTimestamp="2025-07-07 00:07:22 +0000 UTC" firstStartedPulling="2025-07-07 00:07:52.989730849 +0000 UTC m=+49.439798501" lastFinishedPulling="2025-07-07 00:07:59.071783351 +0000 UTC m=+55.521851023" observedRunningTime="2025-07-07 00:07:59.944256133 +0000 UTC m=+56.394323785" watchObservedRunningTime="2025-07-07 00:07:59.945000921 +0000 UTC m=+56.395068573" Jul 7 00:08:00.223652 sshd[4873]: Connection closed by 10.0.0.1 port 40598 Jul 7 00:08:00.224741 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Jul 7 00:08:00.244067 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:40598.service: Deactivated successfully. Jul 7 00:08:00.248773 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:08:00.249598 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:08:00.254754 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:40608.service - OpenSSH per-connection server daemon (10.0.0.1:40608). Jul 7 00:08:00.256701 systemd-logind[1560]: Removed session 13. 
Jul 7 00:08:00.298899 sshd[4899]: Accepted publickey for core from 10.0.0.1 port 40608 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q Jul 7 00:08:00.300785 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:08:00.306969 systemd-logind[1560]: New session 14 of user core. Jul 7 00:08:00.313263 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:08:00.327937 containerd[1575]: time="2025-07-07T00:08:00.327893670Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a74b31093f024da34d303f5cbe671481a153a07b76bfc974e6ffa3df0e3b5c7\" id:\"340f7a1a03eb78053e0aa7e7bb45554f4d8f6c920561ce0326e779c04755b7ab\" pid:4896 exited_at:{seconds:1751846880 nanos:327540518}" Jul 7 00:08:00.436822 sshd[4914]: Connection closed by 10.0.0.1 port 40608 Jul 7 00:08:00.437193 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Jul 7 00:08:00.442070 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:40608.service: Deactivated successfully. Jul 7 00:08:00.444373 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:08:00.445184 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:08:00.446734 systemd-logind[1560]: Removed session 14. 
Jul 7 00:08:00.635904 kubelet[2725]: E0707 00:08:00.635860 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:08:00.636325 containerd[1575]: time="2025-07-07T00:08:00.636289224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-rnpsd,Uid:3c397933-3c8f-4270-8b5a-23cb516115f4,Namespace:calico-system,Attempt:0,}" Jul 7 00:08:00.636454 containerd[1575]: time="2025-07-07T00:08:00.636403328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6bb9f,Uid:dd46f789-c107-49d4-8d93-8d4643fff672,Namespace:kube-system,Attempt:0,}" Jul 7 00:08:00.757323 systemd-networkd[1493]: cali1579450e565: Link UP Jul 7 00:08:00.757657 systemd-networkd[1493]: cali1579450e565: Gained carrier Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.686 [INFO][4940] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0 coredns-7c65d6cfc9- kube-system dd46f789-c107-49d4-8d93-8d4643fff672 895 0 2025-07-07 00:07:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-6bb9f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1579450e565 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6bb9f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6bb9f-" Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.686 [INFO][4940] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-6bb9f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0" Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.719 [INFO][4959] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" HandleID="k8s-pod-network.4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" Workload="localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0" Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.719 [INFO][4959] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" HandleID="k8s-pod-network.4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" Workload="localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00010b930), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-6bb9f", "timestamp":"2025-07-07 00:08:00.719456971 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.719 [INFO][4959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.719 [INFO][4959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.719 [INFO][4959] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.727 [INFO][4959] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" host="localhost" Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.732 [INFO][4959] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.735 [INFO][4959] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.737 [INFO][4959] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.739 [INFO][4959] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.739 [INFO][4959] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" host="localhost" Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.740 [INFO][4959] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059 Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.745 [INFO][4959] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" host="localhost" Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.751 [INFO][4959] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" host="localhost" Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.752 [INFO][4959] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" host="localhost" Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.752 [INFO][4959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:08:00.773200 containerd[1575]: 2025-07-07 00:08:00.752 [INFO][4959] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" HandleID="k8s-pod-network.4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" Workload="localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0" Jul 7 00:08:00.773979 containerd[1575]: 2025-07-07 00:08:00.754 [INFO][4940] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6bb9f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"dd46f789-c107-49d4-8d93-8d4643fff672", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-6bb9f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1579450e565", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:08:00.773979 containerd[1575]: 2025-07-07 00:08:00.754 [INFO][4940] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6bb9f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0" Jul 7 00:08:00.773979 containerd[1575]: 2025-07-07 00:08:00.754 [INFO][4940] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1579450e565 ContainerID="4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6bb9f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0" Jul 7 00:08:00.773979 containerd[1575]: 2025-07-07 00:08:00.757 [INFO][4940] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6bb9f" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0" Jul 7 00:08:00.773979 containerd[1575]: 2025-07-07 00:08:00.757 [INFO][4940] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6bb9f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"dd46f789-c107-49d4-8d93-8d4643fff672", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059", Pod:"coredns-7c65d6cfc9-6bb9f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1579450e565", MAC:"8e:ab:b2:10:4b:4f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:08:00.773979 containerd[1575]: 2025-07-07 00:08:00.768 [INFO][4940] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6bb9f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6bb9f-eth0" Jul 7 00:08:00.843663 containerd[1575]: time="2025-07-07T00:08:00.843576004Z" level=info msg="connecting to shim 4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059" address="unix:///run/containerd/s/010c110b5f682b2f49ce39f3430f791e0c7be348149d7d90d811e35caff737c8" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:08:00.877388 systemd-networkd[1493]: cali7a415b21ec5: Link UP Jul 7 00:08:00.877617 systemd-networkd[1493]: cali7a415b21ec5: Gained carrier Jul 7 00:08:00.882459 systemd[1]: Started cri-containerd-4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059.scope - libcontainer container 4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059. 
Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.685 [INFO][4926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0 goldmane-58fd7646b9- calico-system 3c397933-3c8f-4270-8b5a-23cb516115f4 893 0 2025-07-07 00:07:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-rnpsd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7a415b21ec5 [] [] }} ContainerID="8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" Namespace="calico-system" Pod="goldmane-58fd7646b9-rnpsd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--rnpsd-" Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.685 [INFO][4926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" Namespace="calico-system" Pod="goldmane-58fd7646b9-rnpsd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0" Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.729 [INFO][4958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" HandleID="k8s-pod-network.8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" Workload="localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0" Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.730 [INFO][4958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" HandleID="k8s-pod-network.8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" Workload="localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00004e740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-rnpsd", "timestamp":"2025-07-07 00:08:00.729213464 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.730 [INFO][4958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.752 [INFO][4958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.753 [INFO][4958] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.829 [INFO][4958] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" host="localhost" Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.839 [INFO][4958] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.844 [INFO][4958] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.846 [INFO][4958] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.848 [INFO][4958] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.848 [INFO][4958] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" host="localhost" 
Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.854 [INFO][4958] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.861 [INFO][4958] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" host="localhost" Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.868 [INFO][4958] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" host="localhost" Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.868 [INFO][4958] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" host="localhost" Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.868 [INFO][4958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:08:00.895343 containerd[1575]: 2025-07-07 00:08:00.868 [INFO][4958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" HandleID="k8s-pod-network.8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" Workload="localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0" Jul 7 00:08:00.895891 containerd[1575]: 2025-07-07 00:08:00.874 [INFO][4926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" Namespace="calico-system" Pod="goldmane-58fd7646b9-rnpsd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3c397933-3c8f-4270-8b5a-23cb516115f4", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-rnpsd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a415b21ec5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:08:00.895891 containerd[1575]: 2025-07-07 00:08:00.874 [INFO][4926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" Namespace="calico-system" Pod="goldmane-58fd7646b9-rnpsd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0" Jul 7 00:08:00.895891 containerd[1575]: 2025-07-07 00:08:00.874 [INFO][4926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a415b21ec5 ContainerID="8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" Namespace="calico-system" Pod="goldmane-58fd7646b9-rnpsd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0" Jul 7 00:08:00.895891 containerd[1575]: 2025-07-07 00:08:00.877 [INFO][4926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" Namespace="calico-system" Pod="goldmane-58fd7646b9-rnpsd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0" Jul 7 00:08:00.895891 containerd[1575]: 2025-07-07 00:08:00.880 [INFO][4926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" Namespace="calico-system" Pod="goldmane-58fd7646b9-rnpsd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3c397933-3c8f-4270-8b5a-23cb516115f4", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 21, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b", Pod:"goldmane-58fd7646b9-rnpsd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a415b21ec5", MAC:"c6:59:fe:5d:02:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:08:00.895891 containerd[1575]: 2025-07-07 00:08:00.890 [INFO][4926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" Namespace="calico-system" Pod="goldmane-58fd7646b9-rnpsd" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--rnpsd-eth0" Jul 7 00:08:00.902134 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:08:00.924649 containerd[1575]: time="2025-07-07T00:08:00.924308331Z" level=info msg="connecting to shim 8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b" address="unix:///run/containerd/s/773cada4521e47a5bc8711b877e8ac637b163bf84ff1953451b90f90bd3181ee" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:08:00.934866 kubelet[2725]: I0707 00:08:00.934294 2725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:08:00.962432 
systemd[1]: Started cri-containerd-8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b.scope - libcontainer container 8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b. Jul 7 00:08:00.982154 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:08:01.003561 containerd[1575]: time="2025-07-07T00:08:01.003523309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1ff10fc037d0f12a642c24eb4f582a7d6eff9c2b281fab2e72c23aeba2e171a\" id:\"c978332f3e539fc77eb414ea357e9ac0c6c950cd371a6adc39cba1d3838fb386\" pid:5083 exited_at:{seconds:1751846881 nanos:2933180}" Jul 7 00:08:01.046615 containerd[1575]: time="2025-07-07T00:08:01.046575335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6bb9f,Uid:dd46f789-c107-49d4-8d93-8d4643fff672,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059\"" Jul 7 00:08:01.047727 kubelet[2725]: E0707 00:08:01.047455 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:08:01.052031 containerd[1575]: time="2025-07-07T00:08:01.051972426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-rnpsd,Uid:3c397933-3c8f-4270-8b5a-23cb516115f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b\"" Jul 7 00:08:01.052655 containerd[1575]: time="2025-07-07T00:08:01.052628657Z" level=info msg="CreateContainer within sandbox \"4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:08:01.065133 containerd[1575]: time="2025-07-07T00:08:01.065085668Z" level=info msg="Container 1c610d048a9ad6051025b1a8efcf5046eb507479deebe0d7ae599b11ccb38a81: CDI 
devices from CRI Config.CDIDevices: []" Jul 7 00:08:01.073828 containerd[1575]: time="2025-07-07T00:08:01.073786650Z" level=info msg="CreateContainer within sandbox \"4b3b1f3961bf94b3e2727bd670ab2ebdf1a11958c2477060c33c028b6ac18059\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c610d048a9ad6051025b1a8efcf5046eb507479deebe0d7ae599b11ccb38a81\"" Jul 7 00:08:01.074791 containerd[1575]: time="2025-07-07T00:08:01.074757672Z" level=info msg="StartContainer for \"1c610d048a9ad6051025b1a8efcf5046eb507479deebe0d7ae599b11ccb38a81\"" Jul 7 00:08:01.075616 containerd[1575]: time="2025-07-07T00:08:01.075591077Z" level=info msg="connecting to shim 1c610d048a9ad6051025b1a8efcf5046eb507479deebe0d7ae599b11ccb38a81" address="unix:///run/containerd/s/010c110b5f682b2f49ce39f3430f791e0c7be348149d7d90d811e35caff737c8" protocol=ttrpc version=3 Jul 7 00:08:01.107180 systemd[1]: Started cri-containerd-1c610d048a9ad6051025b1a8efcf5046eb507479deebe0d7ae599b11ccb38a81.scope - libcontainer container 1c610d048a9ad6051025b1a8efcf5046eb507479deebe0d7ae599b11ccb38a81. 
Jul 7 00:08:01.139479 containerd[1575]: time="2025-07-07T00:08:01.139428997Z" level=info msg="StartContainer for \"1c610d048a9ad6051025b1a8efcf5046eb507479deebe0d7ae599b11ccb38a81\" returns successfully" Jul 7 00:08:01.925481 kubelet[2725]: E0707 00:08:01.925431 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:08:01.941010 kubelet[2725]: I0707 00:08:01.940778 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6bb9f" podStartSLOduration=51.940645749 podStartE2EDuration="51.940645749s" podCreationTimestamp="2025-07-07 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:08:01.938203698 +0000 UTC m=+58.388271360" watchObservedRunningTime="2025-07-07 00:08:01.940645749 +0000 UTC m=+58.390713401" Jul 7 00:08:02.415764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177303541.mount: Deactivated successfully. 
Jul 7 00:08:02.439206 containerd[1575]: time="2025-07-07T00:08:02.439153304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:08:02.439879 containerd[1575]: time="2025-07-07T00:08:02.439853000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:08:02.441150 containerd[1575]: time="2025-07-07T00:08:02.441107514Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:08:02.443213 containerd[1575]: time="2025-07-07T00:08:02.443163881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:08:02.444145 containerd[1575]: time="2025-07-07T00:08:02.444090599Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.372099367s" Jul 7 00:08:02.444145 containerd[1575]: time="2025-07-07T00:08:02.444141978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:08:02.447553 containerd[1575]: time="2025-07-07T00:08:02.447518336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:08:02.453503 containerd[1575]: time="2025-07-07T00:08:02.453452454Z" level=info msg="CreateContainer within sandbox 
\"d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:08:02.636643 containerd[1575]: time="2025-07-07T00:08:02.636586995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gjph,Uid:e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2,Namespace:calico-system,Attempt:0,}" Jul 7 00:08:02.659253 systemd-networkd[1493]: cali7a415b21ec5: Gained IPv6LL Jul 7 00:08:02.688788 containerd[1575]: time="2025-07-07T00:08:02.688665362Z" level=info msg="Container e80e650d99b06918034b409a87da64228816445e507e9ff811f4fc5c8a80ec39: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:08:02.723165 systemd-networkd[1493]: cali1579450e565: Gained IPv6LL Jul 7 00:08:02.738879 containerd[1575]: time="2025-07-07T00:08:02.738841010Z" level=info msg="CreateContainer within sandbox \"d10c442a7f7e49aaf8ffe48d004e142864d9cba0d776bbb6eefbec2d225ef0ff\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e80e650d99b06918034b409a87da64228816445e507e9ff811f4fc5c8a80ec39\"" Jul 7 00:08:02.742483 containerd[1575]: time="2025-07-07T00:08:02.742446071Z" level=info msg="StartContainer for \"e80e650d99b06918034b409a87da64228816445e507e9ff811f4fc5c8a80ec39\"" Jul 7 00:08:02.744962 containerd[1575]: time="2025-07-07T00:08:02.744921071Z" level=info msg="connecting to shim e80e650d99b06918034b409a87da64228816445e507e9ff811f4fc5c8a80ec39" address="unix:///run/containerd/s/f56f7d121ff98434229fb8c3fdedd0adb20aa591e28e04b6d927e9af254ee016" protocol=ttrpc version=3 Jul 7 00:08:02.775157 systemd[1]: Started cri-containerd-e80e650d99b06918034b409a87da64228816445e507e9ff811f4fc5c8a80ec39.scope - libcontainer container e80e650d99b06918034b409a87da64228816445e507e9ff811f4fc5c8a80ec39. 
Jul 7 00:08:02.864519 containerd[1575]: time="2025-07-07T00:08:02.864457072Z" level=info msg="StartContainer for \"e80e650d99b06918034b409a87da64228816445e507e9ff811f4fc5c8a80ec39\" returns successfully" Jul 7 00:08:02.902804 systemd-networkd[1493]: calid50818a4bc3: Link UP Jul 7 00:08:02.904667 systemd-networkd[1493]: calid50818a4bc3: Gained carrier Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.784 [INFO][5161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7gjph-eth0 csi-node-driver- calico-system e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2 753 0 2025-07-07 00:07:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7gjph eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid50818a4bc3 [] [] }} ContainerID="73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" Namespace="calico-system" Pod="csi-node-driver-7gjph" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjph-" Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.784 [INFO][5161] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" Namespace="calico-system" Pod="csi-node-driver-7gjph" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjph-eth0" Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.861 [INFO][5197] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" HandleID="k8s-pod-network.73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" 
Workload="localhost-k8s-csi--node--driver--7gjph-eth0" Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.861 [INFO][5197] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" HandleID="k8s-pod-network.73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" Workload="localhost-k8s-csi--node--driver--7gjph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7a00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7gjph", "timestamp":"2025-07-07 00:08:02.861102537 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.861 [INFO][5197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.861 [INFO][5197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.861 [INFO][5197] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.870 [INFO][5197] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" host="localhost" Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.874 [INFO][5197] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.878 [INFO][5197] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.880 [INFO][5197] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.882 [INFO][5197] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.882 [INFO][5197] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" host="localhost" Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.884 [INFO][5197] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.888 [INFO][5197] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" host="localhost" Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.896 [INFO][5197] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" host="localhost" Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.896 [INFO][5197] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" host="localhost" Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.896 [INFO][5197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:08:02.922665 containerd[1575]: 2025-07-07 00:08:02.896 [INFO][5197] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" HandleID="k8s-pod-network.73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" Workload="localhost-k8s-csi--node--driver--7gjph-eth0" Jul 7 00:08:02.923483 containerd[1575]: 2025-07-07 00:08:02.900 [INFO][5161] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" Namespace="calico-system" Pod="csi-node-driver-7gjph" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7gjph-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7gjph", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid50818a4bc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:08:02.923483 containerd[1575]: 2025-07-07 00:08:02.900 [INFO][5161] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" Namespace="calico-system" Pod="csi-node-driver-7gjph" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjph-eth0" Jul 7 00:08:02.923483 containerd[1575]: 2025-07-07 00:08:02.900 [INFO][5161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid50818a4bc3 ContainerID="73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" Namespace="calico-system" Pod="csi-node-driver-7gjph" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjph-eth0" Jul 7 00:08:02.923483 containerd[1575]: 2025-07-07 00:08:02.903 [INFO][5161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" Namespace="calico-system" Pod="csi-node-driver-7gjph" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjph-eth0" Jul 7 00:08:02.923483 containerd[1575]: 2025-07-07 00:08:02.905 [INFO][5161] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" 
Namespace="calico-system" Pod="csi-node-driver-7gjph" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7gjph-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac", Pod:"csi-node-driver-7gjph", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid50818a4bc3", MAC:"1e:2a:b0:a1:33:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:08:02.923483 containerd[1575]: 2025-07-07 00:08:02.918 [INFO][5161] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" Namespace="calico-system" Pod="csi-node-driver-7gjph" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjph-eth0" Jul 7 00:08:02.936726 kubelet[2725]: E0707 00:08:02.936659 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:08:02.951195 kubelet[2725]: I0707 00:08:02.950090 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-584998b6cd-prdh2" podStartSLOduration=2.22549779 podStartE2EDuration="12.950070827s" podCreationTimestamp="2025-07-07 00:07:50 +0000 UTC" firstStartedPulling="2025-07-07 00:07:51.722679533 +0000 UTC m=+48.172747175" lastFinishedPulling="2025-07-07 00:08:02.44725255 +0000 UTC m=+58.897320212" observedRunningTime="2025-07-07 00:08:02.949671392 +0000 UTC m=+59.399739044" watchObservedRunningTime="2025-07-07 00:08:02.950070827 +0000 UTC m=+59.400138479" Jul 7 00:08:02.966319 containerd[1575]: time="2025-07-07T00:08:02.966270313Z" level=info msg="connecting to shim 73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac" address="unix:///run/containerd/s/47c48e81fa98026b19a53a2ec12a4b2bddf7b5020f2dc5fa7446017a42ca1270" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:08:02.996296 systemd[1]: Started cri-containerd-73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac.scope - libcontainer container 73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac. 
Jul 7 00:08:03.012266 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:08:03.038449 containerd[1575]: time="2025-07-07T00:08:03.038395142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gjph,Uid:e9f3e3a2-bf82-4e74-a3d2-023b2f0edaa2,Namespace:calico-system,Attempt:0,} returns sandbox id \"73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac\"" Jul 7 00:08:03.943384 kubelet[2725]: E0707 00:08:03.943332 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:08:04.067310 systemd-networkd[1493]: calid50818a4bc3: Gained IPv6LL Jul 7 00:08:04.639312 containerd[1575]: time="2025-07-07T00:08:04.639256738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c5b65fcc-2jf6l,Uid:5d8fa112-6a01-4e51-b931-c7ec59ec77a8,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:08:04.761143 systemd-networkd[1493]: cali4958071961a: Link UP Jul 7 00:08:04.763202 systemd-networkd[1493]: cali4958071961a: Gained carrier Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.685 [INFO][5281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0 calico-apiserver-57c5b65fcc- calico-apiserver 5d8fa112-6a01-4e51-b931-c7ec59ec77a8 896 0 2025-07-07 00:07:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57c5b65fcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57c5b65fcc-2jf6l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4958071961a [] [] }} 
ContainerID="0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-2jf6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-" Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.686 [INFO][5281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-2jf6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0" Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.716 [INFO][5295] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" HandleID="k8s-pod-network.0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" Workload="localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0" Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.716 [INFO][5295] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" HandleID="k8s-pod-network.0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" Workload="localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57c5b65fcc-2jf6l", "timestamp":"2025-07-07 00:08:04.71643685 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.716 [INFO][5295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.716 [INFO][5295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.716 [INFO][5295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.724 [INFO][5295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" host="localhost" Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.729 [INFO][5295] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.734 [INFO][5295] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.736 [INFO][5295] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.738 [INFO][5295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.738 [INFO][5295] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" host="localhost" Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.739 [INFO][5295] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.743 [INFO][5295] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" host="localhost" Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.751 [INFO][5295] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" host="localhost" Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.751 [INFO][5295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" host="localhost" Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.751 [INFO][5295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:08:04.784089 containerd[1575]: 2025-07-07 00:08:04.751 [INFO][5295] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" HandleID="k8s-pod-network.0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" Workload="localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0" Jul 7 00:08:04.784686 containerd[1575]: 2025-07-07 00:08:04.757 [INFO][5281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-2jf6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0", GenerateName:"calico-apiserver-57c5b65fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d8fa112-6a01-4e51-b931-c7ec59ec77a8", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57c5b65fcc", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57c5b65fcc-2jf6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4958071961a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:08:04.784686 containerd[1575]: 2025-07-07 00:08:04.758 [INFO][5281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-2jf6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0" Jul 7 00:08:04.784686 containerd[1575]: 2025-07-07 00:08:04.758 [INFO][5281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4958071961a ContainerID="0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-2jf6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0" Jul 7 00:08:04.784686 containerd[1575]: 2025-07-07 00:08:04.763 [INFO][5281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-2jf6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0" Jul 7 00:08:04.784686 containerd[1575]: 2025-07-07 
00:08:04.765 [INFO][5281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-2jf6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0", GenerateName:"calico-apiserver-57c5b65fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d8fa112-6a01-4e51-b931-c7ec59ec77a8", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 7, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57c5b65fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab", Pod:"calico-apiserver-57c5b65fcc-2jf6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4958071961a", MAC:"5e:03:b5:24:de:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 00:08:04.784686 containerd[1575]: 2025-07-07 00:08:04.779 [INFO][5281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" Namespace="calico-apiserver" Pod="calico-apiserver-57c5b65fcc-2jf6l" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c5b65fcc--2jf6l-eth0"
Jul 7 00:08:04.813846 containerd[1575]: time="2025-07-07T00:08:04.813802438Z" level=info msg="connecting to shim 0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab" address="unix:///run/containerd/s/8efee3ee0d313ece6278091a5f20bd2bb841191fb7722e4af7835d8e0c680042" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:08:04.850201 systemd[1]: Started cri-containerd-0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab.scope - libcontainer container 0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab.
Jul 7 00:08:04.866226 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 00:08:04.912424 containerd[1575]: time="2025-07-07T00:08:04.912320256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c5b65fcc-2jf6l,Uid:5d8fa112-6a01-4e51-b931-c7ec59ec77a8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab\""
Jul 7 00:08:04.917340 containerd[1575]: time="2025-07-07T00:08:04.917298595Z" level=info msg="CreateContainer within sandbox \"0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 00:08:04.975549 containerd[1575]: time="2025-07-07T00:08:04.975448505Z" level=info msg="Container 8680a873a12586730d4311295a01ef5a8b9160c691eb34c0714624c815b1732a: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:08:04.987553 containerd[1575]: time="2025-07-07T00:08:04.987512590Z" level=info msg="CreateContainer within sandbox \"0d256a6e7e89f63b933ad08ea6fd03226ea09089422d142e9388dffde5848cab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8680a873a12586730d4311295a01ef5a8b9160c691eb34c0714624c815b1732a\""
Jul 7 00:08:04.988187 containerd[1575]: time="2025-07-07T00:08:04.988105769Z" level=info msg="StartContainer for \"8680a873a12586730d4311295a01ef5a8b9160c691eb34c0714624c815b1732a\""
Jul 7 00:08:04.989349 containerd[1575]: time="2025-07-07T00:08:04.989299018Z" level=info msg="connecting to shim 8680a873a12586730d4311295a01ef5a8b9160c691eb34c0714624c815b1732a" address="unix:///run/containerd/s/8efee3ee0d313ece6278091a5f20bd2bb841191fb7722e4af7835d8e0c680042" protocol=ttrpc version=3
Jul 7 00:08:05.019194 systemd[1]: Started cri-containerd-8680a873a12586730d4311295a01ef5a8b9160c691eb34c0714624c815b1732a.scope - libcontainer container 8680a873a12586730d4311295a01ef5a8b9160c691eb34c0714624c815b1732a.
Jul 7 00:08:05.077747 containerd[1575]: time="2025-07-07T00:08:05.077688193Z" level=info msg="StartContainer for \"8680a873a12586730d4311295a01ef5a8b9160c691eb34c0714624c815b1732a\" returns successfully"
Jul 7 00:08:05.453618 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:40616.service - OpenSSH per-connection server daemon (10.0.0.1:40616).
Jul 7 00:08:05.519135 containerd[1575]: time="2025-07-07T00:08:05.519078403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:08:05.519779 containerd[1575]: time="2025-07-07T00:08:05.519747718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Jul 7 00:08:05.520991 containerd[1575]: time="2025-07-07T00:08:05.520955966Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:08:05.523112 containerd[1575]: time="2025-07-07T00:08:05.523059624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:08:05.524024 containerd[1575]: time="2025-07-07T00:08:05.523932342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.076376905s"
Jul 7 00:08:05.524072 containerd[1575]: time="2025-07-07T00:08:05.524031985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Jul 7 00:08:05.525915 containerd[1575]: time="2025-07-07T00:08:05.525189884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 7 00:08:05.527280 containerd[1575]: time="2025-07-07T00:08:05.527248626Z" level=info msg="CreateContainer within sandbox \"8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 7 00:08:05.528937 sshd[5401]: Accepted publickey for core from 10.0.0.1 port 40616 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:08:05.529695 sshd-session[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:08:05.534643 systemd-logind[1560]: New session 15 of user core.
Jul 7 00:08:05.537555 containerd[1575]: time="2025-07-07T00:08:05.537519593Z" level=info msg="Container d15f961bc21a62f9a988b7af5746be10641bd9511b743015e6612752a25484e7: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:08:05.540141 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 00:08:05.546355 containerd[1575]: time="2025-07-07T00:08:05.546307922Z" level=info msg="CreateContainer within sandbox \"8c9db9840ff9e5305cc741a2ec5eeb981db69520980d7ec5d86e80b315662f0b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d15f961bc21a62f9a988b7af5746be10641bd9511b743015e6612752a25484e7\""
Jul 7 00:08:05.546925 containerd[1575]: time="2025-07-07T00:08:05.546900689Z" level=info msg="StartContainer for \"d15f961bc21a62f9a988b7af5746be10641bd9511b743015e6612752a25484e7\""
Jul 7 00:08:05.548348 containerd[1575]: time="2025-07-07T00:08:05.548317098Z" level=info msg="connecting to shim d15f961bc21a62f9a988b7af5746be10641bd9511b743015e6612752a25484e7" address="unix:///run/containerd/s/773cada4521e47a5bc8711b877e8ac637b163bf84ff1953451b90f90bd3181ee" protocol=ttrpc version=3
Jul 7 00:08:05.574163 systemd[1]: Started cri-containerd-d15f961bc21a62f9a988b7af5746be10641bd9511b743015e6612752a25484e7.scope - libcontainer container d15f961bc21a62f9a988b7af5746be10641bd9511b743015e6612752a25484e7.
Jul 7 00:08:05.633850 containerd[1575]: time="2025-07-07T00:08:05.633807597Z" level=info msg="StartContainer for \"d15f961bc21a62f9a988b7af5746be10641bd9511b743015e6612752a25484e7\" returns successfully"
Jul 7 00:08:05.715686 sshd[5407]: Connection closed by 10.0.0.1 port 40616
Jul 7 00:08:05.715942 sshd-session[5401]: pam_unix(sshd:session): session closed for user core
Jul 7 00:08:05.720767 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:40616.service: Deactivated successfully.
Jul 7 00:08:05.723052 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 00:08:05.725074 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit.
Jul 7 00:08:05.726890 systemd-logind[1560]: Removed session 15.
Jul 7 00:08:05.804234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838790154.mount: Deactivated successfully.
Jul 7 00:08:05.859229 systemd-networkd[1493]: cali4958071961a: Gained IPv6LL
Jul 7 00:08:05.969956 kubelet[2725]: I0707 00:08:05.969745 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-rnpsd" podStartSLOduration=40.499618334 podStartE2EDuration="44.969726291s" podCreationTimestamp="2025-07-07 00:07:21 +0000 UTC" firstStartedPulling="2025-07-07 00:08:01.054794071 +0000 UTC m=+57.504861713" lastFinishedPulling="2025-07-07 00:08:05.524902018 +0000 UTC m=+61.974969670" observedRunningTime="2025-07-07 00:08:05.969154265 +0000 UTC m=+62.419221917" watchObservedRunningTime="2025-07-07 00:08:05.969726291 +0000 UTC m=+62.419793943"
Jul 7 00:08:05.984475 kubelet[2725]: I0707 00:08:05.984394 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57c5b65fcc-2jf6l" podStartSLOduration=47.984376478 podStartE2EDuration="47.984376478s" podCreationTimestamp="2025-07-07 00:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:08:05.983732733 +0000 UTC m=+62.433800385" watchObservedRunningTime="2025-07-07 00:08:05.984376478 +0000 UTC m=+62.434444130"
Jul 7 00:08:06.059808 containerd[1575]: time="2025-07-07T00:08:06.059733813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d15f961bc21a62f9a988b7af5746be10641bd9511b743015e6612752a25484e7\" id:\"5a991434f670acf36734beb0f37f2f58c331cd630c45f82a9d4b5f302d8fe890\" pid:5465 exit_status:1 exited_at:{seconds:1751846886 nanos:58631863}"
Jul 7 00:08:06.604838 containerd[1575]: time="2025-07-07T00:08:06.604543260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1ff10fc037d0f12a642c24eb4f582a7d6eff9c2b281fab2e72c23aeba2e171a\" id:\"d55cb38c2f0d03bce1f6791af1321c25c35235721dd0a467cc55cfed6be22e40\" pid:5494 exited_at:{seconds:1751846886 nanos:604113790}"
Jul 7 00:08:06.666463 containerd[1575]: time="2025-07-07T00:08:06.666404124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d15f961bc21a62f9a988b7af5746be10641bd9511b743015e6612752a25484e7\" id:\"6ec4b122b18c98a7caecf985e3c0491c98d4f602839c698e75782644ab3fd85c\" pid:5512 exit_status:1 exited_at:{seconds:1751846886 nanos:665947673}"
Jul 7 00:08:07.052762 containerd[1575]: time="2025-07-07T00:08:07.052632712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d15f961bc21a62f9a988b7af5746be10641bd9511b743015e6612752a25484e7\" id:\"39c3c1923bdd155d13cea86a87663f16bdd5230ed7dca4d5437733bee48f19f6\" pid:5539 exit_status:1 exited_at:{seconds:1751846887 nanos:51935186}"
Jul 7 00:08:07.098200 containerd[1575]: time="2025-07-07T00:08:07.098121476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:08:07.099101 containerd[1575]: time="2025-07-07T00:08:07.099058495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190"
Jul 7 00:08:07.100259 containerd[1575]: time="2025-07-07T00:08:07.100224436Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:08:07.102232 containerd[1575]: time="2025-07-07T00:08:07.102198829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:08:07.102727 containerd[1575]: time="2025-07-07T00:08:07.102702892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.577483271s"
Jul 7 00:08:07.102769 containerd[1575]: time="2025-07-07T00:08:07.102730737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Jul 7 00:08:07.104955 containerd[1575]: time="2025-07-07T00:08:07.104913091Z" level=info msg="CreateContainer within sandbox \"73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 7 00:08:07.120238 containerd[1575]: time="2025-07-07T00:08:07.120184461Z" level=info msg="Container bb4555cbb2587e50416193a306075a6db949552a8cd6713225cb4afc2e2bf2d2: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:08:07.139309 containerd[1575]: time="2025-07-07T00:08:07.139265929Z" level=info msg="CreateContainer within sandbox \"73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bb4555cbb2587e50416193a306075a6db949552a8cd6713225cb4afc2e2bf2d2\""
Jul 7 00:08:07.139869 containerd[1575]: time="2025-07-07T00:08:07.139822945Z" level=info msg="StartContainer for \"bb4555cbb2587e50416193a306075a6db949552a8cd6713225cb4afc2e2bf2d2\""
Jul 7 00:08:07.141311 containerd[1575]: time="2025-07-07T00:08:07.141270921Z" level=info msg="connecting to shim bb4555cbb2587e50416193a306075a6db949552a8cd6713225cb4afc2e2bf2d2" address="unix:///run/containerd/s/47c48e81fa98026b19a53a2ec12a4b2bddf7b5020f2dc5fa7446017a42ca1270" protocol=ttrpc version=3
Jul 7 00:08:07.164190 systemd[1]: Started cri-containerd-bb4555cbb2587e50416193a306075a6db949552a8cd6713225cb4afc2e2bf2d2.scope - libcontainer container bb4555cbb2587e50416193a306075a6db949552a8cd6713225cb4afc2e2bf2d2.
Jul 7 00:08:07.336703 containerd[1575]: time="2025-07-07T00:08:07.336649985Z" level=info msg="StartContainer for \"bb4555cbb2587e50416193a306075a6db949552a8cd6713225cb4afc2e2bf2d2\" returns successfully"
Jul 7 00:08:07.338057 containerd[1575]: time="2025-07-07T00:08:07.338006855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 7 00:08:09.438815 containerd[1575]: time="2025-07-07T00:08:09.438753716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:08:09.439510 containerd[1575]: time="2025-07-07T00:08:09.439450380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 7 00:08:09.440556 containerd[1575]: time="2025-07-07T00:08:09.440521906Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:08:09.442953 containerd[1575]: time="2025-07-07T00:08:09.442909169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:08:09.443359 containerd[1575]: time="2025-07-07T00:08:09.443328167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.105269162s"
Jul 7 00:08:09.443432 containerd[1575]: time="2025-07-07T00:08:09.443361902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 7 00:08:09.445625 containerd[1575]: time="2025-07-07T00:08:09.445419219Z" level=info msg="CreateContainer within sandbox \"73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 7 00:08:09.453650 containerd[1575]: time="2025-07-07T00:08:09.453607026Z" level=info msg="Container d72f05b15bf729b7882b681b7c203e4feb9ee1e33448b91e39ba7f0f649320d2: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:08:09.464604 containerd[1575]: time="2025-07-07T00:08:09.464563151Z" level=info msg="CreateContainer within sandbox \"73c30a908c1e3400ce740c4441fed7829d7dc9164ffdc4935a48a0fb23e123ac\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d72f05b15bf729b7882b681b7c203e4feb9ee1e33448b91e39ba7f0f649320d2\""
Jul 7 00:08:09.465067 containerd[1575]: time="2025-07-07T00:08:09.465047224Z" level=info msg="StartContainer for \"d72f05b15bf729b7882b681b7c203e4feb9ee1e33448b91e39ba7f0f649320d2\""
Jul 7 00:08:09.466704 containerd[1575]: time="2025-07-07T00:08:09.466669372Z" level=info msg="connecting to shim d72f05b15bf729b7882b681b7c203e4feb9ee1e33448b91e39ba7f0f649320d2" address="unix:///run/containerd/s/47c48e81fa98026b19a53a2ec12a4b2bddf7b5020f2dc5fa7446017a42ca1270" protocol=ttrpc version=3
Jul 7 00:08:09.494306 systemd[1]: Started cri-containerd-d72f05b15bf729b7882b681b7c203e4feb9ee1e33448b91e39ba7f0f649320d2.scope - libcontainer container d72f05b15bf729b7882b681b7c203e4feb9ee1e33448b91e39ba7f0f649320d2.
Jul 7 00:08:09.554343 containerd[1575]: time="2025-07-07T00:08:09.554302501Z" level=info msg="StartContainer for \"d72f05b15bf729b7882b681b7c203e4feb9ee1e33448b91e39ba7f0f649320d2\" returns successfully"
Jul 7 00:08:09.711898 kubelet[2725]: I0707 00:08:09.711757 2725 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 7 00:08:09.711898 kubelet[2725]: I0707 00:08:09.711803 2725 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 7 00:08:09.986587 kubelet[2725]: I0707 00:08:09.986414 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7gjph" podStartSLOduration=41.581982915 podStartE2EDuration="47.986364815s" podCreationTimestamp="2025-07-07 00:07:22 +0000 UTC" firstStartedPulling="2025-07-07 00:08:03.039620637 +0000 UTC m=+59.489688289" lastFinishedPulling="2025-07-07 00:08:09.444002537 +0000 UTC m=+65.894070189" observedRunningTime="2025-07-07 00:08:09.986065137 +0000 UTC m=+66.436132799" watchObservedRunningTime="2025-07-07 00:08:09.986364815 +0000 UTC m=+66.436432487"
Jul 7 00:08:10.737177 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:56622.service - OpenSSH per-connection server daemon (10.0.0.1:56622).
Jul 7 00:08:10.807292 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 56622 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:08:10.809067 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:08:10.814008 systemd-logind[1560]: New session 16 of user core.
Jul 7 00:08:10.825169 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 00:08:10.972125 sshd[5638]: Connection closed by 10.0.0.1 port 56622
Jul 7 00:08:10.972580 sshd-session[5635]: pam_unix(sshd:session): session closed for user core
Jul 7 00:08:10.977753 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:56622.service: Deactivated successfully.
Jul 7 00:08:10.980201 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 00:08:10.981188 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit.
Jul 7 00:08:10.982532 systemd-logind[1560]: Removed session 16.
Jul 7 00:08:15.999873 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:39456.service - OpenSSH per-connection server daemon (10.0.0.1:39456).
Jul 7 00:08:16.064618 sshd[5662]: Accepted publickey for core from 10.0.0.1 port 39456 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:08:16.066539 sshd-session[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:08:16.071990 systemd-logind[1560]: New session 17 of user core.
Jul 7 00:08:16.077223 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 00:08:16.199356 sshd[5664]: Connection closed by 10.0.0.1 port 39456
Jul 7 00:08:16.199715 sshd-session[5662]: pam_unix(sshd:session): session closed for user core
Jul 7 00:08:16.204868 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:39456.service: Deactivated successfully.
Jul 7 00:08:16.207096 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 00:08:16.208069 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit.
Jul 7 00:08:16.209539 systemd-logind[1560]: Removed session 17.
Jul 7 00:08:21.216130 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:39464.service - OpenSSH per-connection server daemon (10.0.0.1:39464).
Jul 7 00:08:21.274901 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 39464 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:08:21.276338 sshd-session[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:08:21.280703 systemd-logind[1560]: New session 18 of user core.
Jul 7 00:08:21.300175 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 00:08:21.422347 sshd[5680]: Connection closed by 10.0.0.1 port 39464
Jul 7 00:08:21.422707 sshd-session[5678]: pam_unix(sshd:session): session closed for user core
Jul 7 00:08:21.434688 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:39464.service: Deactivated successfully.
Jul 7 00:08:21.436557 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 00:08:21.437429 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit.
Jul 7 00:08:21.440615 systemd[1]: Started sshd@18-10.0.0.49:22-10.0.0.1:39468.service - OpenSSH per-connection server daemon (10.0.0.1:39468).
Jul 7 00:08:21.441518 systemd-logind[1560]: Removed session 18.
Jul 7 00:08:21.501482 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 39468 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:08:21.502873 sshd-session[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:08:21.507314 systemd-logind[1560]: New session 19 of user core.
Jul 7 00:08:21.526142 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 00:08:21.795700 sshd[5696]: Connection closed by 10.0.0.1 port 39468
Jul 7 00:08:21.796786 sshd-session[5694]: pam_unix(sshd:session): session closed for user core
Jul 7 00:08:21.808069 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:39468.service: Deactivated successfully.
Jul 7 00:08:21.810160 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 00:08:21.811103 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit.
Jul 7 00:08:21.815039 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:39470.service - OpenSSH per-connection server daemon (10.0.0.1:39470).
Jul 7 00:08:21.816099 systemd-logind[1560]: Removed session 19.
Jul 7 00:08:21.866791 sshd[5707]: Accepted publickey for core from 10.0.0.1 port 39470 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:08:21.868282 sshd-session[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:08:21.872752 systemd-logind[1560]: New session 20 of user core.
Jul 7 00:08:21.879143 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 00:08:23.636683 kubelet[2725]: E0707 00:08:23.636635 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:08:23.844980 sshd[5709]: Connection closed by 10.0.0.1 port 39470
Jul 7 00:08:23.846317 sshd-session[5707]: pam_unix(sshd:session): session closed for user core
Jul 7 00:08:23.859294 systemd[1]: Started sshd@20-10.0.0.49:22-10.0.0.1:39472.service - OpenSSH per-connection server daemon (10.0.0.1:39472).
Jul 7 00:08:23.859917 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:39470.service: Deactivated successfully.
Jul 7 00:08:23.867534 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 00:08:23.867799 systemd[1]: session-20.scope: Consumed 614ms CPU time, 72.6M memory peak.
Jul 7 00:08:23.872490 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit.
Jul 7 00:08:23.876957 systemd-logind[1560]: Removed session 20.
Jul 7 00:08:23.925230 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 39472 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:08:23.926931 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:08:23.933688 systemd-logind[1560]: New session 21 of user core.
Jul 7 00:08:23.944270 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 00:08:24.389146 sshd[5732]: Connection closed by 10.0.0.1 port 39472
Jul 7 00:08:24.389821 sshd-session[5727]: pam_unix(sshd:session): session closed for user core
Jul 7 00:08:24.399942 systemd[1]: sshd@20-10.0.0.49:22-10.0.0.1:39472.service: Deactivated successfully.
Jul 7 00:08:24.402728 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 00:08:24.405618 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit.
Jul 7 00:08:24.408513 systemd-logind[1560]: Removed session 21.
Jul 7 00:08:24.410067 systemd[1]: Started sshd@21-10.0.0.49:22-10.0.0.1:39476.service - OpenSSH per-connection server daemon (10.0.0.1:39476).
Jul 7 00:08:24.462176 sshd[5743]: Accepted publickey for core from 10.0.0.1 port 39476 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:08:24.464295 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:08:24.469897 systemd-logind[1560]: New session 22 of user core.
Jul 7 00:08:24.479252 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 00:08:24.631591 sshd[5745]: Connection closed by 10.0.0.1 port 39476
Jul 7 00:08:24.632259 sshd-session[5743]: pam_unix(sshd:session): session closed for user core
Jul 7 00:08:24.638504 systemd[1]: sshd@21-10.0.0.49:22-10.0.0.1:39476.service: Deactivated successfully.
Jul 7 00:08:24.641200 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 00:08:24.642694 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit.
Jul 7 00:08:24.644411 systemd-logind[1560]: Removed session 22.
Jul 7 00:08:28.636706 kubelet[2725]: E0707 00:08:28.636346 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:08:29.646312 systemd[1]: Started sshd@22-10.0.0.49:22-10.0.0.1:53252.service - OpenSSH per-connection server daemon (10.0.0.1:53252).
Jul 7 00:08:29.698899 sshd[5764]: Accepted publickey for core from 10.0.0.1 port 53252 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:08:29.700808 sshd-session[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:08:29.705314 systemd-logind[1560]: New session 23 of user core.
Jul 7 00:08:29.716151 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 00:08:29.830718 sshd[5766]: Connection closed by 10.0.0.1 port 53252
Jul 7 00:08:29.831092 sshd-session[5764]: pam_unix(sshd:session): session closed for user core
Jul 7 00:08:29.836371 systemd[1]: sshd@22-10.0.0.49:22-10.0.0.1:53252.service: Deactivated successfully.
Jul 7 00:08:29.839485 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 00:08:29.840971 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit.
Jul 7 00:08:29.842488 systemd-logind[1560]: Removed session 23.
Jul 7 00:08:30.435048 containerd[1575]: time="2025-07-07T00:08:30.433999961Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a74b31093f024da34d303f5cbe671481a153a07b76bfc974e6ffa3df0e3b5c7\" id:\"21829a6ec9d6b697e323471a3e1bdf96f661cf5d164af8fc516764e4eeaa9923\" pid:5791 exited_at:{seconds:1751846910 nanos:376384051}"
Jul 7 00:08:30.636454 kubelet[2725]: E0707 00:08:30.636399 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:08:34.845062 systemd[1]: Started sshd@23-10.0.0.49:22-10.0.0.1:53268.service - OpenSSH per-connection server daemon (10.0.0.1:53268).
Jul 7 00:08:34.899514 sshd[5811]: Accepted publickey for core from 10.0.0.1 port 53268 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:08:34.901071 sshd-session[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:08:34.905597 systemd-logind[1560]: New session 24 of user core.
Jul 7 00:08:34.915166 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 00:08:35.025599 sshd[5813]: Connection closed by 10.0.0.1 port 53268
Jul 7 00:08:35.025918 sshd-session[5811]: pam_unix(sshd:session): session closed for user core
Jul 7 00:08:35.031322 systemd[1]: sshd@23-10.0.0.49:22-10.0.0.1:53268.service: Deactivated successfully.
Jul 7 00:08:35.034145 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 00:08:35.035039 systemd-logind[1560]: Session 24 logged out. Waiting for processes to exit.
Jul 7 00:08:35.037236 systemd-logind[1560]: Removed session 24.
Jul 7 00:08:35.637055 kubelet[2725]: E0707 00:08:35.636795 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:08:36.592584 containerd[1575]: time="2025-07-07T00:08:36.592536494Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1ff10fc037d0f12a642c24eb4f582a7d6eff9c2b281fab2e72c23aeba2e171a\" id:\"85faea39cc006cb9fa6a37a8ce540c08481dd92d4a94e4403e7377ddc46d88a2\" pid:5837 exited_at:{seconds:1751846916 nanos:591605592}"
Jul 7 00:08:36.630422 containerd[1575]: time="2025-07-07T00:08:36.630377956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d15f961bc21a62f9a988b7af5746be10641bd9511b743015e6612752a25484e7\" id:\"7749a5e51baf5d72ee8cc39506a1fb93cefe020758bd3f3a4d5a68f7e9cef6f2\" pid:5856 exited_at:{seconds:1751846916 nanos:630048088}"
Jul 7 00:08:40.043071 systemd[1]: Started sshd@24-10.0.0.49:22-10.0.0.1:36728.service - OpenSSH per-connection server daemon (10.0.0.1:36728).
Jul 7 00:08:40.098198 sshd[5875]: Accepted publickey for core from 10.0.0.1 port 36728 ssh2: RSA SHA256:c2MxDz5KdjOZKHaJdpqg0/zLkxrP0+3r3zCFEYfXQ2Q
Jul 7 00:08:40.100228 sshd-session[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:08:40.105094 systemd-logind[1560]: New session 25 of user core.
Jul 7 00:08:40.119300 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 00:08:40.235038 sshd[5877]: Connection closed by 10.0.0.1 port 36728
Jul 7 00:08:40.235369 sshd-session[5875]: pam_unix(sshd:session): session closed for user core
Jul 7 00:08:40.240455 systemd[1]: sshd@24-10.0.0.49:22-10.0.0.1:36728.service: Deactivated successfully.
Jul 7 00:08:40.242963 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 00:08:40.244052 systemd-logind[1560]: Session 25 logged out. Waiting for processes to exit.
Jul 7 00:08:40.245733 systemd-logind[1560]: Removed session 25.