Jul 7 06:05:45.871568 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 06:05:45.871597 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:05:45.871607 kernel: BIOS-provided physical RAM map:
Jul 7 06:05:45.871613 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 7 06:05:45.871620 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 7 06:05:45.871626 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 06:05:45.871634 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jul 7 06:05:45.871646 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jul 7 06:05:45.871655 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 06:05:45.871662 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 06:05:45.871669 kernel: NX (Execute Disable) protection: active
Jul 7 06:05:45.871676 kernel: APIC: Static calls initialized
Jul 7 06:05:45.871683 kernel: SMBIOS 2.8 present.
Jul 7 06:05:45.871691 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jul 7 06:05:45.871702 kernel: DMI: Memory slots populated: 1/1
Jul 7 06:05:45.871709 kernel: Hypervisor detected: KVM
Jul 7 06:05:45.871720 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 06:05:45.871728 kernel: kvm-clock: using sched offset of 4230451101 cycles
Jul 7 06:05:45.871736 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 06:05:45.871744 kernel: tsc: Detected 2494.140 MHz processor
Jul 7 06:05:45.871752 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 06:05:45.871760 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 06:05:45.871768 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jul 7 06:05:45.871779 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 06:05:45.871787 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 06:05:45.871795 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:05:45.871802 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jul 7 06:05:45.871810 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:05:45.871818 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:05:45.871826 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:05:45.871834 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 7 06:05:45.871842 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:05:45.871852 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:05:45.871860 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:05:45.871867 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:05:45.871875 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jul 7 06:05:45.871883 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jul 7 06:05:45.871891 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 7 06:05:45.871899 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jul 7 06:05:45.871909 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jul 7 06:05:45.871920 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jul 7 06:05:45.871929 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jul 7 06:05:45.871937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 7 06:05:45.871945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 7 06:05:45.871978 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Jul 7 06:05:45.871987 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Jul 7 06:05:45.871999 kernel: Zone ranges:
Jul 7 06:05:45.872007 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 06:05:45.872016 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jul 7 06:05:45.872027 kernel: Normal empty
Jul 7 06:05:45.872037 kernel: Device empty
Jul 7 06:05:45.872047 kernel: Movable zone start for each node
Jul 7 06:05:45.872060 kernel: Early memory node ranges
Jul 7 06:05:45.872069 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 06:05:45.872077 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jul 7 06:05:45.872093 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jul 7 06:05:45.872107 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:05:45.872115 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 06:05:45.872123 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jul 7 06:05:45.872132 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 06:05:45.872140 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 06:05:45.872152 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 06:05:45.872160 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 06:05:45.872170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 06:05:45.872181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 06:05:45.872191 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 06:05:45.872200 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 06:05:45.872208 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 06:05:45.872216 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 06:05:45.872224 kernel: TSC deadline timer available
Jul 7 06:05:45.872232 kernel: CPU topo: Max. logical packages: 1
Jul 7 06:05:45.872241 kernel: CPU topo: Max. logical dies: 1
Jul 7 06:05:45.872249 kernel: CPU topo: Max. dies per package: 1
Jul 7 06:05:45.872257 kernel: CPU topo: Max. threads per core: 1
Jul 7 06:05:45.872273 kernel: CPU topo: Num. cores per package: 2
Jul 7 06:05:45.872285 kernel: CPU topo: Num. threads per package: 2
Jul 7 06:05:45.872299 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 7 06:05:45.872313 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 06:05:45.872327 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 7 06:05:45.872340 kernel: Booting paravirtualized kernel on KVM
Jul 7 06:05:45.872353 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 06:05:45.872364 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 06:05:45.872378 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 7 06:05:45.872395 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 7 06:05:45.872408 kernel: pcpu-alloc: [0] 0 1
Jul 7 06:05:45.872421 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 7 06:05:45.872437 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:05:45.872452 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:05:45.872463 kernel: random: crng init done
Jul 7 06:05:45.872476 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:05:45.872485 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 7 06:05:45.872496 kernel: Fallback order for Node 0: 0
Jul 7 06:05:45.872508 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Jul 7 06:05:45.872519 kernel: Policy zone: DMA32
Jul 7 06:05:45.872531 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:05:45.872540 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 06:05:45.872548 kernel: Kernel/User page tables isolation: enabled
Jul 7 06:05:45.872556 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 06:05:45.872565 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 06:05:45.872573 kernel: Dynamic Preempt: voluntary
Jul 7 06:05:45.872584 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:05:45.872594 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:05:45.872602 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 06:05:45.872611 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:05:45.872619 kernel: Rude variant of Tasks RCU enabled.
Jul 7 06:05:45.872627 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:05:45.872636 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:05:45.872644 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 06:05:45.872652 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:05:45.872667 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:05:45.872690 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:05:45.872702 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 7 06:05:45.872710 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:05:45.872719 kernel: Console: colour VGA+ 80x25
Jul 7 06:05:45.872727 kernel: printk: legacy console [tty0] enabled
Jul 7 06:05:45.872735 kernel: printk: legacy console [ttyS0] enabled
Jul 7 06:05:45.872744 kernel: ACPI: Core revision 20240827
Jul 7 06:05:45.872757 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 06:05:45.872787 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 06:05:45.872802 kernel: x2apic enabled
Jul 7 06:05:45.872816 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 06:05:45.872843 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 06:05:45.872865 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jul 7 06:05:45.872880 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Jul 7 06:05:45.872895 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 7 06:05:45.872910 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 7 06:05:45.872930 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 06:05:45.872947 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 06:05:45.873074 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 06:05:45.873083 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 7 06:05:45.873092 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 06:05:45.873101 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 06:05:45.873110 kernel: MDS: Mitigation: Clear CPU buffers
Jul 7 06:05:45.873119 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 7 06:05:45.873132 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 7 06:05:45.873141 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 06:05:45.873150 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 06:05:45.873159 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 06:05:45.873168 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 06:05:45.873177 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 7 06:05:45.873188 kernel: Freeing SMP alternatives memory: 32K
Jul 7 06:05:45.873200 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:05:45.873215 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 06:05:45.873233 kernel: landlock: Up and running.
Jul 7 06:05:45.873248 kernel: SELinux: Initializing.
Jul 7 06:05:45.873263 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 06:05:45.873280 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 06:05:45.873294 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jul 7 06:05:45.873308 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jul 7 06:05:45.873322 kernel: signal: max sigframe size: 1776
Jul 7 06:05:45.873337 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:05:45.873352 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:05:45.873371 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 06:05:45.873386 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 7 06:05:45.873399 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:05:45.873413 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 06:05:45.873426 kernel: .... node #0, CPUs: #1
Jul 7 06:05:45.873435 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 06:05:45.873447 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Jul 7 06:05:45.873461 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 125140K reserved, 0K cma-reserved)
Jul 7 06:05:45.873474 kernel: devtmpfs: initialized
Jul 7 06:05:45.873486 kernel: x86/mm: Memory block size: 128MB
Jul 7 06:05:45.873495 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:05:45.873504 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 06:05:45.873513 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:05:45.873522 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:05:45.873531 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:05:45.873542 kernel: audit: type=2000 audit(1751868343.334:1): state=initialized audit_enabled=0 res=1
Jul 7 06:05:45.873554 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:05:45.873565 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 06:05:45.873581 kernel: cpuidle: using governor menu
Jul 7 06:05:45.873593 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:05:45.873606 kernel: dca service started, version 1.12.1
Jul 7 06:05:45.873616 kernel: PCI: Using configuration type 1 for base access
Jul 7 06:05:45.873630 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 06:05:45.873647 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:05:45.873656 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:05:45.873665 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:05:45.873679 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:05:45.873698 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:05:45.873712 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:05:45.873728 kernel: ACPI: Interpreter enabled
Jul 7 06:05:45.873741 kernel: ACPI: PM: (supports S0 S5)
Jul 7 06:05:45.873750 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 06:05:45.873760 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 06:05:45.873769 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 06:05:45.873778 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 7 06:05:45.873787 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:05:45.874016 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:05:45.874121 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 7 06:05:45.874230 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 7 06:05:45.874243 kernel: acpiphp: Slot [3] registered
Jul 7 06:05:45.874252 kernel: acpiphp: Slot [4] registered
Jul 7 06:05:45.874260 kernel: acpiphp: Slot [5] registered
Jul 7 06:05:45.874269 kernel: acpiphp: Slot [6] registered
Jul 7 06:05:45.874283 kernel: acpiphp: Slot [7] registered
Jul 7 06:05:45.874291 kernel: acpiphp: Slot [8] registered
Jul 7 06:05:45.874300 kernel: acpiphp: Slot [9] registered
Jul 7 06:05:45.874309 kernel: acpiphp: Slot [10] registered
Jul 7 06:05:45.874318 kernel: acpiphp: Slot [11] registered
Jul 7 06:05:45.874326 kernel: acpiphp: Slot [12] registered
Jul 7 06:05:45.874335 kernel: acpiphp: Slot [13] registered
Jul 7 06:05:45.874344 kernel: acpiphp: Slot [14] registered
Jul 7 06:05:45.874353 kernel: acpiphp: Slot [15] registered
Jul 7 06:05:45.874362 kernel: acpiphp: Slot [16] registered
Jul 7 06:05:45.874374 kernel: acpiphp: Slot [17] registered
Jul 7 06:05:45.874382 kernel: acpiphp: Slot [18] registered
Jul 7 06:05:45.874391 kernel: acpiphp: Slot [19] registered
Jul 7 06:05:45.874400 kernel: acpiphp: Slot [20] registered
Jul 7 06:05:45.874409 kernel: acpiphp: Slot [21] registered
Jul 7 06:05:45.874417 kernel: acpiphp: Slot [22] registered
Jul 7 06:05:45.874426 kernel: acpiphp: Slot [23] registered
Jul 7 06:05:45.874435 kernel: acpiphp: Slot [24] registered
Jul 7 06:05:45.874444 kernel: acpiphp: Slot [25] registered
Jul 7 06:05:45.874455 kernel: acpiphp: Slot [26] registered
Jul 7 06:05:45.874464 kernel: acpiphp: Slot [27] registered
Jul 7 06:05:45.874472 kernel: acpiphp: Slot [28] registered
Jul 7 06:05:45.874481 kernel: acpiphp: Slot [29] registered
Jul 7 06:05:45.874490 kernel: acpiphp: Slot [30] registered
Jul 7 06:05:45.874498 kernel: acpiphp: Slot [31] registered
Jul 7 06:05:45.874507 kernel: PCI host bridge to bus 0000:00
Jul 7 06:05:45.874615 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 06:05:45.874729 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 06:05:45.874847 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 06:05:45.874936 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 7 06:05:45.875052 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 7 06:05:45.875166 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:05:45.875326 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 7 06:05:45.875445 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 7 06:05:45.875559 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jul 7 06:05:45.875654 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Jul 7 06:05:45.875747 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jul 7 06:05:45.875840 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jul 7 06:05:45.875945 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jul 7 06:05:45.877329 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jul 7 06:05:45.877497 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jul 7 06:05:45.877661 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Jul 7 06:05:45.877846 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 7 06:05:45.877986 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 7 06:05:45.878081 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 7 06:05:45.878190 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jul 7 06:05:45.878285 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jul 7 06:05:45.878384 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 7 06:05:45.878477 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Jul 7 06:05:45.878570 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Jul 7 06:05:45.878663 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 06:05:45.878777 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 06:05:45.878871 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Jul 7 06:05:45.881526 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Jul 7 06:05:45.881727 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 7 06:05:45.881899 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 06:05:45.882108 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Jul 7 06:05:45.882267 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Jul 7 06:05:45.882430 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 7 06:05:45.882622 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jul 7 06:05:45.882819 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Jul 7 06:05:45.884063 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Jul 7 06:05:45.884196 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 7 06:05:45.884311 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 06:05:45.884408 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Jul 7 06:05:45.884502 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Jul 7 06:05:45.884596 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 7 06:05:45.884720 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 06:05:45.884824 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Jul 7 06:05:45.884916 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Jul 7 06:05:45.885025 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Jul 7 06:05:45.885137 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jul 7 06:05:45.885232 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Jul 7 06:05:45.885328 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Jul 7 06:05:45.885340 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 06:05:45.885350 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 06:05:45.885359 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 06:05:45.885368 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 06:05:45.885378 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 7 06:05:45.885387 kernel: iommu: Default domain type: Translated
Jul 7 06:05:45.885396 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 06:05:45.885405 kernel: PCI: Using ACPI for IRQ routing
Jul 7 06:05:45.885418 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 06:05:45.885427 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 7 06:05:45.885436 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jul 7 06:05:45.885535 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 7 06:05:45.885629 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 7 06:05:45.885723 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 06:05:45.885735 kernel: vgaarb: loaded
Jul 7 06:05:45.885745 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 06:05:45.885754 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 06:05:45.885767 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 06:05:45.885776 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:05:45.885786 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:05:45.885795 kernel: pnp: PnP ACPI init
Jul 7 06:05:45.885804 kernel: pnp: PnP ACPI: found 4 devices
Jul 7 06:05:45.885813 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 06:05:45.885822 kernel: NET: Registered PF_INET protocol family
Jul 7 06:05:45.885831 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:05:45.885840 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 7 06:05:45.885852 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:05:45.885861 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 7 06:05:45.885870 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 7 06:05:45.885880 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 7 06:05:45.885889 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 7 06:05:45.885898 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 7 06:05:45.885907 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:05:45.885916 kernel: NET: Registered PF_XDP protocol family
Jul 7 06:05:45.887626 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 06:05:45.887750 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 06:05:45.887852 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 06:05:45.889023 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 7 06:05:45.889219 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 7 06:05:45.889380 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 7 06:05:45.889498 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 7 06:05:45.889521 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 7 06:05:45.889654 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 27213 usecs
Jul 7 06:05:45.889667 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:05:45.889677 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 7 06:05:45.889687 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jul 7 06:05:45.889696 kernel: Initialise system trusted keyrings
Jul 7 06:05:45.889705 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 7 06:05:45.889714 kernel: Key type asymmetric registered
Jul 7 06:05:45.889723 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:05:45.889733 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:05:45.889746 kernel: io scheduler mq-deadline registered
Jul 7 06:05:45.889755 kernel: io scheduler kyber registered
Jul 7 06:05:45.889764 kernel: io scheduler bfq registered
Jul 7 06:05:45.889773 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 06:05:45.889782 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 7 06:05:45.889791 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 7 06:05:45.889800 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 7 06:05:45.889809 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:05:45.889818 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:05:45.889829 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 06:05:45.889838 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 06:05:45.889847 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 06:05:45.892039 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 7 06:05:45.892070 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 06:05:45.892175 kernel: rtc_cmos 00:03: registered as rtc0
Jul 7 06:05:45.892264 kernel: rtc_cmos 00:03: setting system clock to 2025-07-07T06:05:45 UTC (1751868345)
Jul 7 06:05:45.892350 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jul 7 06:05:45.892369 kernel: intel_pstate: CPU model not supported
Jul 7 06:05:45.892379 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:05:45.892388 kernel: Segment Routing with IPv6
Jul 7 06:05:45.892397 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:05:45.892406 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:05:45.892416 kernel: Key type dns_resolver registered
Jul 7 06:05:45.892425 kernel: IPI shorthand broadcast: enabled
Jul 7 06:05:45.892435 kernel: sched_clock: Marking stable (3240005639, 113187855)->(3373995985, -20802491)
Jul 7 06:05:45.892444 kernel: registered taskstats version 1
Jul 7 06:05:45.892456 kernel: Loading compiled-in X.509 certificates
Jul 7 06:05:45.892465 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e'
Jul 7 06:05:45.892474 kernel: Demotion targets for Node 0: null
Jul 7 06:05:45.892483 kernel: Key type .fscrypt registered
Jul 7 06:05:45.892492 kernel: Key type fscrypt-provisioning registered
Jul 7 06:05:45.892505 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:05:45.892531 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:05:45.892543 kernel: ima: No architecture policies found
Jul 7 06:05:45.892555 kernel: clk: Disabling unused clocks
Jul 7 06:05:45.892565 kernel: Warning: unable to open an initial console.
Jul 7 06:05:45.892574 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 06:05:45.892583 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 06:05:45.892593 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 06:05:45.892602 kernel: Run /init as init process
Jul 7 06:05:45.892611 kernel: with arguments:
Jul 7 06:05:45.892621 kernel: /init
Jul 7 06:05:45.892630 kernel: with environment:
Jul 7 06:05:45.892639 kernel: HOME=/
Jul 7 06:05:45.892652 kernel: TERM=linux
Jul 7 06:05:45.892661 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:05:45.892687 systemd[1]: Successfully made /usr/ read-only.
Jul 7 06:05:45.892707 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:05:45.892717 systemd[1]: Detected virtualization kvm.
Jul 7 06:05:45.892726 systemd[1]: Detected architecture x86-64.
Jul 7 06:05:45.892736 systemd[1]: Running in initrd.
Jul 7 06:05:45.892750 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:05:45.892760 systemd[1]: Hostname set to .
Jul 7 06:05:45.892769 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:05:45.892779 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:05:45.892789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:05:45.892799 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:05:45.892810 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:05:45.892819 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:05:45.892832 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:05:45.892843 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:05:45.892854 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:05:45.892866 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:05:45.892878 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:05:45.892888 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:05:45.892898 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:05:45.892908 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:05:45.892918 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:05:45.892928 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:05:45.892938 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:05:45.892948 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:05:45.892979 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:05:45.892989 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 06:05:45.892999 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:05:45.893009 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:05:45.893019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:05:45.893029 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:05:45.893039 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:05:45.893049 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:05:45.893058 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:05:45.893072 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 06:05:45.893081 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:05:45.893091 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:05:45.893101 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:05:45.893111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:05:45.893120 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:05:45.893133 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:05:45.893143 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:05:45.893185 systemd-journald[211]: Collecting audit messages is disabled.
Jul 7 06:05:45.893214 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:05:45.893225 systemd-journald[211]: Journal started
Jul 7 06:05:45.893247 systemd-journald[211]: Runtime Journal (/run/log/journal/c2d4554b9bee492fbafc50bf1c767a5e) is 4.9M, max 39.5M, 34.6M free.
Jul 7 06:05:45.871183 systemd-modules-load[213]: Inserted module 'overlay'
Jul 7 06:05:45.921986 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:05:45.922029 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:05:45.922045 kernel: Bridge firewalling registered
Jul 7 06:05:45.921317 systemd-modules-load[213]: Inserted module 'br_netfilter'
Jul 7 06:05:45.923420 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:05:45.924042 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:05:45.925162 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:05:45.930726 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:05:45.932403 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:05:45.936141 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:05:45.940055 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:05:45.957405 systemd-tmpfiles[232]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 06:05:45.964083 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:05:45.966463 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:05:45.970741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:05:45.973743 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:05:45.974409 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:05:45.977305 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:05:46.003139 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:05:46.022976 systemd-resolved[249]: Positive Trust Anchors:
Jul 7 06:05:46.022989 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:05:46.023027 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:05:46.028521 systemd-resolved[249]: Defaulting to hostname 'linux'.
Jul 7 06:05:46.030461 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:05:46.030834 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:05:46.100981 kernel: SCSI subsystem initialized
Jul 7 06:05:46.109980 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:05:46.120994 kernel: iscsi: registered transport (tcp)
Jul 7 06:05:46.142985 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:05:46.143065 kernel: QLogic iSCSI HBA Driver
Jul 7 06:05:46.165867 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:05:46.183766 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:05:46.186142 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:05:46.238047 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:05:46.240114 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:05:46.297987 kernel: raid6: avx2x4 gen() 17443 MB/s
Jul 7 06:05:46.315002 kernel: raid6: avx2x2 gen() 17229 MB/s
Jul 7 06:05:46.332294 kernel: raid6: avx2x1 gen() 13018 MB/s
Jul 7 06:05:46.332378 kernel: raid6: using algorithm avx2x4 gen() 17443 MB/s
Jul 7 06:05:46.350054 kernel: raid6: .... xor() 7436 MB/s, rmw enabled
Jul 7 06:05:46.350129 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 06:05:46.371992 kernel: xor: automatically using best checksumming function avx
Jul 7 06:05:46.559018 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:05:46.568342 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:05:46.571313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:05:46.603055 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Jul 7 06:05:46.610325 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:05:46.615236 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:05:46.649006 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jul 7 06:05:46.680515 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:05:46.682631 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:05:46.765045 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:05:46.768869 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:05:46.848990 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Jul 7 06:05:46.853979 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jul 7 06:05:46.856313 kernel: scsi host0: Virtio SCSI HBA
Jul 7 06:05:46.863199 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jul 7 06:05:46.882205 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 06:05:46.882266 kernel: GPT:9289727 != 125829119
Jul 7 06:05:46.882279 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 06:05:46.882983 kernel: GPT:9289727 != 125829119
Jul 7 06:05:46.885336 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 06:05:46.885416 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:05:46.898979 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 06:05:46.903996 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jul 7 06:05:46.907989 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jul 7 06:05:46.939981 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 7 06:05:46.942738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:05:46.942936 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:05:46.944345 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:05:46.949387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:05:46.950640 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:05:46.965145 kernel: ACPI: bus type USB registered
Jul 7 06:05:46.965206 kernel: usbcore: registered new interface driver usbfs
Jul 7 06:05:46.965973 kernel: usbcore: registered new interface driver hub
Jul 7 06:05:46.966019 kernel: usbcore: registered new device driver usb
Jul 7 06:05:46.976974 kernel: AES CTR mode by8 optimization enabled
Jul 7 06:05:46.981230 kernel: libata version 3.00 loaded.
Jul 7 06:05:46.994046 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 7 06:05:47.008026 kernel: scsi host1: ata_piix
Jul 7 06:05:47.012012 kernel: scsi host2: ata_piix
Jul 7 06:05:47.014893 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Jul 7 06:05:47.014978 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Jul 7 06:05:47.074815 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 06:05:47.087047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:05:47.104527 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 06:05:47.115786 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:05:47.123890 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 06:05:47.124540 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 06:05:47.127011 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:05:47.162886 disk-uuid[607]: Primary Header is updated.
Jul 7 06:05:47.162886 disk-uuid[607]: Secondary Entries is updated.
Jul 7 06:05:47.162886 disk-uuid[607]: Secondary Header is updated.
Jul 7 06:05:47.185997 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:05:47.208417 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jul 7 06:05:47.208804 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jul 7 06:05:47.209042 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jul 7 06:05:47.210062 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jul 7 06:05:47.213065 kernel: hub 1-0:1.0: USB hub found
Jul 7 06:05:47.213361 kernel: hub 1-0:1.0: 2 ports detected
Jul 7 06:05:47.377872 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:05:47.379201 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:05:47.379635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:05:47.380538 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:05:47.382627 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:05:47.405987 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:05:48.199063 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:05:48.201814 disk-uuid[608]: The operation has completed successfully.
Jul 7 06:05:48.264134 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:05:48.264324 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:05:48.301806 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:05:48.323866 sh[634]: Success
Jul 7 06:05:48.345195 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:05:48.345309 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:05:48.346000 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 06:05:48.358320 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jul 7 06:05:48.421557 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:05:48.424545 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:05:48.432156 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:05:48.447147 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 06:05:48.447254 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (646)
Jul 7 06:05:48.451652 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac
Jul 7 06:05:48.451747 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:05:48.451761 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 06:05:48.462607 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:05:48.464079 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:05:48.464890 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 06:05:48.467205 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:05:48.471245 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:05:48.499998 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (675)
Jul 7 06:05:48.503320 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:05:48.503426 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:05:48.503449 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:05:48.513991 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:05:48.516118 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:05:48.520157 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:05:48.632152 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:05:48.637252 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:05:48.713522 systemd-networkd[817]: lo: Link UP
Jul 7 06:05:48.713536 systemd-networkd[817]: lo: Gained carrier
Jul 7 06:05:48.717082 systemd-networkd[817]: Enumeration completed
Jul 7 06:05:48.717282 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:05:48.718524 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 7 06:05:48.718531 systemd-networkd[817]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jul 7 06:05:48.720402 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:05:48.720409 systemd-networkd[817]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:05:48.721149 systemd[1]: Reached target network.target - Network.
Jul 7 06:05:48.722985 systemd-networkd[817]: eth0: Link UP
Jul 7 06:05:48.722992 systemd-networkd[817]: eth0: Gained carrier
Jul 7 06:05:48.723012 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 7 06:05:48.728489 systemd-networkd[817]: eth1: Link UP
Jul 7 06:05:48.728495 systemd-networkd[817]: eth1: Gained carrier
Jul 7 06:05:48.728519 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:05:48.751674 systemd-networkd[817]: eth1: DHCPv4 address 10.124.0.32/20 acquired from 169.254.169.253
Jul 7 06:05:48.759106 systemd-networkd[817]: eth0: DHCPv4 address 24.199.107.192/20, gateway 24.199.96.1 acquired from 169.254.169.253
Jul 7 06:05:48.779216 ignition[718]: Ignition 2.21.0
Jul 7 06:05:48.779233 ignition[718]: Stage: fetch-offline
Jul 7 06:05:48.779311 ignition[718]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:05:48.779327 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 06:05:48.779477 ignition[718]: parsed url from cmdline: ""
Jul 7 06:05:48.779483 ignition[718]: no config URL provided
Jul 7 06:05:48.779491 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:05:48.782700 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:05:48.779500 ignition[718]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:05:48.779509 ignition[718]: failed to fetch config: resource requires networking
Jul 7 06:05:48.779766 ignition[718]: Ignition finished successfully
Jul 7 06:05:48.786408 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 06:05:48.819230 ignition[826]: Ignition 2.21.0
Jul 7 06:05:48.820109 ignition[826]: Stage: fetch
Jul 7 06:05:48.820432 ignition[826]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:05:48.820448 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 06:05:48.820601 ignition[826]: parsed url from cmdline: ""
Jul 7 06:05:48.820606 ignition[826]: no config URL provided
Jul 7 06:05:48.820614 ignition[826]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:05:48.820627 ignition[826]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:05:48.820699 ignition[826]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jul 7 06:05:48.836442 ignition[826]: GET result: OK
Jul 7 06:05:48.836622 ignition[826]: parsing config with SHA512: 91a66a650640a2216141a9ecd773cc78e2e88d5f054493e83be79c37679668f7d2dff7fcee799f94e15860bc6f18f2f052661fa52110d86ae918f35c6ecbfa79
Jul 7 06:05:48.842505 unknown[826]: fetched base config from "system"
Jul 7 06:05:48.843360 unknown[826]: fetched base config from "system"
Jul 7 06:05:48.844192 ignition[826]: fetch: fetch complete
Jul 7 06:05:48.843375 unknown[826]: fetched user config from "digitalocean"
Jul 7 06:05:48.844204 ignition[826]: fetch: fetch passed
Jul 7 06:05:48.844318 ignition[826]: Ignition finished successfully
Jul 7 06:05:48.849036 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 06:05:48.852172 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:05:48.903879 ignition[834]: Ignition 2.21.0
Jul 7 06:05:48.903915 ignition[834]: Stage: kargs
Jul 7 06:05:48.904304 ignition[834]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:05:48.904321 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 06:05:48.906353 ignition[834]: kargs: kargs passed
Jul 7 06:05:48.906472 ignition[834]: Ignition finished successfully
Jul 7 06:05:48.908150 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:05:48.910997 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 06:05:48.953673 ignition[840]: Ignition 2.21.0
Jul 7 06:05:48.953688 ignition[840]: Stage: disks
Jul 7 06:05:48.954227 ignition[840]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:05:48.954252 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 06:05:48.958291 ignition[840]: disks: disks passed
Jul 7 06:05:48.959605 ignition[840]: Ignition finished successfully
Jul 7 06:05:48.961614 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 06:05:48.963038 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 06:05:48.963433 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 06:05:48.963802 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:05:48.964774 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:05:48.965116 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:05:48.967022 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 06:05:49.000876 systemd-fsck[848]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 7 06:05:49.004994 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 06:05:49.008566 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 06:05:49.141980 kernel: EXT4-fs (vda9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none.
Jul 7 06:05:49.143048 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 06:05:49.144151 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:05:49.146514 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:05:49.148651 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 06:05:49.156204 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Jul 7 06:05:49.161139 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 06:05:49.162243 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 06:05:49.162345 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:05:49.165561 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 06:05:49.173017 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (856)
Jul 7 06:05:49.173113 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:05:49.175817 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:05:49.175895 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:05:49.177253 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 06:05:49.198068 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:05:49.258141 coreos-metadata[859]: Jul 07 06:05:49.258 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 7 06:05:49.261689 initrd-setup-root[888]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 06:05:49.266917 coreos-metadata[858]: Jul 07 06:05:49.266 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 7 06:05:49.268246 coreos-metadata[859]: Jul 07 06:05:49.268 INFO Fetch successful
Jul 7 06:05:49.273243 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory
Jul 7 06:05:49.275009 coreos-metadata[859]: Jul 07 06:05:49.274 INFO wrote hostname ci-4372.0.1-6-9e8df2071f to /sysroot/etc/hostname
Jul 7 06:05:49.276464 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 06:05:49.278789 coreos-metadata[858]: Jul 07 06:05:49.278 INFO Fetch successful
Jul 7 06:05:49.280060 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 06:05:49.285618 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Jul 7 06:05:49.286196 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Jul 7 06:05:49.287497 initrd-setup-root[910]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 06:05:49.389201 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 06:05:49.391795 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 06:05:49.393671 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 06:05:49.408978 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:05:49.425705 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 06:05:49.443988 ignition[979]: INFO : Ignition 2.21.0
Jul 7 06:05:49.444733 ignition[979]: INFO : Stage: mount
Jul 7 06:05:49.445312 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:05:49.446264 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 06:05:49.447782 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 06:05:49.452437 ignition[979]: INFO : mount: mount passed
Jul 7 06:05:49.453176 ignition[979]: INFO : Ignition finished successfully
Jul 7 06:05:49.455514 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 06:05:49.457149 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 06:05:49.472537 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:05:49.489982 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (991)
Jul 7 06:05:49.490207 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:05:49.492029 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:05:49.492982 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:05:49.498401 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:05:49.534960 ignition[1007]: INFO : Ignition 2.21.0
Jul 7 06:05:49.534960 ignition[1007]: INFO : Stage: files
Jul 7 06:05:49.535907 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:05:49.535907 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 06:05:49.536741 ignition[1007]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:05:49.538070 ignition[1007]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:05:49.538070 ignition[1007]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:05:49.542092 ignition[1007]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:05:49.542647 ignition[1007]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:05:49.543240 ignition[1007]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:05:49.542725 unknown[1007]: wrote ssh authorized keys file for user: core
Jul 7 06:05:49.545101 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 7 06:05:49.545820 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 7 06:05:49.592461 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:05:49.725799 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 7 06:05:49.727176 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:05:49.727176 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:05:49.727176 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:05:49.727176 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:05:49.727176 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:05:49.727176 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:05:49.727176 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:05:49.727176 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:05:49.739905 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:05:49.739905 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:05:49.739905 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 06:05:49.739905 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 06:05:49.739905 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 06:05:49.739905 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 7 06:05:50.332229 systemd-networkd[817]: eth0: Gained IPv6LL
Jul 7 06:05:50.504453 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 06:05:50.588318 systemd-networkd[817]: eth1: Gained IPv6LL
Jul 7 06:05:51.997007 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 06:05:51.998299 ignition[1007]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 06:05:51.998299 ignition[1007]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:05:52.001015 ignition[1007]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:05:52.001015 ignition[1007]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 06:05:52.001015 ignition[1007]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:05:52.001015 ignition[1007]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:05:52.001015 ignition[1007]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:05:52.001015 ignition[1007]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:05:52.001015 ignition[1007]: INFO : files: files passed
Jul 7 06:05:52.001015 ignition[1007]: INFO : Ignition finished successfully
Jul 7 06:05:52.002298 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:05:52.006128 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:05:52.008894 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:05:52.020849 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:05:52.021031 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:05:52.029620 initrd-setup-root-after-ignition[1038]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:05:52.029620 initrd-setup-root-after-ignition[1038]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:05:52.031970 initrd-setup-root-after-ignition[1042]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:05:52.033642 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:05:52.034568 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:05:52.036143 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:05:52.088713 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:05:52.088852 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:05:52.089758 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:05:52.090311 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:05:52.091015 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:05:52.091887 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:05:52.117635 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:05:52.119640 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:05:52.142334 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:05:52.143391 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:05:52.143821 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:05:52.144229 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:05:52.144357 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:05:52.145436 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:05:52.146129 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:05:52.146768 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:05:52.147417 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:05:52.148098 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:05:52.148760 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:05:52.149427 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:05:52.150113 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:05:52.150773 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:05:52.151437 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:05:52.152078 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:05:52.152654 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:05:52.152793 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:05:52.153611 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:05:52.154282 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:05:52.154881 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:05:52.154990 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:05:52.155693 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:05:52.155856 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:05:52.156827 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:05:52.157021 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:05:52.157700 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:05:52.157804 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:05:52.158331 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 06:05:52.158459 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 06:05:52.161108 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:05:52.161766 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:05:52.161936 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:05:52.165163 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:05:52.165568 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:05:52.165742 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:05:52.167331 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:05:52.167486 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:05:52.176407 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:05:52.176500 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:05:52.198900 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:05:52.199793 ignition[1062]: INFO : Ignition 2.21.0
Jul 7 06:05:52.199793 ignition[1062]: INFO : Stage: umount
Jul 7 06:05:52.199793 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:05:52.199793 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 7 06:05:52.202624 ignition[1062]: INFO : umount: umount passed
Jul 7 06:05:52.202624 ignition[1062]: INFO : Ignition finished successfully
Jul 7 06:05:52.200887 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:05:52.201013 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:05:52.203575 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:05:52.203694 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:05:52.204929 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:05:52.205377 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:05:52.205836 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:05:52.205878 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:05:52.206409 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 06:05:52.206447 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 06:05:52.206948 systemd[1]: Stopped target network.target - Network.
Jul 7 06:05:52.207561 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:05:52.207613 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:05:52.208192 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:05:52.208812 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:05:52.212058 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:05:52.212515 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:05:52.213318 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:05:52.213999 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:05:52.214048 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:05:52.214542 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:05:52.214575 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:05:52.215173 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:05:52.215249 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:05:52.215835 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:05:52.215877 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:05:52.216398 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:05:52.216449 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:05:52.217283 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:05:52.217874 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:05:52.220562 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:05:52.220688 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:05:52.224798 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 06:05:52.226167 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:05:52.226687 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:05:52.228984 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:05:52.232985 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:05:52.233121 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:05:52.234825 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 06:05:52.235075 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 06:05:52.235704 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:05:52.235744 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:05:52.237488 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:05:52.237887 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:05:52.237943 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:05:52.239540 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:05:52.239587 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:05:52.243563 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:05:52.243627 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:05:52.244426 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:05:52.247594 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:05:52.259856 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:05:52.261141 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:05:52.261873 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:05:52.261916 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:05:52.262301 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:05:52.262334 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:05:52.263142 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:05:52.263190 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:05:52.264378 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:05:52.264428 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:05:52.265165 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:05:52.265216 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:05:52.266664 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:05:52.267370 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 06:05:52.267426 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:05:52.269600 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:05:52.269650 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:05:52.271410 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:05:52.271456 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:05:52.273832 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:05:52.275104 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:05:52.282772 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:05:52.282886 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:05:52.284348 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:05:52.285923 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:05:52.307626 systemd[1]: Switching root.
Jul 7 06:05:52.339997 systemd-journald[211]: Journal stopped
Jul 7 06:05:53.538659 systemd-journald[211]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:05:53.538763 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:05:53.538787 kernel: SELinux: policy capability open_perms=1
Jul 7 06:05:53.538807 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:05:53.538826 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:05:53.538850 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:05:53.538870 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:05:53.538889 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:05:53.538908 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:05:53.538926 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 06:05:53.548668 kernel: audit: type=1403 audit(1751868352.503:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:05:53.548733 systemd[1]: Successfully loaded SELinux policy in 54.691ms.
Jul 7 06:05:53.548776 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.482ms.
Jul 7 06:05:53.548800 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:05:53.548827 systemd[1]: Detected virtualization kvm.
Jul 7 06:05:53.548848 systemd[1]: Detected architecture x86-64.
Jul 7 06:05:53.548868 systemd[1]: Detected first boot.
Jul 7 06:05:53.548894 systemd[1]: Hostname set to .
Jul 7 06:05:53.548915 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:05:53.548936 zram_generator::config[1110]: No configuration found.
Jul 7 06:05:53.548999 kernel: Guest personality initialized and is inactive
Jul 7 06:05:53.549020 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 7 06:05:53.549041 kernel: Initialized host personality
Jul 7 06:05:53.549059 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 06:05:53.549076 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:05:53.549096 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 06:05:53.549118 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:05:53.549136 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:05:53.549153 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:05:53.549170 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:05:53.549186 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:05:53.549207 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:05:53.549226 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:05:53.549243 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:05:53.549260 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:05:53.549277 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:05:53.549294 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:05:53.549311 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:05:53.549329 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:05:53.549350 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:05:53.549368 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:05:53.549388 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:05:53.549406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:05:53.549423 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 06:05:53.549441 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:05:53.549464 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:05:53.549484 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:05:53.549504 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:05:53.549523 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:05:53.549539 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:05:53.549558 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:05:53.549577 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:05:53.549594 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:05:53.549612 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:05:53.549631 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:05:53.549656 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:05:53.549675 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 06:05:53.549695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:05:53.549716 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:05:53.549737 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:05:53.549757 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:05:53.549780 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:05:53.549801 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:05:53.549821 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:05:53.549847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:05:53.549866 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:05:53.549883 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:05:53.549898 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:05:53.549918 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:05:53.549938 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:05:53.549967 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:05:53.549981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:05:53.550036 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:05:53.550058 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:05:53.550073 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:05:53.550085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:05:53.550097 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:05:53.550109 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:05:53.550122 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:05:53.550150 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:05:53.550175 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:05:53.550196 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:05:53.550210 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:05:53.550222 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:05:53.550236 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:05:53.550249 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:05:53.550263 kernel: loop: module loaded
Jul 7 06:05:53.550277 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:05:53.550289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:05:53.550305 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:05:53.550318 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 06:05:53.550342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:05:53.550366 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:05:53.550379 systemd[1]: Stopped verity-setup.service.
Jul 7 06:05:53.550392 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:05:53.550404 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:05:53.550417 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:05:53.550430 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:05:53.550442 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:05:53.550458 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:05:53.550470 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:05:53.550482 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:05:53.550494 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:05:53.550507 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:05:53.550519 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:05:53.550531 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:05:53.550546 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:05:53.550558 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:05:53.550577 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:05:53.550596 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:05:53.550615 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:05:53.550636 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:05:53.550654 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:05:53.550673 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:05:53.550692 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:05:53.550709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:05:53.550727 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:05:53.550750 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:05:53.550769 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:05:53.550791 kernel: fuse: init (API version 7.41)
Jul 7 06:05:53.550811 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:05:53.550833 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 06:05:53.550851 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:05:53.550868 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:05:53.556603 systemd-journald[1177]: Collecting audit messages is disabled.
Jul 7 06:05:53.556740 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:05:53.556786 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:05:53.556803 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:05:53.556817 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:05:53.556830 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:05:53.556843 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:05:53.556860 systemd-journald[1177]: Journal started
Jul 7 06:05:53.556889 systemd-journald[1177]: Runtime Journal (/run/log/journal/c2d4554b9bee492fbafc50bf1c767a5e) is 4.9M, max 39.5M, 34.6M free.
Jul 7 06:05:53.205887 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:05:53.231775 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 06:05:53.564024 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:05:53.232340 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:05:53.566208 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:05:53.608116 kernel: ACPI: bus type drm_connector registered
Jul 7 06:05:53.612187 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 06:05:53.615361 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:05:53.617015 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:05:53.628055 kernel: loop0: detected capacity change from 0 to 8
Jul 7 06:05:53.643001 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:05:53.653984 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:05:53.656096 systemd-journald[1177]: Time spent on flushing to /var/log/journal/c2d4554b9bee492fbafc50bf1c767a5e is 76.334ms for 1003 entries.
Jul 7 06:05:53.656096 systemd-journald[1177]: System Journal (/var/log/journal/c2d4554b9bee492fbafc50bf1c767a5e) is 8M, max 195.6M, 187.6M free.
Jul 7 06:05:53.757428 systemd-journald[1177]: Received client request to flush runtime journal.
Jul 7 06:05:53.757488 kernel: loop1: detected capacity change from 0 to 146240
Jul 7 06:05:53.757513 kernel: loop2: detected capacity change from 0 to 113872
Jul 7 06:05:53.657183 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:05:53.660934 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 06:05:53.665034 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:05:53.668166 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:05:53.679302 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:05:53.695348 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 06:05:53.762046 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:05:53.780794 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:05:53.784245 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:05:53.794296 kernel: loop3: detected capacity change from 0 to 229808
Jul 7 06:05:53.849182 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:05:53.874995 kernel: loop4: detected capacity change from 0 to 8
Jul 7 06:05:53.880260 kernel: loop5: detected capacity change from 0 to 146240
Jul 7 06:05:53.901522 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jul 7 06:05:53.903013 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jul 7 06:05:53.908979 kernel: loop6: detected capacity change from 0 to 113872
Jul 7 06:05:53.918380 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:05:53.937981 kernel: loop7: detected capacity change from 0 to 229808
Jul 7 06:05:53.953128 (sd-merge)[1252]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jul 7 06:05:53.953672 (sd-merge)[1252]: Merged extensions into '/usr'.
Jul 7 06:05:53.962573 systemd[1]: Reload requested from client PID 1209 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:05:53.963061 systemd[1]: Reloading...
Jul 7 06:05:54.094403 zram_generator::config[1279]: No configuration found.
Jul 7 06:05:54.270049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:05:54.300978 ldconfig[1204]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:05:54.363858 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:05:54.364484 systemd[1]: Reloading finished in 400 ms.
Jul 7 06:05:54.389793 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:05:54.390983 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:05:54.398930 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:05:54.407162 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:05:54.413219 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:05:54.429157 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:05:54.440118 systemd[1]: Reload requested from client PID 1323 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:05:54.440137 systemd[1]: Reloading...
Jul 7 06:05:54.452339 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 06:05:54.452381 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 06:05:54.452677 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:05:54.452940 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:05:54.454149 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:05:54.454623 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Jul 7 06:05:54.454704 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Jul 7 06:05:54.459687 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:05:54.459695 systemd-tmpfiles[1324]: Skipping /boot
Jul 7 06:05:54.475733 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:05:54.475747 systemd-tmpfiles[1324]: Skipping /boot
Jul 7 06:05:54.536984 zram_generator::config[1348]: No configuration found.
Jul 7 06:05:54.732723 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:05:54.886260 systemd[1]: Reloading finished in 445 ms.
Jul 7 06:05:54.915425 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:05:54.934417 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:05:54.945321 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:05:54.950249 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:05:54.954541 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:05:54.958655 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:05:54.966531 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:05:54.969336 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:05:54.972229 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:05:54.972414 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:05:54.976365 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:05:54.986358 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:05:54.992411 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:05:54.993568 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:05:54.993773 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:05:54.993927 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:05:55.002342 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:05:55.006358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:05:55.006597 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:05:55.006799 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:05:55.006894 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:05:55.007089 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:05:55.012766 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:05:55.015114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:05:55.016684 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:05:55.018290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:05:55.018436 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:05:55.018605 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:05:55.023531 systemd[1]: Finished ensure-sysext.service.
Jul 7 06:05:55.036245 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 06:05:55.047912 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 06:05:55.058223 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 06:05:55.064746 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:05:55.065364 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:05:55.077716 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 06:05:55.085334 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:05:55.085558 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:05:55.094999 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 06:05:55.095675 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:05:55.099725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:05:55.101044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:05:55.101833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:05:55.103145 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:05:55.104931 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:05:55.105055 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:05:55.108159 systemd-udevd[1401]: Using default interface naming scheme 'v255'.
Jul 7 06:05:55.112849 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 06:05:55.147346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:05:55.155268 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:05:55.173139 augenrules[1455]: No rules
Jul 7 06:05:55.175126 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:05:55.175426 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:05:55.185758 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 06:05:55.264436 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 06:05:55.265205 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 06:05:55.356969 systemd-networkd[1441]: lo: Link UP
Jul 7 06:05:55.359014 systemd-networkd[1441]: lo: Gained carrier
Jul 7 06:05:55.360427 systemd-networkd[1441]: Enumeration completed
Jul 7 06:05:55.360563 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:05:55.365288 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 7 06:05:55.371273 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 06:05:55.402171 systemd-resolved[1400]: Positive Trust Anchors:
Jul 7 06:05:55.402643 systemd-resolved[1400]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:05:55.402712 systemd-resolved[1400]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:05:55.408074 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 06:05:55.426317 systemd-resolved[1400]: Using system hostname 'ci-4372.0.1-6-9e8df2071f'.
Jul 7 06:05:55.447795 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:05:55.449400 systemd[1]: Reached target network.target - Network.
Jul 7 06:05:55.451096 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:05:55.451628 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:05:55.452469 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 06:05:55.454386 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 06:05:55.454917 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 7 06:05:55.455660 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 06:05:55.457274 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 06:05:55.457934 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 06:05:55.459245 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 06:05:55.459290 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:05:55.460004 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:05:55.462419 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 06:05:55.462685 systemd-networkd[1441]: eth1: Configuring with /run/systemd/network/10-16:71:31:e2:e1:7b.network.
Jul 7 06:05:55.465916 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 06:05:55.468808 systemd-networkd[1441]: eth1: Link UP
Jul 7 06:05:55.468993 systemd-networkd[1441]: eth1: Gained carrier
Jul 7 06:05:55.473479 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 7 06:05:55.474347 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 7 06:05:55.475554 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 7 06:05:55.476478 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Jul 7 06:05:55.486545 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 06:05:55.488793 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 7 06:05:55.491140 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 06:05:55.494064 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 06:05:55.500612 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Jul 7 06:05:55.501505 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:05:55.502824 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:05:55.507089 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jul 7 06:05:55.509077 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:05:55.509129 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:05:55.512177 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 06:05:55.519224 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 7 06:05:55.522272 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 06:05:55.528286 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 06:05:55.531645 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 06:05:55.539321 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 06:05:55.540151 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 06:05:55.549635 jq[1495]: false
Jul 7 06:05:55.558840 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 7 06:05:55.564327 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 06:05:55.568854 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 06:05:55.578199 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 06:05:55.583482 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 06:05:55.598068 kernel: ISO 9660 Extensions: RRIP_1991A
Jul 7 06:05:55.601656 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 06:05:55.604078 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 06:05:55.604754 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 06:05:55.605742 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 06:05:55.616853 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 06:05:55.623005 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 06:05:55.623747 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 06:05:55.624280 oslogin_cache_refresh[1497]: Refreshing passwd entry cache
Jul 7 06:05:55.625227 google_oslogin_nss_cache[1497]: oslogin_cache_refresh[1497]: Refreshing passwd entry cache
Jul 7 06:05:55.623946 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 06:05:55.628330 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 06:05:55.632182 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 06:05:55.633496 google_oslogin_nss_cache[1497]: oslogin_cache_refresh[1497]: Failure getting users, quitting
Jul 7 06:05:55.637006 oslogin_cache_refresh[1497]: Failure getting users, quitting
Jul 7 06:05:55.637419 google_oslogin_nss_cache[1497]: oslogin_cache_refresh[1497]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:05:55.637057 oslogin_cache_refresh[1497]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:05:55.639032 google_oslogin_nss_cache[1497]: oslogin_cache_refresh[1497]: Refreshing group entry cache
Jul 7 06:05:55.637755 oslogin_cache_refresh[1497]: Refreshing group entry cache
Jul 7 06:05:55.644802 google_oslogin_nss_cache[1497]: oslogin_cache_refresh[1497]: Failure getting groups, quitting
Jul 7 06:05:55.644802 google_oslogin_nss_cache[1497]: oslogin_cache_refresh[1497]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:05:55.640264 oslogin_cache_refresh[1497]: Failure getting groups, quitting
Jul 7 06:05:55.640277 oslogin_cache_refresh[1497]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:05:55.647784 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 7 06:05:55.654205 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 7 06:05:55.682777 jq[1513]: true
Jul 7 06:05:55.691987 update_engine[1511]: I20250707 06:05:55.690547 1511 main.cc:92] Flatcar Update Engine starting
Jul 7 06:05:55.697117 (ntainerd)[1528]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 06:05:55.698205 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jul 7 06:05:55.709869 coreos-metadata[1492]: Jul 07 06:05:55.709 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 7 06:05:55.709869 coreos-metadata[1492]: Jul 07 06:05:55.709 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
Jul 7 06:05:55.720858 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:05:55.722122 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 06:05:55.725377 extend-filesystems[1496]: Found /dev/vda6
Jul 7 06:05:55.729881 tar[1515]: linux-amd64/LICENSE
Jul 7 06:05:55.734530 tar[1515]: linux-amd64/helm
Jul 7 06:05:55.740096 jq[1533]: true
Jul 7 06:05:55.740355 extend-filesystems[1496]: Found /dev/vda9
Jul 7 06:05:55.757681 extend-filesystems[1496]: Checking size of /dev/vda9
Jul 7 06:05:55.758922 dbus-daemon[1493]: [system] SELinux support is enabled
Jul 7 06:05:55.760517 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 06:05:55.778338 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 06:05:55.778379 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 06:05:55.778916 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 06:05:55.779016 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jul 7 06:05:55.779031 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 06:05:55.808593 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 06:05:55.812972 update_engine[1511]: I20250707 06:05:55.810742 1511 update_check_scheduler.cc:74] Next update check in 7m46s
Jul 7 06:05:55.822225 extend-filesystems[1496]: Resized partition /dev/vda9
Jul 7 06:05:55.829055 extend-filesystems[1556]: resize2fs 1.47.2 (1-Jan-2025)
Jul 7 06:05:55.838981 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jul 7 06:05:55.846783 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 06:05:55.853041 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:05:55.858518 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 06:05:55.890978 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 06:05:55.913456 systemd-networkd[1441]: eth0: Configuring with /run/systemd/network/10-06:f6:db:b8:5a:7d.network.
Jul 7 06:05:55.914943 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Jul 7 06:05:55.915674 systemd-networkd[1441]: eth0: Link UP
Jul 7 06:05:55.915903 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Jul 7 06:05:55.917387 systemd-networkd[1441]: eth0: Gained carrier
Jul 7 06:05:55.924160 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Jul 7 06:05:55.927451 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Jul 7 06:05:55.958028 bash[1564]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:05:55.959197 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 06:05:55.962831 systemd[1]: Starting sshkeys.service...
Jul 7 06:05:56.001514 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 06:05:56.044452 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jul 7 06:05:56.073112 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 7 06:05:56.079680 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 7 06:05:56.081127 extend-filesystems[1556]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 7 06:05:56.081127 extend-filesystems[1556]: old_desc_blocks = 1, new_desc_blocks = 8
Jul 7 06:05:56.081127 extend-filesystems[1556]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jul 7 06:05:56.087715 extend-filesystems[1496]: Resized filesystem in /dev/vda9
Jul 7 06:05:56.081472 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 06:05:56.081757 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 06:05:56.092095 systemd-logind[1509]: New seat seat0.
Jul 7 06:05:56.094046 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 06:05:56.166994 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 7 06:05:56.169740 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 06:05:56.175865 containerd[1528]: time="2025-07-07T06:05:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 7 06:05:56.177981 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 7 06:05:56.180989 containerd[1528]: time="2025-07-07T06:05:56.180929476Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 7 06:05:56.184020 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 7 06:05:56.193655 coreos-metadata[1574]: Jul 07 06:05:56.193 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 7 06:05:56.194146 kernel: ACPI: button: Power Button [PWRF]
Jul 7 06:05:56.205998 coreos-metadata[1574]: Jul 07 06:05:56.205 INFO Fetch successful
Jul 7 06:05:56.209989 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 06:05:56.226538 unknown[1574]: wrote ssh authorized keys file for user: core
Jul 7 06:05:56.242979 containerd[1528]: time="2025-07-07T06:05:56.236582425Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.329µs"
Jul 7 06:05:56.242979 containerd[1528]: time="2025-07-07T06:05:56.236637949Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 7 06:05:56.242979 containerd[1528]: time="2025-07-07T06:05:56.236661278Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 7 06:05:56.242979 containerd[1528]: time="2025-07-07T06:05:56.236827904Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 7 06:05:56.242979 containerd[1528]: time="2025-07-07T06:05:56.236843422Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 7 06:05:56.242979 containerd[1528]: time="2025-07-07T06:05:56.236876788Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:05:56.242979 containerd[1528]: time="2025-07-07T06:05:56.242340856Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:05:56.242979 containerd[1528]: time="2025-07-07T06:05:56.242406642Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:05:56.242979 containerd[1528]: time="2025-07-07T06:05:56.242772575Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:05:56.242979 containerd[1528]: time="2025-07-07T06:05:56.242791064Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:05:56.242979 containerd[1528]: time="2025-07-07T06:05:56.242818268Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:05:56.242979 containerd[1528]: time="2025-07-07T06:05:56.242828930Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 7 06:05:56.243309 containerd[1528]: time="2025-07-07T06:05:56.242917916Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 7 06:05:56.248051 containerd[1528]: time="2025-07-07T06:05:56.247999617Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:05:56.248235 containerd[1528]: time="2025-07-07T06:05:56.248213560Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:05:56.248796 containerd[1528]: time="2025-07-07T06:05:56.248762814Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 7 06:05:56.248960 containerd[1528]: time="2025-07-07T06:05:56.248931873Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 7 06:05:56.251363 containerd[1528]: time="2025-07-07T06:05:56.251323927Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 7 06:05:56.255518 containerd[1528]: time="2025-07-07T06:05:56.254228500Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264004185Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264089255Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264105181Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264129178Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264142834Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264155418Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264203987Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264218722Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264229478Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264239078Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264248788Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264261447Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264451326Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 7 06:05:56.264519 containerd[1528]: time="2025-07-07T06:05:56.264475231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 7 06:05:56.265003 containerd[1528]: time="2025-07-07T06:05:56.264493829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 7 06:05:56.266209 containerd[1528]: time="2025-07-07T06:05:56.265268390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 7 06:05:56.266209 containerd[1528]: time="2025-07-07T06:05:56.265301700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 7 06:05:56.266209 containerd[1528]: time="2025-07-07T06:05:56.266075794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 7 06:05:56.266209 containerd[1528]: time="2025-07-07T06:05:56.266106952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 7 06:05:56.266209 containerd[1528]: time="2025-07-07T06:05:56.266133513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 7 06:05:56.266209 containerd[1528]: time="2025-07-07T06:05:56.266149911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 7 06:05:56.266209 containerd[1528]: time="2025-07-07T06:05:56.266161095Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 7 06:05:56.266209 containerd[1528]: time="2025-07-07T06:05:56.266177493Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 7 06:05:56.267499 containerd[1528]: time="2025-07-07T06:05:56.267205122Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 7 06:05:56.267499 containerd[1528]: time="2025-07-07T06:05:56.267237076Z" level=info msg="Start snapshots syncer"
Jul 7 06:05:56.269679 containerd[1528]: time="2025-07-07T06:05:56.267625785Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 7 06:05:56.273366 containerd[1528]: time="2025-07-07T06:05:56.273232162Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 7 06:05:56.274550 containerd[1528]: time="2025-07-07T06:05:56.273867736Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 7 06:05:56.277086 containerd[1528]: time="2025-07-07T06:05:56.275984157Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 7 06:05:56.277086 containerd[1528]: time="2025-07-07T06:05:56.276336211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 7 06:05:56.277086 containerd[1528]: time="2025-07-07T06:05:56.276370911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 7 06:05:56.277086 containerd[1528]: time="2025-07-07T06:05:56.276383113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 7 06:05:56.277086 containerd[1528]: time="2025-07-07T06:05:56.277015362Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 7 06:05:56.277086 containerd[1528]: time="2025-07-07T06:05:56.277045978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 7 06:05:56.278402 containerd[1528]: time="2025-07-07T06:05:56.277063298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 7 06:05:56.278402 containerd[1528]: time="2025-07-07T06:05:56.277261200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 7 06:05:56.278402 containerd[1528]: time="2025-07-07T06:05:56.277297261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 7 06:05:56.278402 containerd[1528]: time="2025-07-07T06:05:56.277341288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 7 06:05:56.278402 containerd[1528]: time="2025-07-07T06:05:56.277355155Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 7 06:05:56.283485 containerd[1528]: time="2025-07-07T06:05:56.277400498Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:05:56.283485 containerd[1528]: time="2025-07-07T06:05:56.278567305Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:05:56.283485 containerd[1528]: time="2025-07-07T06:05:56.278582057Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:05:56.280525 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 7 06:05:56.283814 update-ssh-keys[1590]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:05:56.284144 systemd[1]: Finished sshkeys.service.
Jul 7 06:05:56.287408 containerd[1528]: time="2025-07-07T06:05:56.278591576Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:05:56.287408 containerd[1528]: time="2025-07-07T06:05:56.284581266Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 7 06:05:56.287408 containerd[1528]: time="2025-07-07T06:05:56.287031744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 7 06:05:56.287408 containerd[1528]: time="2025-07-07T06:05:56.287055967Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 7 06:05:56.287408 containerd[1528]: time="2025-07-07T06:05:56.287078930Z" level=info msg="runtime interface created"
Jul 7 06:05:56.288368 containerd[1528]: time="2025-07-07T06:05:56.287084493Z" level=info msg="created NRI interface"
Jul 7 06:05:56.288368 containerd[1528]: time="2025-07-07T06:05:56.287617107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 7 06:05:56.288368 containerd[1528]:
time="2025-07-07T06:05:56.287647201Z" level=info msg="Connect containerd service" Jul 7 06:05:56.288368 containerd[1528]: time="2025-07-07T06:05:56.287705003Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:05:56.289141 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 7 06:05:56.296063 containerd[1528]: time="2025-07-07T06:05:56.294665093Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:05:56.331982 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 7 06:05:56.341013 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:05:56.383007 kernel: Console: switching to colour dummy device 80x25 Jul 7 06:05:56.389293 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 7 06:05:56.389394 kernel: [drm] features: -context_init Jul 7 06:05:56.392026 kernel: [drm] number of scanouts: 1 Jul 7 06:05:56.393458 kernel: [drm] number of cap sets: 0 Jul 7 06:05:56.393092 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:05:56.423419 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:05:56.423806 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:05:56.434449 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:05:56.472991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:05:56.473651 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:05:56.481243 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:05:56.483351 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 06:05:56.484392 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 7 06:05:56.506119 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jul 7 06:05:56.573297 containerd[1528]: time="2025-07-07T06:05:56.573253942Z" level=info msg="Start subscribing containerd event"
Jul 7 06:05:56.573460 containerd[1528]: time="2025-07-07T06:05:56.573447850Z" level=info msg="Start recovering state"
Jul 7 06:05:56.573583 containerd[1528]: time="2025-07-07T06:05:56.573572424Z" level=info msg="Start event monitor"
Jul 7 06:05:56.573661 containerd[1528]: time="2025-07-07T06:05:56.573650804Z" level=info msg="Start cni network conf syncer for default"
Jul 7 06:05:56.573703 containerd[1528]: time="2025-07-07T06:05:56.573695803Z" level=info msg="Start streaming server"
Jul 7 06:05:56.573745 containerd[1528]: time="2025-07-07T06:05:56.573737107Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 7 06:05:56.573798 containerd[1528]: time="2025-07-07T06:05:56.573786114Z" level=info msg="runtime interface starting up..."
Jul 7 06:05:56.574190 containerd[1528]: time="2025-07-07T06:05:56.574159857Z" level=info msg="starting plugins..."
Jul 7 06:05:56.574319 containerd[1528]: time="2025-07-07T06:05:56.574301344Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 7 06:05:56.574504 containerd[1528]: time="2025-07-07T06:05:56.574123362Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 06:05:56.574790 containerd[1528]: time="2025-07-07T06:05:56.574765748Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 06:05:56.575061 containerd[1528]: time="2025-07-07T06:05:56.575043764Z" level=info msg="containerd successfully booted in 0.400597s"
Jul 7 06:05:56.575272 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 06:05:56.588660 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 7 06:05:56.706500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:05:56.710050 coreos-metadata[1492]: Jul 07 06:05:56.710 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2
Jul 7 06:05:56.725324 coreos-metadata[1492]: Jul 07 06:05:56.725 INFO Fetch successful
Jul 7 06:05:56.785753 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 06:05:56.786426 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 7 06:05:56.831985 kernel: EDAC MC: Ver: 3.0.0
Jul 7 06:05:56.861710 systemd-logind[1509]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 7 06:05:56.886153 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:05:56.886813 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:05:56.888365 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:05:56.891017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:05:56.895203 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:05:56.953384 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:05:57.138154 tar[1515]: linux-amd64/README.md
Jul 7 06:05:57.157133 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 06:05:57.244338 systemd-networkd[1441]: eth1: Gained IPv6LL
Jul 7 06:05:57.245163 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Jul 7 06:05:57.247688 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 06:05:57.249532 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 06:05:57.252376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:05:57.254369 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 06:05:57.296836 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 06:05:57.436176 systemd-networkd[1441]: eth0: Gained IPv6LL
Jul 7 06:05:57.438746 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Jul 7 06:05:58.491594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:05:58.492361 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 06:05:58.493312 systemd[1]: Startup finished in 3.319s (kernel) + 6.862s (initrd) + 6.042s (userspace) = 16.224s.
Jul 7 06:05:58.504430 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:05:59.075924 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 06:05:59.079249 systemd[1]: Started sshd@0-24.199.107.192:22-139.178.68.195:51000.service - OpenSSH per-connection server daemon (139.178.68.195:51000).
Jul 7 06:05:59.182515 sshd[1683]: Accepted publickey for core from 139.178.68.195 port 51000 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:05:59.184538 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:59.197050 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 7 06:05:59.200335 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 7 06:05:59.203554 kubelet[1672]: E0707 06:05:59.203494 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:05:59.210199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:05:59.210377 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:05:59.213889 systemd[1]: kubelet.service: Consumed 1.376s CPU time, 266.1M memory peak.
Jul 7 06:05:59.218798 systemd-logind[1509]: New session 1 of user core.
Jul 7 06:05:59.232072 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 7 06:05:59.236831 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 7 06:05:59.258575 (systemd)[1688]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 7 06:05:59.262325 systemd-logind[1509]: New session c1 of user core.
Jul 7 06:05:59.442800 systemd[1688]: Queued start job for default target default.target.
Jul 7 06:05:59.449062 systemd[1688]: Created slice app.slice - User Application Slice.
Jul 7 06:05:59.449115 systemd[1688]: Reached target paths.target - Paths.
Jul 7 06:05:59.449169 systemd[1688]: Reached target timers.target - Timers.
Jul 7 06:05:59.450793 systemd[1688]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 06:05:59.465450 systemd[1688]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 06:05:59.465612 systemd[1688]: Reached target sockets.target - Sockets.
Jul 7 06:05:59.465677 systemd[1688]: Reached target basic.target - Basic System.
Jul 7 06:05:59.465717 systemd[1688]: Reached target default.target - Main User Target.
Jul 7 06:05:59.465753 systemd[1688]: Startup finished in 192ms.
Jul 7 06:05:59.466076 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 06:05:59.474276 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 06:05:59.548314 systemd[1]: Started sshd@1-24.199.107.192:22-139.178.68.195:51012.service - OpenSSH per-connection server daemon (139.178.68.195:51012).
Jul 7 06:05:59.614049 sshd[1699]: Accepted publickey for core from 139.178.68.195 port 51012 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:05:59.616360 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:59.624314 systemd-logind[1509]: New session 2 of user core.
Jul 7 06:05:59.634301 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 06:05:59.697517 sshd[1701]: Connection closed by 139.178.68.195 port 51012
Jul 7 06:05:59.697319 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:59.717274 systemd[1]: sshd@1-24.199.107.192:22-139.178.68.195:51012.service: Deactivated successfully.
Jul 7 06:05:59.719795 systemd[1]: session-2.scope: Deactivated successfully.
Jul 7 06:05:59.720771 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit.
Jul 7 06:05:59.724265 systemd[1]: Started sshd@2-24.199.107.192:22-139.178.68.195:51018.service - OpenSSH per-connection server daemon (139.178.68.195:51018).
Jul 7 06:05:59.727543 systemd-logind[1509]: Removed session 2.
Jul 7 06:05:59.807943 sshd[1707]: Accepted publickey for core from 139.178.68.195 port 51018 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:05:59.809622 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:59.816038 systemd-logind[1509]: New session 3 of user core.
Jul 7 06:05:59.827227 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 06:05:59.884901 sshd[1709]: Connection closed by 139.178.68.195 port 51018
Jul 7 06:05:59.885693 sshd-session[1707]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:59.899231 systemd[1]: sshd@2-24.199.107.192:22-139.178.68.195:51018.service: Deactivated successfully.
Jul 7 06:05:59.901945 systemd[1]: session-3.scope: Deactivated successfully.
Jul 7 06:05:59.903241 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit.
Jul 7 06:05:59.908093 systemd[1]: Started sshd@3-24.199.107.192:22-139.178.68.195:51024.service - OpenSSH per-connection server daemon (139.178.68.195:51024).
Jul 7 06:05:59.909470 systemd-logind[1509]: Removed session 3.
Jul 7 06:05:59.977486 sshd[1715]: Accepted publickey for core from 139.178.68.195 port 51024 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:05:59.979181 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:59.985191 systemd-logind[1509]: New session 4 of user core.
Jul 7 06:05:59.996254 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 06:06:00.059006 sshd[1717]: Connection closed by 139.178.68.195 port 51024
Jul 7 06:06:00.058820 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
Jul 7 06:06:00.073971 systemd[1]: sshd@3-24.199.107.192:22-139.178.68.195:51024.service: Deactivated successfully.
Jul 7 06:06:00.076377 systemd[1]: session-4.scope: Deactivated successfully.
Jul 7 06:06:00.077598 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit.
Jul 7 06:06:00.080512 systemd-logind[1509]: Removed session 4.
Jul 7 06:06:00.082594 systemd[1]: Started sshd@4-24.199.107.192:22-139.178.68.195:51030.service - OpenSSH per-connection server daemon (139.178.68.195:51030).
Jul 7 06:06:00.151226 sshd[1723]: Accepted publickey for core from 139.178.68.195 port 51030 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:06:00.153193 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:06:00.160268 systemd-logind[1509]: New session 5 of user core.
Jul 7 06:06:00.166283 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 06:06:00.240120 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 06:06:00.240472 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:06:00.257786 sudo[1726]: pam_unix(sudo:session): session closed for user root
Jul 7 06:06:00.261108 sshd[1725]: Connection closed by 139.178.68.195 port 51030
Jul 7 06:06:00.262042 sshd-session[1723]: pam_unix(sshd:session): session closed for user core
Jul 7 06:06:00.273874 systemd[1]: sshd@4-24.199.107.192:22-139.178.68.195:51030.service: Deactivated successfully.
Jul 7 06:06:00.276303 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 06:06:00.277510 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit.
Jul 7 06:06:00.283326 systemd[1]: Started sshd@5-24.199.107.192:22-139.178.68.195:51044.service - OpenSSH per-connection server daemon (139.178.68.195:51044).
Jul 7 06:06:00.284732 systemd-logind[1509]: Removed session 5.
Jul 7 06:06:00.350581 sshd[1732]: Accepted publickey for core from 139.178.68.195 port 51044 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:06:00.352537 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:06:00.358827 systemd-logind[1509]: New session 6 of user core.
Jul 7 06:06:00.371330 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 06:06:00.434856 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 06:06:00.435213 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:06:00.441619 sudo[1736]: pam_unix(sudo:session): session closed for user root
Jul 7 06:06:00.450678 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 7 06:06:00.451558 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:06:00.465344 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:06:00.518892 augenrules[1758]: No rules
Jul 7 06:06:00.521194 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:06:00.521505 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:06:00.523570 sudo[1735]: pam_unix(sudo:session): session closed for user root
Jul 7 06:06:00.527259 sshd[1734]: Connection closed by 139.178.68.195 port 51044
Jul 7 06:06:00.527940 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Jul 7 06:06:00.538653 systemd[1]: sshd@5-24.199.107.192:22-139.178.68.195:51044.service: Deactivated successfully.
Jul 7 06:06:00.540758 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 06:06:00.542706 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit.
Jul 7 06:06:00.545708 systemd[1]: Started sshd@6-24.199.107.192:22-139.178.68.195:51058.service - OpenSSH per-connection server daemon (139.178.68.195:51058).
Jul 7 06:06:00.547521 systemd-logind[1509]: Removed session 6.
Jul 7 06:06:00.611439 sshd[1767]: Accepted publickey for core from 139.178.68.195 port 51058 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:06:00.613298 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:06:00.619481 systemd-logind[1509]: New session 7 of user core.
Jul 7 06:06:00.634271 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 06:06:00.695578 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 06:06:00.696020 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:06:01.163204 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 06:06:01.188689 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 06:06:01.585341 dockerd[1788]: time="2025-07-07T06:06:01.584609372Z" level=info msg="Starting up"
Jul 7 06:06:01.587981 dockerd[1788]: time="2025-07-07T06:06:01.587807720Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 7 06:06:01.625428 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3061038274-merged.mount: Deactivated successfully.
Jul 7 06:06:01.699906 dockerd[1788]: time="2025-07-07T06:06:01.699688516Z" level=info msg="Loading containers: start."
Jul 7 06:06:01.710072 kernel: Initializing XFRM netlink socket
Jul 7 06:06:01.948158 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Jul 7 06:06:01.955742 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Jul 7 06:06:02.001769 systemd-networkd[1441]: docker0: Link UP
Jul 7 06:06:02.002074 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Jul 7 06:06:02.005457 dockerd[1788]: time="2025-07-07T06:06:02.005395409Z" level=info msg="Loading containers: done."
Jul 7 06:06:02.023185 dockerd[1788]: time="2025-07-07T06:06:02.022610554Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 06:06:02.023185 dockerd[1788]: time="2025-07-07T06:06:02.022767232Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 7 06:06:02.023185 dockerd[1788]: time="2025-07-07T06:06:02.022891314Z" level=info msg="Initializing buildkit"
Jul 7 06:06:02.049367 dockerd[1788]: time="2025-07-07T06:06:02.049314900Z" level=info msg="Completed buildkit initialization"
Jul 7 06:06:02.062309 dockerd[1788]: time="2025-07-07T06:06:02.062244132Z" level=info msg="Daemon has completed initialization"
Jul 7 06:06:02.062615 dockerd[1788]: time="2025-07-07T06:06:02.062564547Z" level=info msg="API listen on /run/docker.sock"
Jul 7 06:06:02.062811 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 06:06:02.621722 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1952806117-merged.mount: Deactivated successfully.
Jul 7 06:06:02.769469 containerd[1528]: time="2025-07-07T06:06:02.769366883Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 7 06:06:03.314563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2365605213.mount: Deactivated successfully.
Jul 7 06:06:04.524064 containerd[1528]: time="2025-07-07T06:06:04.523992430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:04.525760 containerd[1528]: time="2025-07-07T06:06:04.525391348Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099"
Jul 7 06:06:04.526527 containerd[1528]: time="2025-07-07T06:06:04.526487058Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:04.529413 containerd[1528]: time="2025-07-07T06:06:04.529367554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:04.532008 containerd[1528]: time="2025-07-07T06:06:04.530902204Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.761482248s"
Jul 7 06:06:04.532008 containerd[1528]: time="2025-07-07T06:06:04.530944588Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 7 06:06:04.532008 containerd[1528]: time="2025-07-07T06:06:04.531863085Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 7 06:06:06.098770 containerd[1528]: time="2025-07-07T06:06:06.098680096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:06.100156 containerd[1528]: time="2025-07-07T06:06:06.100073292Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946"
Jul 7 06:06:06.100993 containerd[1528]: time="2025-07-07T06:06:06.100507652Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:06.103310 containerd[1528]: time="2025-07-07T06:06:06.103227965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:06.104988 containerd[1528]: time="2025-07-07T06:06:06.104824662Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.572924702s"
Jul 7 06:06:06.104988 containerd[1528]: time="2025-07-07T06:06:06.104872293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 7 06:06:06.105622 containerd[1528]: time="2025-07-07T06:06:06.105589260Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 7 06:06:07.293261 containerd[1528]: time="2025-07-07T06:06:07.293186268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:07.294736 containerd[1528]: time="2025-07-07T06:06:07.294633924Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:07.294736 containerd[1528]: time="2025-07-07T06:06:07.294705122Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055"
Jul 7 06:06:07.298474 containerd[1528]: time="2025-07-07T06:06:07.298407157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:07.300032 containerd[1528]: time="2025-07-07T06:06:07.299794852Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.194075083s"
Jul 7 06:06:07.300032 containerd[1528]: time="2025-07-07T06:06:07.299906844Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 7 06:06:07.301138 containerd[1528]: time="2025-07-07T06:06:07.301078028Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 7 06:06:08.319315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2023451745.mount: Deactivated successfully.
Jul 7 06:06:08.890311 containerd[1528]: time="2025-07-07T06:06:08.890246575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:08.890921 containerd[1528]: time="2025-07-07T06:06:08.890885886Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746"
Jul 7 06:06:08.891660 containerd[1528]: time="2025-07-07T06:06:08.891623281Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:08.893613 containerd[1528]: time="2025-07-07T06:06:08.893569811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:08.894408 containerd[1528]: time="2025-07-07T06:06:08.894370122Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.593250078s"
Jul 7 06:06:08.894549 containerd[1528]: time="2025-07-07T06:06:08.894528223Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 7 06:06:08.895122 containerd[1528]: time="2025-07-07T06:06:08.895072737Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 7 06:06:09.044021 systemd-resolved[1400]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Jul 7 06:06:09.389617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:06:09.394736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:06:09.406420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1338681420.mount: Deactivated successfully.
Jul 7 06:06:09.621589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:06:09.640811 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:06:09.748170 kubelet[2088]: E0707 06:06:09.748017 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:06:09.758705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:06:09.758858 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:06:09.759873 systemd[1]: kubelet.service: Consumed 258ms CPU time, 107.8M memory peak.
Jul 7 06:06:10.547586 containerd[1528]: time="2025-07-07T06:06:10.547519191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:10.548861 containerd[1528]: time="2025-07-07T06:06:10.548800741Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 7 06:06:10.549784 containerd[1528]: time="2025-07-07T06:06:10.549381057Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:10.554778 containerd[1528]: time="2025-07-07T06:06:10.554700745Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.659595538s" Jul 7 06:06:10.555975 containerd[1528]: time="2025-07-07T06:06:10.555031165Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 7 06:06:10.555975 containerd[1528]: time="2025-07-07T06:06:10.554754614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:10.557228 containerd[1528]: time="2025-07-07T06:06:10.557199376Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:06:10.981812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount606658511.mount: Deactivated successfully. 
Jul 7 06:06:10.992058 containerd[1528]: time="2025-07-07T06:06:10.991979482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:06:10.993271 containerd[1528]: time="2025-07-07T06:06:10.993221016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 06:06:10.994994 containerd[1528]: time="2025-07-07T06:06:10.994047285Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:06:10.996229 containerd[1528]: time="2025-07-07T06:06:10.996171897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:06:10.997118 containerd[1528]: time="2025-07-07T06:06:10.997079103Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 439.658178ms" Jul 7 06:06:10.997118 containerd[1528]: time="2025-07-07T06:06:10.997114931Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 06:06:10.997774 containerd[1528]: time="2025-07-07T06:06:10.997713648Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 7 06:06:11.501324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2818811535.mount: Deactivated 
successfully. Jul 7 06:06:12.156335 systemd-resolved[1400]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jul 7 06:06:13.118407 containerd[1528]: time="2025-07-07T06:06:13.118341010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:13.120260 containerd[1528]: time="2025-07-07T06:06:13.120217185Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 7 06:06:13.120609 containerd[1528]: time="2025-07-07T06:06:13.120557945Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:13.123588 containerd[1528]: time="2025-07-07T06:06:13.123521050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:13.124927 containerd[1528]: time="2025-07-07T06:06:13.124531593Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.12678428s" Jul 7 06:06:13.124927 containerd[1528]: time="2025-07-07T06:06:13.124622221Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 7 06:06:16.512189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:06:16.512460 systemd[1]: kubelet.service: Consumed 258ms CPU time, 107.8M memory peak. 
Jul 7 06:06:16.515926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:06:16.551723 systemd[1]: Reload requested from client PID 2221 ('systemctl') (unit session-7.scope)... Jul 7 06:06:16.551746 systemd[1]: Reloading... Jul 7 06:06:16.706037 zram_generator::config[2264]: No configuration found. Jul 7 06:06:16.827659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:06:17.043862 systemd[1]: Reloading finished in 491 ms. Jul 7 06:06:17.119678 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:06:17.119826 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:06:17.120193 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:06:17.120256 systemd[1]: kubelet.service: Consumed 134ms CPU time, 98.2M memory peak. Jul 7 06:06:17.122453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:06:17.291919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:06:17.302869 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:06:17.349600 kubelet[2318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:06:17.350978 kubelet[2318]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:06:17.350978 kubelet[2318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:06:17.350978 kubelet[2318]: I0707 06:06:17.350097 2318 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:06:18.465376 kubelet[2318]: I0707 06:06:18.465191 2318 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 06:06:18.465929 kubelet[2318]: I0707 06:06:18.465911 2318 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:06:18.466335 kubelet[2318]: I0707 06:06:18.466316 2318 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 06:06:18.496648 kubelet[2318]: I0707 06:06:18.496609 2318 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:06:18.500300 kubelet[2318]: E0707 06:06:18.499242 2318 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://24.199.107.192:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 24.199.107.192:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 06:06:18.514453 kubelet[2318]: I0707 06:06:18.514393 2318 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:06:18.520010 kubelet[2318]: I0707 06:06:18.519976 2318 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:06:18.523892 kubelet[2318]: I0707 06:06:18.523838 2318 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:06:18.527316 kubelet[2318]: I0707 06:06:18.524062 2318 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-6-9e8df2071f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:06:18.527975 kubelet[2318]: I0707 06:06:18.527568 2318 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 
06:06:18.527975 kubelet[2318]: I0707 06:06:18.527591 2318 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 06:06:18.527975 kubelet[2318]: I0707 06:06:18.527739 2318 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:06:18.530469 kubelet[2318]: I0707 06:06:18.530434 2318 kubelet.go:480] "Attempting to sync node with API server" Jul 7 06:06:18.530634 kubelet[2318]: I0707 06:06:18.530622 2318 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:06:18.530719 kubelet[2318]: I0707 06:06:18.530711 2318 kubelet.go:386] "Adding apiserver pod source" Jul 7 06:06:18.532984 kubelet[2318]: I0707 06:06:18.532685 2318 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:06:18.542309 kubelet[2318]: E0707 06:06:18.541941 2318 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://24.199.107.192:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-6-9e8df2071f&limit=500&resourceVersion=0\": dial tcp 24.199.107.192:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 06:06:18.542484 kubelet[2318]: I0707 06:06:18.542397 2318 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:06:18.542908 kubelet[2318]: I0707 06:06:18.542887 2318 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 06:06:18.543618 kubelet[2318]: W0707 06:06:18.543594 2318 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 7 06:06:18.550666 kubelet[2318]: I0707 06:06:18.550624 2318 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:06:18.550801 kubelet[2318]: I0707 06:06:18.550718 2318 server.go:1289] "Started kubelet" Jul 7 06:06:18.563614 kubelet[2318]: I0707 06:06:18.563006 2318 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:06:18.563614 kubelet[2318]: I0707 06:06:18.563477 2318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:06:18.565010 kubelet[2318]: I0707 06:06:18.564027 2318 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:06:18.565612 kubelet[2318]: I0707 06:06:18.565593 2318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:06:18.568916 kubelet[2318]: I0707 06:06:18.568879 2318 server.go:317] "Adding debug handlers to kubelet server" Jul 7 06:06:18.573226 kubelet[2318]: E0707 06:06:18.571115 2318 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.107.192:6443/api/v1/namespaces/default/events\": dial tcp 24.199.107.192:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-6-9e8df2071f.184fe3022cc087d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-6-9e8df2071f,UID:ci-4372.0.1-6-9e8df2071f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-6-9e8df2071f,},FirstTimestamp:2025-07-07 06:06:18.550667219 +0000 UTC m=+1.242672765,LastTimestamp:2025-07-07 06:06:18.550667219 +0000 UTC m=+1.242672765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-6-9e8df2071f,}" Jul 7 06:06:18.573226 kubelet[2318]: E0707 06:06:18.572760 2318 reflector.go:200] "Failed to watch" 
err="failed to list *v1.Service: Get \"https://24.199.107.192:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.107.192:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 06:06:18.573226 kubelet[2318]: I0707 06:06:18.573064 2318 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:06:18.580361 kubelet[2318]: E0707 06:06:18.579018 2318 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-6-9e8df2071f\" not found" Jul 7 06:06:18.580361 kubelet[2318]: I0707 06:06:18.579073 2318 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:06:18.580361 kubelet[2318]: I0707 06:06:18.579335 2318 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:06:18.580361 kubelet[2318]: I0707 06:06:18.579408 2318 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:06:18.580361 kubelet[2318]: E0707 06:06:18.580120 2318 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://24.199.107.192:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.107.192:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 06:06:18.583218 kubelet[2318]: E0707 06:06:18.582655 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.107.192:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-6-9e8df2071f?timeout=10s\": dial tcp 24.199.107.192:6443: connect: connection refused" interval="200ms" Jul 7 06:06:18.585608 kubelet[2318]: I0707 06:06:18.585385 2318 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:06:18.591912 kubelet[2318]: I0707 06:06:18.589635 2318 factory.go:223] Registration of the containerd container factory successfully Jul 7 06:06:18.591912 kubelet[2318]: I0707 06:06:18.589657 2318 factory.go:223] Registration of the systemd container factory successfully Jul 7 06:06:18.599082 kubelet[2318]: I0707 06:06:18.598422 2318 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 06:06:18.599675 kubelet[2318]: I0707 06:06:18.599631 2318 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 06:06:18.599675 kubelet[2318]: I0707 06:06:18.599657 2318 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 06:06:18.599769 kubelet[2318]: I0707 06:06:18.599683 2318 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 06:06:18.599769 kubelet[2318]: I0707 06:06:18.599690 2318 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 06:06:18.599769 kubelet[2318]: E0707 06:06:18.599744 2318 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:06:18.612404 kubelet[2318]: E0707 06:06:18.612375 2318 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:06:18.613021 kubelet[2318]: E0707 06:06:18.612724 2318 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://24.199.107.192:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.107.192:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 06:06:18.618783 kubelet[2318]: I0707 06:06:18.618759 2318 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:06:18.618977 kubelet[2318]: I0707 06:06:18.618945 2318 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:06:18.619054 kubelet[2318]: I0707 06:06:18.619046 2318 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:06:18.620385 kubelet[2318]: I0707 06:06:18.620362 2318 policy_none.go:49] "None policy: Start" Jul 7 06:06:18.620556 kubelet[2318]: I0707 06:06:18.620538 2318 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:06:18.620650 kubelet[2318]: I0707 06:06:18.620638 2318 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:06:18.627767 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:06:18.637650 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:06:18.641725 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 7 06:06:18.653380 kubelet[2318]: E0707 06:06:18.653345 2318 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 06:06:18.653834 kubelet[2318]: I0707 06:06:18.653608 2318 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:06:18.653834 kubelet[2318]: I0707 06:06:18.653625 2318 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:06:18.654632 kubelet[2318]: I0707 06:06:18.654609 2318 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:06:18.657365 kubelet[2318]: E0707 06:06:18.656897 2318 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 06:06:18.657365 kubelet[2318]: E0707 06:06:18.656969 2318 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-6-9e8df2071f\" not found" Jul 7 06:06:18.714327 systemd[1]: Created slice kubepods-burstable-podf1a7de27913daf74878204106d7cbce2.slice - libcontainer container kubepods-burstable-podf1a7de27913daf74878204106d7cbce2.slice. Jul 7 06:06:18.729095 kubelet[2318]: E0707 06:06:18.728943 2318 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-6-9e8df2071f\" not found" node="ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.735299 systemd[1]: Created slice kubepods-burstable-pode80e2b824e534bec283f1ae670630508.slice - libcontainer container kubepods-burstable-pode80e2b824e534bec283f1ae670630508.slice. 
Jul 7 06:06:18.747135 kubelet[2318]: E0707 06:06:18.747084 2318 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-6-9e8df2071f\" not found" node="ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.749980 systemd[1]: Created slice kubepods-burstable-podf6dc363d78c7e79a46991ec36fe1c78a.slice - libcontainer container kubepods-burstable-podf6dc363d78c7e79a46991ec36fe1c78a.slice. Jul 7 06:06:18.756403 kubelet[2318]: E0707 06:06:18.756205 2318 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-6-9e8df2071f\" not found" node="ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.756984 kubelet[2318]: I0707 06:06:18.756891 2318 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.757704 kubelet[2318]: E0707 06:06:18.757651 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.107.192:6443/api/v1/nodes\": dial tcp 24.199.107.192:6443: connect: connection refused" node="ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.780069 kubelet[2318]: I0707 06:06:18.780005 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e80e2b824e534bec283f1ae670630508-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-6-9e8df2071f\" (UID: \"e80e2b824e534bec283f1ae670630508\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.780231 kubelet[2318]: I0707 06:06:18.780075 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e80e2b824e534bec283f1ae670630508-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-6-9e8df2071f\" (UID: \"e80e2b824e534bec283f1ae670630508\") " 
pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.780231 kubelet[2318]: I0707 06:06:18.780117 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1a7de27913daf74878204106d7cbce2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-6-9e8df2071f\" (UID: \"f1a7de27913daf74878204106d7cbce2\") " pod="kube-system/kube-apiserver-ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.780231 kubelet[2318]: I0707 06:06:18.780145 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e80e2b824e534bec283f1ae670630508-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-6-9e8df2071f\" (UID: \"e80e2b824e534bec283f1ae670630508\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.780231 kubelet[2318]: I0707 06:06:18.780171 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e80e2b824e534bec283f1ae670630508-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-6-9e8df2071f\" (UID: \"e80e2b824e534bec283f1ae670630508\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.780231 kubelet[2318]: I0707 06:06:18.780197 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e80e2b824e534bec283f1ae670630508-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-6-9e8df2071f\" (UID: \"e80e2b824e534bec283f1ae670630508\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.780381 kubelet[2318]: I0707 06:06:18.780222 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f6dc363d78c7e79a46991ec36fe1c78a-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-6-9e8df2071f\" (UID: \"f6dc363d78c7e79a46991ec36fe1c78a\") " pod="kube-system/kube-scheduler-ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.780381 kubelet[2318]: I0707 06:06:18.780247 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1a7de27913daf74878204106d7cbce2-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-6-9e8df2071f\" (UID: \"f1a7de27913daf74878204106d7cbce2\") " pod="kube-system/kube-apiserver-ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.780381 kubelet[2318]: I0707 06:06:18.780271 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1a7de27913daf74878204106d7cbce2-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-6-9e8df2071f\" (UID: \"f1a7de27913daf74878204106d7cbce2\") " pod="kube-system/kube-apiserver-ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.783863 kubelet[2318]: E0707 06:06:18.783800 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.107.192:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-6-9e8df2071f?timeout=10s\": dial tcp 24.199.107.192:6443: connect: connection refused" interval="400ms" Jul 7 06:06:18.959703 kubelet[2318]: I0707 06:06:18.959671 2318 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:18.960182 kubelet[2318]: E0707 06:06:18.960151 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.107.192:6443/api/v1/nodes\": dial tcp 24.199.107.192:6443: connect: connection refused" node="ci-4372.0.1-6-9e8df2071f" Jul 7 06:06:19.030695 kubelet[2318]: E0707 06:06:19.030224 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:19.031448 containerd[1528]: time="2025-07-07T06:06:19.031402447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-6-9e8df2071f,Uid:f1a7de27913daf74878204106d7cbce2,Namespace:kube-system,Attempt:0,}" Jul 7 06:06:19.048420 kubelet[2318]: E0707 06:06:19.048350 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:19.054908 containerd[1528]: time="2025-07-07T06:06:19.054837620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-6-9e8df2071f,Uid:e80e2b824e534bec283f1ae670630508,Namespace:kube-system,Attempt:0,}" Jul 7 06:06:19.059979 kubelet[2318]: E0707 06:06:19.057694 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:19.061343 containerd[1528]: time="2025-07-07T06:06:19.060479283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-6-9e8df2071f,Uid:f6dc363d78c7e79a46991ec36fe1c78a,Namespace:kube-system,Attempt:0,}" Jul 7 06:06:19.154895 containerd[1528]: time="2025-07-07T06:06:19.154785946Z" level=info msg="connecting to shim 429021832213bce8315e2ecaa2a2f5fe49bd3e7046c1f456159782c9da4456ff" address="unix:///run/containerd/s/fa889133d8dcabf43a9131df224316c5cf9749d729e1cd53a65434d979995c97" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:06:19.158187 containerd[1528]: time="2025-07-07T06:06:19.158136222Z" level=info msg="connecting to shim 31c9def5c9dd044f4a4a060ea1969150cf77a4d91c971faf3bd86ee62d2a7b8e" address="unix:///run/containerd/s/bc982c294e80def8a26b6d2ac25e5db7a1620b993f8d7c08ac7f34a4398d4f14" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:06:19.161118 
containerd[1528]: time="2025-07-07T06:06:19.161072559Z" level=info msg="connecting to shim 8ade4c52a8055ee0ba5273816010c69ca81e6347a14affc05fe30bb23ab9c1a3" address="unix:///run/containerd/s/855dbaca6782707b0fe837d2b015316342fac94ea03a2eac4c79febacd63fc15" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:06:19.186278 kubelet[2318]: E0707 06:06:19.186238 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.107.192:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-6-9e8df2071f?timeout=10s\": dial tcp 24.199.107.192:6443: connect: connection refused" interval="800ms"
Jul 7 06:06:19.266225 systemd[1]: Started cri-containerd-31c9def5c9dd044f4a4a060ea1969150cf77a4d91c971faf3bd86ee62d2a7b8e.scope - libcontainer container 31c9def5c9dd044f4a4a060ea1969150cf77a4d91c971faf3bd86ee62d2a7b8e.
Jul 7 06:06:19.268345 systemd[1]: Started cri-containerd-429021832213bce8315e2ecaa2a2f5fe49bd3e7046c1f456159782c9da4456ff.scope - libcontainer container 429021832213bce8315e2ecaa2a2f5fe49bd3e7046c1f456159782c9da4456ff.
Jul 7 06:06:19.269867 systemd[1]: Started cri-containerd-8ade4c52a8055ee0ba5273816010c69ca81e6347a14affc05fe30bb23ab9c1a3.scope - libcontainer container 8ade4c52a8055ee0ba5273816010c69ca81e6347a14affc05fe30bb23ab9c1a3.
Jul 7 06:06:19.359062 containerd[1528]: time="2025-07-07T06:06:19.358637448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-6-9e8df2071f,Uid:f1a7de27913daf74878204106d7cbce2,Namespace:kube-system,Attempt:0,} returns sandbox id \"429021832213bce8315e2ecaa2a2f5fe49bd3e7046c1f456159782c9da4456ff\""
Jul 7 06:06:19.361059 kubelet[2318]: E0707 06:06:19.361028 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:19.361887 kubelet[2318]: I0707 06:06:19.361802 2318 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:19.363104 kubelet[2318]: E0707 06:06:19.362174 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.107.192:6443/api/v1/nodes\": dial tcp 24.199.107.192:6443: connect: connection refused" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:19.366418 containerd[1528]: time="2025-07-07T06:06:19.366383989Z" level=info msg="CreateContainer within sandbox \"429021832213bce8315e2ecaa2a2f5fe49bd3e7046c1f456159782c9da4456ff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 7 06:06:19.391987 containerd[1528]: time="2025-07-07T06:06:19.391182329Z" level=info msg="Container 0b6ccfb80e9316ab004b138ad75e1f6e4860d7093c633fe3970e8476e668eea8: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:06:19.403855 containerd[1528]: time="2025-07-07T06:06:19.403796751Z" level=info msg="CreateContainer within sandbox \"429021832213bce8315e2ecaa2a2f5fe49bd3e7046c1f456159782c9da4456ff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0b6ccfb80e9316ab004b138ad75e1f6e4860d7093c633fe3970e8476e668eea8\""
Jul 7 06:06:19.407175 containerd[1528]: time="2025-07-07T06:06:19.407113592Z" level=info msg="StartContainer for \"0b6ccfb80e9316ab004b138ad75e1f6e4860d7093c633fe3970e8476e668eea8\""
Jul 7 06:06:19.410107 containerd[1528]: time="2025-07-07T06:06:19.409601138Z" level=info msg="connecting to shim 0b6ccfb80e9316ab004b138ad75e1f6e4860d7093c633fe3970e8476e668eea8" address="unix:///run/containerd/s/fa889133d8dcabf43a9131df224316c5cf9749d729e1cd53a65434d979995c97" protocol=ttrpc version=3
Jul 7 06:06:19.411434 containerd[1528]: time="2025-07-07T06:06:19.411391315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-6-9e8df2071f,Uid:e80e2b824e534bec283f1ae670630508,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ade4c52a8055ee0ba5273816010c69ca81e6347a14affc05fe30bb23ab9c1a3\""
Jul 7 06:06:19.412495 kubelet[2318]: E0707 06:06:19.412464 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:19.417999 containerd[1528]: time="2025-07-07T06:06:19.417053613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-6-9e8df2071f,Uid:f6dc363d78c7e79a46991ec36fe1c78a,Namespace:kube-system,Attempt:0,} returns sandbox id \"31c9def5c9dd044f4a4a060ea1969150cf77a4d91c971faf3bd86ee62d2a7b8e\""
Jul 7 06:06:19.418998 containerd[1528]: time="2025-07-07T06:06:19.418935751Z" level=info msg="CreateContainer within sandbox \"8ade4c52a8055ee0ba5273816010c69ca81e6347a14affc05fe30bb23ab9c1a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 7 06:06:19.419212 kubelet[2318]: E0707 06:06:19.419140 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:19.435776 containerd[1528]: time="2025-07-07T06:06:19.435735336Z" level=info msg="Container eb969810e331768125c7cfdc32d705c6b9a8f0ad8e07b3afff588f3509b78a4a: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:06:19.440399 systemd[1]: Started cri-containerd-0b6ccfb80e9316ab004b138ad75e1f6e4860d7093c633fe3970e8476e668eea8.scope - libcontainer container 0b6ccfb80e9316ab004b138ad75e1f6e4860d7093c633fe3970e8476e668eea8.
Jul 7 06:06:19.448008 containerd[1528]: time="2025-07-07T06:06:19.447798338Z" level=info msg="CreateContainer within sandbox \"8ade4c52a8055ee0ba5273816010c69ca81e6347a14affc05fe30bb23ab9c1a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eb969810e331768125c7cfdc32d705c6b9a8f0ad8e07b3afff588f3509b78a4a\""
Jul 7 06:06:19.448848 containerd[1528]: time="2025-07-07T06:06:19.448795445Z" level=info msg="StartContainer for \"eb969810e331768125c7cfdc32d705c6b9a8f0ad8e07b3afff588f3509b78a4a\""
Jul 7 06:06:19.450929 containerd[1528]: time="2025-07-07T06:06:19.450865420Z" level=info msg="connecting to shim eb969810e331768125c7cfdc32d705c6b9a8f0ad8e07b3afff588f3509b78a4a" address="unix:///run/containerd/s/855dbaca6782707b0fe837d2b015316342fac94ea03a2eac4c79febacd63fc15" protocol=ttrpc version=3
Jul 7 06:06:19.452471 containerd[1528]: time="2025-07-07T06:06:19.452399886Z" level=info msg="CreateContainer within sandbox \"31c9def5c9dd044f4a4a060ea1969150cf77a4d91c971faf3bd86ee62d2a7b8e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 7 06:06:19.462229 containerd[1528]: time="2025-07-07T06:06:19.462185364Z" level=info msg="Container e856065d7767537a108cdb1abadd707a61d0cbb71a1f917379381a1eb931ed5d: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:06:19.471946 containerd[1528]: time="2025-07-07T06:06:19.471833200Z" level=info msg="CreateContainer within sandbox \"31c9def5c9dd044f4a4a060ea1969150cf77a4d91c971faf3bd86ee62d2a7b8e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e856065d7767537a108cdb1abadd707a61d0cbb71a1f917379381a1eb931ed5d\""
Jul 7 06:06:19.477216 containerd[1528]: time="2025-07-07T06:06:19.477125384Z" level=info msg="StartContainer for \"e856065d7767537a108cdb1abadd707a61d0cbb71a1f917379381a1eb931ed5d\""
Jul 7 06:06:19.480629 containerd[1528]: time="2025-07-07T06:06:19.480573867Z" level=info msg="connecting to shim e856065d7767537a108cdb1abadd707a61d0cbb71a1f917379381a1eb931ed5d" address="unix:///run/containerd/s/bc982c294e80def8a26b6d2ac25e5db7a1620b993f8d7c08ac7f34a4398d4f14" protocol=ttrpc version=3
Jul 7 06:06:19.488274 systemd[1]: Started cri-containerd-eb969810e331768125c7cfdc32d705c6b9a8f0ad8e07b3afff588f3509b78a4a.scope - libcontainer container eb969810e331768125c7cfdc32d705c6b9a8f0ad8e07b3afff588f3509b78a4a.
Jul 7 06:06:19.520212 systemd[1]: Started cri-containerd-e856065d7767537a108cdb1abadd707a61d0cbb71a1f917379381a1eb931ed5d.scope - libcontainer container e856065d7767537a108cdb1abadd707a61d0cbb71a1f917379381a1eb931ed5d.
Jul 7 06:06:19.552004 containerd[1528]: time="2025-07-07T06:06:19.551925706Z" level=info msg="StartContainer for \"0b6ccfb80e9316ab004b138ad75e1f6e4860d7093c633fe3970e8476e668eea8\" returns successfully"
Jul 7 06:06:19.624334 containerd[1528]: time="2025-07-07T06:06:19.623281583Z" level=info msg="StartContainer for \"eb969810e331768125c7cfdc32d705c6b9a8f0ad8e07b3afff588f3509b78a4a\" returns successfully"
Jul 7 06:06:19.641010 kubelet[2318]: E0707 06:06:19.640662 2318 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-6-9e8df2071f\" not found" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:19.641010 kubelet[2318]: E0707 06:06:19.640938 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:19.643184 kubelet[2318]: E0707 06:06:19.643157 2318 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://24.199.107.192:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.107.192:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 7 06:06:19.658614 containerd[1528]: time="2025-07-07T06:06:19.658559833Z" level=info msg="StartContainer for \"e856065d7767537a108cdb1abadd707a61d0cbb71a1f917379381a1eb931ed5d\" returns successfully"
Jul 7 06:06:19.747048 kubelet[2318]: E0707 06:06:19.746997 2318 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://24.199.107.192:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.107.192:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 7 06:06:20.163439 kubelet[2318]: I0707 06:06:20.163407 2318 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:20.651141 kubelet[2318]: E0707 06:06:20.651028 2318 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-6-9e8df2071f\" not found" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:20.651517 kubelet[2318]: E0707 06:06:20.651193 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:20.651517 kubelet[2318]: E0707 06:06:20.651406 2318 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-6-9e8df2071f\" not found" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:20.651517 kubelet[2318]: E0707 06:06:20.651489 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:20.652017 kubelet[2318]: E0707 06:06:20.651925 2318 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-6-9e8df2071f\" not found" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:20.652093 kubelet[2318]: E0707 06:06:20.652081 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:21.653295 kubelet[2318]: E0707 06:06:21.653259 2318 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-6-9e8df2071f\" not found" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:21.654110 kubelet[2318]: E0707 06:06:21.653406 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:21.654110 kubelet[2318]: E0707 06:06:21.653666 2318 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-6-9e8df2071f\" not found" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:21.654110 kubelet[2318]: E0707 06:06:21.653748 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:22.512057 kubelet[2318]: E0707 06:06:22.512012 2318 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.1-6-9e8df2071f\" not found" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:22.559573 kubelet[2318]: I0707 06:06:22.559508 2318 apiserver.go:52] "Watching apiserver"
Jul 7 06:06:22.579833 kubelet[2318]: I0707 06:06:22.579774 2318 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 06:06:22.600973 kubelet[2318]: I0707 06:06:22.600216 2318 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:22.600973 kubelet[2318]: E0707 06:06:22.600259 2318 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.0.1-6-9e8df2071f\": node \"ci-4372.0.1-6-9e8df2071f\" not found"
Jul 7 06:06:22.653529 kubelet[2318]: I0707 06:06:22.653487 2318 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:22.665311 kubelet[2318]: E0707 06:06:22.665264 2318 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-6-9e8df2071f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:22.665525 kubelet[2318]: E0707 06:06:22.665505 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:22.680401 kubelet[2318]: I0707 06:06:22.680346 2318 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:22.684271 kubelet[2318]: E0707 06:06:22.684200 2318 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-6-9e8df2071f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:22.684271 kubelet[2318]: I0707 06:06:22.684234 2318 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:22.688610 kubelet[2318]: E0707 06:06:22.688448 2318 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-6-9e8df2071f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:22.688946 kubelet[2318]: I0707 06:06:22.688483 2318 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:22.692234 kubelet[2318]: E0707 06:06:22.692194 2318 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-6-9e8df2071f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:24.754666 systemd[1]: Reload requested from client PID 2595 ('systemctl') (unit session-7.scope)...
Jul 7 06:06:24.754683 systemd[1]: Reloading...
Jul 7 06:06:24.878994 zram_generator::config[2647]: No configuration found.
Jul 7 06:06:24.984596 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:06:25.132903 systemd[1]: Reloading finished in 377 ms.
Jul 7 06:06:25.171883 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:06:25.184764 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 06:06:25.185151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:06:25.185238 systemd[1]: kubelet.service: Consumed 1.684s CPU time, 128.2M memory peak.
Jul 7 06:06:25.187922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:06:25.365638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:06:25.379676 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 06:06:25.444612 kubelet[2689]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:06:25.444612 kubelet[2689]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 7 06:06:25.444612 kubelet[2689]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:06:25.445099 kubelet[2689]: I0707 06:06:25.445047 2689 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 06:06:25.454025 kubelet[2689]: I0707 06:06:25.453809 2689 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 7 06:06:25.454330 kubelet[2689]: I0707 06:06:25.454316 2689 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 06:06:25.454907 kubelet[2689]: I0707 06:06:25.454854 2689 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 7 06:06:25.456051 kubelet[2689]: I0707 06:06:25.456028 2689 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 7 06:06:25.462523 kubelet[2689]: I0707 06:06:25.462177 2689 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 06:06:25.468735 kubelet[2689]: I0707 06:06:25.468704 2689 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 06:06:25.473506 kubelet[2689]: I0707 06:06:25.473466 2689 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 06:06:25.473844 kubelet[2689]: I0707 06:06:25.473800 2689 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 06:06:25.474079 kubelet[2689]: I0707 06:06:25.473845 2689 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-6-9e8df2071f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 06:06:25.474183 kubelet[2689]: I0707 06:06:25.474086 2689 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 06:06:25.474183 kubelet[2689]: I0707 06:06:25.474098 2689 container_manager_linux.go:303] "Creating device plugin manager"
Jul 7 06:06:25.474993 kubelet[2689]: I0707 06:06:25.474959 2689 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:06:25.476044 kubelet[2689]: I0707 06:06:25.475198 2689 kubelet.go:480] "Attempting to sync node with API server"
Jul 7 06:06:25.476044 kubelet[2689]: I0707 06:06:25.475228 2689 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 06:06:25.476044 kubelet[2689]: I0707 06:06:25.475284 2689 kubelet.go:386] "Adding apiserver pod source"
Jul 7 06:06:25.476044 kubelet[2689]: I0707 06:06:25.475346 2689 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 06:06:25.485670 kubelet[2689]: I0707 06:06:25.485631 2689 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 06:06:25.486334 kubelet[2689]: I0707 06:06:25.486310 2689 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 7 06:06:25.493278 kubelet[2689]: I0707 06:06:25.493248 2689 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 7 06:06:25.493392 kubelet[2689]: I0707 06:06:25.493316 2689 server.go:1289] "Started kubelet"
Jul 7 06:06:25.495018 kubelet[2689]: I0707 06:06:25.494541 2689 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 06:06:25.496241 kubelet[2689]: I0707 06:06:25.496172 2689 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 06:06:25.496603 kubelet[2689]: I0707 06:06:25.496578 2689 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 06:06:25.498144 kubelet[2689]: I0707 06:06:25.498124 2689 server.go:317] "Adding debug handlers to kubelet server"
Jul 7 06:06:25.511377 kubelet[2689]: I0707 06:06:25.511353 2689 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 06:06:25.514158 kubelet[2689]: E0707 06:06:25.513320 2689 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 06:06:25.514158 kubelet[2689]: I0707 06:06:25.513501 2689 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 7 06:06:25.514158 kubelet[2689]: I0707 06:06:25.513617 2689 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 06:06:25.516195 kubelet[2689]: I0707 06:06:25.516174 2689 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 7 06:06:25.517827 kubelet[2689]: I0707 06:06:25.517808 2689 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 06:06:25.521559 kubelet[2689]: I0707 06:06:25.521533 2689 factory.go:223] Registration of the systemd container factory successfully
Jul 7 06:06:25.521667 kubelet[2689]: I0707 06:06:25.521631 2689 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 06:06:25.524022 kubelet[2689]: I0707 06:06:25.523977 2689 factory.go:223] Registration of the containerd container factory successfully
Jul 7 06:06:25.543904 kubelet[2689]: I0707 06:06:25.543853 2689 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 7 06:06:25.546682 kubelet[2689]: I0707 06:06:25.546652 2689 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 7 06:06:25.546914 kubelet[2689]: I0707 06:06:25.546848 2689 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 7 06:06:25.546914 kubelet[2689]: I0707 06:06:25.546881 2689 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 06:06:25.546914 kubelet[2689]: I0707 06:06:25.546890 2689 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 7 06:06:25.547113 kubelet[2689]: E0707 06:06:25.547005 2689 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 06:06:25.584835 kubelet[2689]: I0707 06:06:25.584808 2689 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 7 06:06:25.584835 kubelet[2689]: I0707 06:06:25.584825 2689 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 7 06:06:25.585055 kubelet[2689]: I0707 06:06:25.584869 2689 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:06:25.585085 kubelet[2689]: I0707 06:06:25.585061 2689 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 06:06:25.585114 kubelet[2689]: I0707 06:06:25.585072 2689 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 06:06:25.585114 kubelet[2689]: I0707 06:06:25.585102 2689 policy_none.go:49] "None policy: Start"
Jul 7 06:06:25.585114 kubelet[2689]: I0707 06:06:25.585113 2689 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 7 06:06:25.585195 kubelet[2689]: I0707 06:06:25.585124 2689 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 06:06:25.585270 kubelet[2689]: I0707 06:06:25.585255 2689 state_mem.go:75] "Updated machine memory state"
Jul 7 06:06:25.590330 kubelet[2689]: E0707 06:06:25.590298 2689 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 7 06:06:25.590503 kubelet[2689]: I0707 06:06:25.590488 2689 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 06:06:25.590542 kubelet[2689]: I0707 06:06:25.590503 2689 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 06:06:25.593043 kubelet[2689]: I0707 06:06:25.593016 2689 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 06:06:25.594822 kubelet[2689]: E0707 06:06:25.594793 2689 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 06:06:25.648901 kubelet[2689]: I0707 06:06:25.648429 2689 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.648901 kubelet[2689]: I0707 06:06:25.648580 2689 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.649566 kubelet[2689]: I0707 06:06:25.649488 2689 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.654930 kubelet[2689]: I0707 06:06:25.654700 2689 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 7 06:06:25.655833 kubelet[2689]: I0707 06:06:25.655775 2689 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 7 06:06:25.657154 kubelet[2689]: I0707 06:06:25.657092 2689 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 7 06:06:25.701842 kubelet[2689]: I0707 06:06:25.699823 2689 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.708236 kubelet[2689]: I0707 06:06:25.708003 2689 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.708236 kubelet[2689]: I0707 06:06:25.708130 2689 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.719001 kubelet[2689]: I0707 06:06:25.718853 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e80e2b824e534bec283f1ae670630508-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-6-9e8df2071f\" (UID: \"e80e2b824e534bec283f1ae670630508\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.719807 kubelet[2689]: I0707 06:06:25.719760 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1a7de27913daf74878204106d7cbce2-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-6-9e8df2071f\" (UID: \"f1a7de27913daf74878204106d7cbce2\") " pod="kube-system/kube-apiserver-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.720139 kubelet[2689]: I0707 06:06:25.719969 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1a7de27913daf74878204106d7cbce2-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-6-9e8df2071f\" (UID: \"f1a7de27913daf74878204106d7cbce2\") " pod="kube-system/kube-apiserver-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.720139 kubelet[2689]: I0707 06:06:25.720000 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1a7de27913daf74878204106d7cbce2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-6-9e8df2071f\" (UID: \"f1a7de27913daf74878204106d7cbce2\") " pod="kube-system/kube-apiserver-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.720139 kubelet[2689]: I0707 06:06:25.720019 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e80e2b824e534bec283f1ae670630508-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-6-9e8df2071f\" (UID: \"e80e2b824e534bec283f1ae670630508\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.720139 kubelet[2689]: I0707 06:06:25.720040 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e80e2b824e534bec283f1ae670630508-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-6-9e8df2071f\" (UID: \"e80e2b824e534bec283f1ae670630508\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.720139 kubelet[2689]: I0707 06:06:25.720056 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6dc363d78c7e79a46991ec36fe1c78a-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-6-9e8df2071f\" (UID: \"f6dc363d78c7e79a46991ec36fe1c78a\") " pod="kube-system/kube-scheduler-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.720304 kubelet[2689]: I0707 06:06:25.720078 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e80e2b824e534bec283f1ae670630508-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-6-9e8df2071f\" (UID: \"e80e2b824e534bec283f1ae670630508\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.720304 kubelet[2689]: I0707 06:06:25.720094 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e80e2b824e534bec283f1ae670630508-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-6-9e8df2071f\" (UID: \"e80e2b824e534bec283f1ae670630508\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:25.959098 kubelet[2689]: E0707 06:06:25.956206 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:25.959699 kubelet[2689]: E0707 06:06:25.959666 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:25.960166 kubelet[2689]: E0707 06:06:25.960147 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:26.476169 kubelet[2689]: I0707 06:06:26.476034 2689 apiserver.go:52] "Watching apiserver"
Jul 7 06:06:26.518637 kubelet[2689]: I0707 06:06:26.518573 2689 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 06:06:26.567872 kubelet[2689]: E0707 06:06:26.567823 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:26.568743 kubelet[2689]: I0707 06:06:26.568408 2689 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:26.569931 kubelet[2689]: I0707 06:06:26.568579 2689 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:26.579830 kubelet[2689]: I0707 06:06:26.579524 2689 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 7 06:06:26.579830 kubelet[2689]: E0707 06:06:26.579588 2689 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-6-9e8df2071f\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:26.580302 kubelet[2689]: E0707 06:06:26.580187 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:26.583886 kubelet[2689]: I0707 06:06:26.583564 2689 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 7 06:06:26.583886 kubelet[2689]: E0707 06:06:26.583633 2689 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-6-9e8df2071f\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-6-9e8df2071f"
Jul 7 06:06:26.583886 kubelet[2689]: E0707 06:06:26.583801 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:26.586029 kubelet[2689]: I0707 06:06:26.584832 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-6-9e8df2071f" podStartSLOduration=1.5848002540000001 podStartE2EDuration="1.584800254s" podCreationTimestamp="2025-07-07 06:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:06:26.580868608 +0000 UTC m=+1.191781148" watchObservedRunningTime="2025-07-07 06:06:26.584800254 +0000 UTC m=+1.195712771"
Jul 7 06:06:26.600763 kubelet[2689]: I0707 06:06:26.600521 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-6-9e8df2071f" podStartSLOduration=1.6004969 podStartE2EDuration="1.6004969s" podCreationTimestamp="2025-07-07 06:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:06:26.596608543 +0000 UTC m=+1.207521100" watchObservedRunningTime="2025-07-07 06:06:26.6004969 +0000 UTC m=+1.211409437"
Jul 7 06:06:26.621641 kubelet[2689]: I0707 06:06:26.621473 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.1-6-9e8df2071f" podStartSLOduration=1.621454151 podStartE2EDuration="1.621454151s" podCreationTimestamp="2025-07-07 06:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:06:26.61533918 +0000 UTC m=+1.226251719" watchObservedRunningTime="2025-07-07 06:06:26.621454151 +0000 UTC m=+1.232366687"
Jul 7 06:06:27.569652 kubelet[2689]: E0707 06:06:27.569392 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:27.569652 kubelet[2689]: E0707 06:06:27.569512 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:28.889583 kubelet[2689]: E0707 06:06:28.889399 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:30.661064 kubelet[2689]: I0707 06:06:30.660935 2689 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 06:06:30.662675 containerd[1528]: time="2025-07-07T06:06:30.662509294Z" level=info msg="No cni config template is specified,
wait for other system components to drop the config." Jul 7 06:06:30.664081 kubelet[2689]: I0707 06:06:30.663939 2689 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:06:31.738355 systemd[1]: Created slice kubepods-besteffort-poda6f015b7_2e41_4cdb_ae64_8fecb161df2c.slice - libcontainer container kubepods-besteffort-poda6f015b7_2e41_4cdb_ae64_8fecb161df2c.slice. Jul 7 06:06:31.772869 kubelet[2689]: E0707 06:06:31.772833 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:31.796359 kubelet[2689]: I0707 06:06:31.795986 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a6f015b7-2e41-4cdb-ae64-8fecb161df2c-kube-proxy\") pod \"kube-proxy-vm8br\" (UID: \"a6f015b7-2e41-4cdb-ae64-8fecb161df2c\") " pod="kube-system/kube-proxy-vm8br" Jul 7 06:06:31.796359 kubelet[2689]: I0707 06:06:31.796032 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnczv\" (UniqueName: \"kubernetes.io/projected/a6f015b7-2e41-4cdb-ae64-8fecb161df2c-kube-api-access-bnczv\") pod \"kube-proxy-vm8br\" (UID: \"a6f015b7-2e41-4cdb-ae64-8fecb161df2c\") " pod="kube-system/kube-proxy-vm8br" Jul 7 06:06:31.796359 kubelet[2689]: I0707 06:06:31.796073 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6f015b7-2e41-4cdb-ae64-8fecb161df2c-xtables-lock\") pod \"kube-proxy-vm8br\" (UID: \"a6f015b7-2e41-4cdb-ae64-8fecb161df2c\") " pod="kube-system/kube-proxy-vm8br" Jul 7 06:06:31.796359 kubelet[2689]: I0707 06:06:31.796100 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/a6f015b7-2e41-4cdb-ae64-8fecb161df2c-lib-modules\") pod \"kube-proxy-vm8br\" (UID: \"a6f015b7-2e41-4cdb-ae64-8fecb161df2c\") " pod="kube-system/kube-proxy-vm8br" Jul 7 06:06:31.866180 systemd[1]: Created slice kubepods-besteffort-pod8963eb62_315f_4387_9a7a_417932634459.slice - libcontainer container kubepods-besteffort-pod8963eb62_315f_4387_9a7a_417932634459.slice. Jul 7 06:06:31.998063 kubelet[2689]: I0707 06:06:31.997914 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8963eb62-315f-4387-9a7a-417932634459-var-lib-calico\") pod \"tigera-operator-747864d56d-mvzq6\" (UID: \"8963eb62-315f-4387-9a7a-417932634459\") " pod="tigera-operator/tigera-operator-747864d56d-mvzq6" Jul 7 06:06:31.998063 kubelet[2689]: I0707 06:06:31.997975 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhfcl\" (UniqueName: \"kubernetes.io/projected/8963eb62-315f-4387-9a7a-417932634459-kube-api-access-fhfcl\") pod \"tigera-operator-747864d56d-mvzq6\" (UID: \"8963eb62-315f-4387-9a7a-417932634459\") " pod="tigera-operator/tigera-operator-747864d56d-mvzq6" Jul 7 06:06:32.048717 kubelet[2689]: E0707 06:06:32.048208 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:32.049505 containerd[1528]: time="2025-07-07T06:06:32.049455641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vm8br,Uid:a6f015b7-2e41-4cdb-ae64-8fecb161df2c,Namespace:kube-system,Attempt:0,}" Jul 7 06:06:32.116670 containerd[1528]: time="2025-07-07T06:06:32.116613794Z" level=info msg="connecting to shim a3fd1f5bb4490a9a866e40c24a4510230c9524afcf80a459c55f868f0adb4a6e" 
address="unix:///run/containerd/s/4f801d82b1f2ef36a4bb76485c32a7e6fb7719cd6f527d77d1c51d8011e7df24" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:06:32.155314 systemd[1]: Started cri-containerd-a3fd1f5bb4490a9a866e40c24a4510230c9524afcf80a459c55f868f0adb4a6e.scope - libcontainer container a3fd1f5bb4490a9a866e40c24a4510230c9524afcf80a459c55f868f0adb4a6e. Jul 7 06:06:32.171379 containerd[1528]: time="2025-07-07T06:06:32.171322278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-mvzq6,Uid:8963eb62-315f-4387-9a7a-417932634459,Namespace:tigera-operator,Attempt:0,}" Jul 7 06:06:32.209898 containerd[1528]: time="2025-07-07T06:06:32.209854997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vm8br,Uid:a6f015b7-2e41-4cdb-ae64-8fecb161df2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3fd1f5bb4490a9a866e40c24a4510230c9524afcf80a459c55f868f0adb4a6e\"" Jul 7 06:06:32.211470 kubelet[2689]: E0707 06:06:32.211429 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:32.219279 containerd[1528]: time="2025-07-07T06:06:32.219222662Z" level=info msg="connecting to shim acce2b308e468502ca88642fbab7df961a181178332e6d391b54632db1b30989" address="unix:///run/containerd/s/b666513617efad4d32697b300196d6a3934b45a54c1f8afae5de01f27f73c38e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:06:32.219778 containerd[1528]: time="2025-07-07T06:06:32.219716610Z" level=info msg="CreateContainer within sandbox \"a3fd1f5bb4490a9a866e40c24a4510230c9524afcf80a459c55f868f0adb4a6e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:06:32.253242 systemd-resolved[1400]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jul 7 06:06:32.255255 systemd[1]: Started cri-containerd-acce2b308e468502ca88642fbab7df961a181178332e6d391b54632db1b30989.scope - libcontainer container acce2b308e468502ca88642fbab7df961a181178332e6d391b54632db1b30989. Jul 7 06:06:32.262056 containerd[1528]: time="2025-07-07T06:06:32.262006319Z" level=info msg="Container 979ddefb4ffdf749abc64cefa63245b9da69cf69c73e5c3dd0c891649cfce584: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:06:32.269523 containerd[1528]: time="2025-07-07T06:06:32.269465125Z" level=info msg="CreateContainer within sandbox \"a3fd1f5bb4490a9a866e40c24a4510230c9524afcf80a459c55f868f0adb4a6e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"979ddefb4ffdf749abc64cefa63245b9da69cf69c73e5c3dd0c891649cfce584\"" Jul 7 06:06:32.270446 containerd[1528]: time="2025-07-07T06:06:32.270417478Z" level=info msg="StartContainer for \"979ddefb4ffdf749abc64cefa63245b9da69cf69c73e5c3dd0c891649cfce584\"" Jul 7 06:06:32.272312 containerd[1528]: time="2025-07-07T06:06:32.272274346Z" level=info msg="connecting to shim 979ddefb4ffdf749abc64cefa63245b9da69cf69c73e5c3dd0c891649cfce584" address="unix:///run/containerd/s/4f801d82b1f2ef36a4bb76485c32a7e6fb7719cd6f527d77d1c51d8011e7df24" protocol=ttrpc version=3 Jul 7 06:06:33.361848 systemd-resolved[1400]: Clock change detected. Flushing caches. Jul 7 06:06:33.362117 systemd-timesyncd[1415]: Contacted time server 15.204.87.223:123 (2.flatcar.pool.ntp.org). Jul 7 06:06:33.362192 systemd-timesyncd[1415]: Initial clock synchronization to Mon 2025-07-07 06:06:33.361501 UTC. Jul 7 06:06:33.399066 systemd[1]: Started cri-containerd-979ddefb4ffdf749abc64cefa63245b9da69cf69c73e5c3dd0c891649cfce584.scope - libcontainer container 979ddefb4ffdf749abc64cefa63245b9da69cf69c73e5c3dd0c891649cfce584. 
Jul 7 06:06:33.447686 containerd[1528]: time="2025-07-07T06:06:33.447632267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-mvzq6,Uid:8963eb62-315f-4387-9a7a-417932634459,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"acce2b308e468502ca88642fbab7df961a181178332e6d391b54632db1b30989\"" Jul 7 06:06:33.452899 containerd[1528]: time="2025-07-07T06:06:33.452760714Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 06:06:33.492845 containerd[1528]: time="2025-07-07T06:06:33.492796173Z" level=info msg="StartContainer for \"979ddefb4ffdf749abc64cefa63245b9da69cf69c73e5c3dd0c891649cfce584\" returns successfully" Jul 7 06:06:33.675293 kubelet[2689]: E0707 06:06:33.675177 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:33.675879 kubelet[2689]: E0707 06:06:33.675845 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:33.703345 kubelet[2689]: I0707 06:06:33.703267 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vm8br" podStartSLOduration=2.703249639 podStartE2EDuration="2.703249639s" podCreationTimestamp="2025-07-07 06:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:06:33.703042755 +0000 UTC m=+7.226124515" watchObservedRunningTime="2025-07-07 06:06:33.703249639 +0000 UTC m=+7.226331396" Jul 7 06:06:35.452382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount421269226.mount: Deactivated successfully. 
Jul 7 06:06:36.455907 kubelet[2689]: E0707 06:06:36.455770 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:36.682397 kubelet[2689]: E0707 06:06:36.681863 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:37.525427 containerd[1528]: time="2025-07-07T06:06:37.525357295Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:37.526248 containerd[1528]: time="2025-07-07T06:06:37.526198002Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 06:06:37.526836 containerd[1528]: time="2025-07-07T06:06:37.526598653Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:37.529187 containerd[1528]: time="2025-07-07T06:06:37.528555753Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:37.529608 containerd[1528]: time="2025-07-07T06:06:37.529158352Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 4.076293298s" Jul 7 06:06:37.529913 containerd[1528]: time="2025-07-07T06:06:37.529687089Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" 
returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 06:06:37.534236 containerd[1528]: time="2025-07-07T06:06:37.534198980Z" level=info msg="CreateContainer within sandbox \"acce2b308e468502ca88642fbab7df961a181178332e6d391b54632db1b30989\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 06:06:37.543475 containerd[1528]: time="2025-07-07T06:06:37.541891963Z" level=info msg="Container 153ad283d2b39665750427c48b6495046ba78dd489aad37bb017355b1a4b2315: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:06:37.552954 containerd[1528]: time="2025-07-07T06:06:37.552880453Z" level=info msg="CreateContainer within sandbox \"acce2b308e468502ca88642fbab7df961a181178332e6d391b54632db1b30989\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"153ad283d2b39665750427c48b6495046ba78dd489aad37bb017355b1a4b2315\"" Jul 7 06:06:37.555612 containerd[1528]: time="2025-07-07T06:06:37.555561163Z" level=info msg="StartContainer for \"153ad283d2b39665750427c48b6495046ba78dd489aad37bb017355b1a4b2315\"" Jul 7 06:06:37.557052 containerd[1528]: time="2025-07-07T06:06:37.557011202Z" level=info msg="connecting to shim 153ad283d2b39665750427c48b6495046ba78dd489aad37bb017355b1a4b2315" address="unix:///run/containerd/s/b666513617efad4d32697b300196d6a3934b45a54c1f8afae5de01f27f73c38e" protocol=ttrpc version=3 Jul 7 06:06:37.589054 systemd[1]: Started cri-containerd-153ad283d2b39665750427c48b6495046ba78dd489aad37bb017355b1a4b2315.scope - libcontainer container 153ad283d2b39665750427c48b6495046ba78dd489aad37bb017355b1a4b2315. 
Jul 7 06:06:37.626308 containerd[1528]: time="2025-07-07T06:06:37.626230202Z" level=info msg="StartContainer for \"153ad283d2b39665750427c48b6495046ba78dd489aad37bb017355b1a4b2315\" returns successfully" Jul 7 06:06:39.985315 kubelet[2689]: E0707 06:06:39.985253 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:40.008938 kubelet[2689]: I0707 06:06:40.008715 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-mvzq6" podStartSLOduration=4.929075772 podStartE2EDuration="9.008693566s" podCreationTimestamp="2025-07-07 06:06:31 +0000 UTC" firstStartedPulling="2025-07-07 06:06:33.45106593 +0000 UTC m=+6.974147680" lastFinishedPulling="2025-07-07 06:06:37.530683725 +0000 UTC m=+11.053765474" observedRunningTime="2025-07-07 06:06:37.699694491 +0000 UTC m=+11.222776248" watchObservedRunningTime="2025-07-07 06:06:40.008693566 +0000 UTC m=+13.531775324" Jul 7 06:06:42.361024 update_engine[1511]: I20250707 06:06:42.360939 1511 update_attempter.cc:509] Updating boot flags... Jul 7 06:06:43.883581 sudo[1770]: pam_unix(sudo:session): session closed for user root Jul 7 06:06:43.888528 sshd[1769]: Connection closed by 139.178.68.195 port 51058 Jul 7 06:06:43.891246 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:43.897189 systemd[1]: sshd@6-24.199.107.192:22-139.178.68.195:51058.service: Deactivated successfully. Jul 7 06:06:43.903264 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:06:43.904065 systemd[1]: session-7.scope: Consumed 5.790s CPU time, 160.4M memory peak. Jul 7 06:06:43.907938 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:06:43.911654 systemd-logind[1509]: Removed session 7. 
Jul 7 06:06:48.427192 systemd[1]: Created slice kubepods-besteffort-pod1d81591c_7188_4416_9a7a_71ab9f9540e5.slice - libcontainer container kubepods-besteffort-pod1d81591c_7188_4416_9a7a_71ab9f9540e5.slice. Jul 7 06:06:48.493286 kubelet[2689]: I0707 06:06:48.493094 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89p7c\" (UniqueName: \"kubernetes.io/projected/1d81591c-7188-4416-9a7a-71ab9f9540e5-kube-api-access-89p7c\") pod \"calico-typha-5774bbd444-xsgkr\" (UID: \"1d81591c-7188-4416-9a7a-71ab9f9540e5\") " pod="calico-system/calico-typha-5774bbd444-xsgkr" Jul 7 06:06:48.494107 kubelet[2689]: I0707 06:06:48.493415 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d81591c-7188-4416-9a7a-71ab9f9540e5-tigera-ca-bundle\") pod \"calico-typha-5774bbd444-xsgkr\" (UID: \"1d81591c-7188-4416-9a7a-71ab9f9540e5\") " pod="calico-system/calico-typha-5774bbd444-xsgkr" Jul 7 06:06:48.494107 kubelet[2689]: I0707 06:06:48.493934 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1d81591c-7188-4416-9a7a-71ab9f9540e5-typha-certs\") pod \"calico-typha-5774bbd444-xsgkr\" (UID: \"1d81591c-7188-4416-9a7a-71ab9f9540e5\") " pod="calico-system/calico-typha-5774bbd444-xsgkr" Jul 7 06:06:48.734494 kubelet[2689]: E0707 06:06:48.734341 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:48.736603 containerd[1528]: time="2025-07-07T06:06:48.736007258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5774bbd444-xsgkr,Uid:1d81591c-7188-4416-9a7a-71ab9f9540e5,Namespace:calico-system,Attempt:0,}" Jul 7 06:06:48.800312 containerd[1528]: 
time="2025-07-07T06:06:48.800251791Z" level=info msg="connecting to shim 213947a2d4fa7ab0cbd0cccf1206aead1560b6f2e8a46350e910d9e84533fd4e" address="unix:///run/containerd/s/3ec4b551270dc7b7eea9a87f9d53276d5fc7a2357215ca212c4cfa32a14ca61b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:06:48.851512 systemd[1]: Created slice kubepods-besteffort-pod66dacd5b_fb89_419e_b1ae_48f7df2238f2.slice - libcontainer container kubepods-besteffort-pod66dacd5b_fb89_419e_b1ae_48f7df2238f2.slice. Jul 7 06:06:48.875749 systemd[1]: Started cri-containerd-213947a2d4fa7ab0cbd0cccf1206aead1560b6f2e8a46350e910d9e84533fd4e.scope - libcontainer container 213947a2d4fa7ab0cbd0cccf1206aead1560b6f2e8a46350e910d9e84533fd4e. Jul 7 06:06:48.896496 kubelet[2689]: I0707 06:06:48.896439 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66dacd5b-fb89-419e-b1ae-48f7df2238f2-xtables-lock\") pod \"calico-node-r978z\" (UID: \"66dacd5b-fb89-419e-b1ae-48f7df2238f2\") " pod="calico-system/calico-node-r978z" Jul 7 06:06:48.896496 kubelet[2689]: I0707 06:06:48.896499 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/66dacd5b-fb89-419e-b1ae-48f7df2238f2-cni-bin-dir\") pod \"calico-node-r978z\" (UID: \"66dacd5b-fb89-419e-b1ae-48f7df2238f2\") " pod="calico-system/calico-node-r978z" Jul 7 06:06:48.897373 kubelet[2689]: I0707 06:06:48.896515 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/66dacd5b-fb89-419e-b1ae-48f7df2238f2-policysync\") pod \"calico-node-r978z\" (UID: \"66dacd5b-fb89-419e-b1ae-48f7df2238f2\") " pod="calico-system/calico-node-r978z" Jul 7 06:06:48.897373 kubelet[2689]: I0707 06:06:48.896538 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66dacd5b-fb89-419e-b1ae-48f7df2238f2-lib-modules\") pod \"calico-node-r978z\" (UID: \"66dacd5b-fb89-419e-b1ae-48f7df2238f2\") " pod="calico-system/calico-node-r978z" Jul 7 06:06:48.897373 kubelet[2689]: I0707 06:06:48.896563 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/66dacd5b-fb89-419e-b1ae-48f7df2238f2-var-run-calico\") pod \"calico-node-r978z\" (UID: \"66dacd5b-fb89-419e-b1ae-48f7df2238f2\") " pod="calico-system/calico-node-r978z" Jul 7 06:06:48.897373 kubelet[2689]: I0707 06:06:48.896581 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9kps\" (UniqueName: \"kubernetes.io/projected/66dacd5b-fb89-419e-b1ae-48f7df2238f2-kube-api-access-q9kps\") pod \"calico-node-r978z\" (UID: \"66dacd5b-fb89-419e-b1ae-48f7df2238f2\") " pod="calico-system/calico-node-r978z" Jul 7 06:06:48.897373 kubelet[2689]: I0707 06:06:48.896604 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/66dacd5b-fb89-419e-b1ae-48f7df2238f2-cni-log-dir\") pod \"calico-node-r978z\" (UID: \"66dacd5b-fb89-419e-b1ae-48f7df2238f2\") " pod="calico-system/calico-node-r978z" Jul 7 06:06:48.897618 kubelet[2689]: I0707 06:06:48.896620 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/66dacd5b-fb89-419e-b1ae-48f7df2238f2-flexvol-driver-host\") pod \"calico-node-r978z\" (UID: \"66dacd5b-fb89-419e-b1ae-48f7df2238f2\") " pod="calico-system/calico-node-r978z" Jul 7 06:06:48.897618 kubelet[2689]: I0707 06:06:48.896639 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/66dacd5b-fb89-419e-b1ae-48f7df2238f2-node-certs\") pod \"calico-node-r978z\" (UID: \"66dacd5b-fb89-419e-b1ae-48f7df2238f2\") " pod="calico-system/calico-node-r978z" Jul 7 06:06:48.897618 kubelet[2689]: I0707 06:06:48.896657 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/66dacd5b-fb89-419e-b1ae-48f7df2238f2-cni-net-dir\") pod \"calico-node-r978z\" (UID: \"66dacd5b-fb89-419e-b1ae-48f7df2238f2\") " pod="calico-system/calico-node-r978z" Jul 7 06:06:48.897618 kubelet[2689]: I0707 06:06:48.896674 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66dacd5b-fb89-419e-b1ae-48f7df2238f2-tigera-ca-bundle\") pod \"calico-node-r978z\" (UID: \"66dacd5b-fb89-419e-b1ae-48f7df2238f2\") " pod="calico-system/calico-node-r978z" Jul 7 06:06:48.897618 kubelet[2689]: I0707 06:06:48.896698 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/66dacd5b-fb89-419e-b1ae-48f7df2238f2-var-lib-calico\") pod \"calico-node-r978z\" (UID: \"66dacd5b-fb89-419e-b1ae-48f7df2238f2\") " pod="calico-system/calico-node-r978z" Jul 7 06:06:48.989946 containerd[1528]: time="2025-07-07T06:06:48.989305645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5774bbd444-xsgkr,Uid:1d81591c-7188-4416-9a7a-71ab9f9540e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"213947a2d4fa7ab0cbd0cccf1206aead1560b6f2e8a46350e910d9e84533fd4e\"" Jul 7 06:06:48.991574 kubelet[2689]: E0707 06:06:48.991360 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:48.992622 containerd[1528]: 
time="2025-07-07T06:06:48.992533693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 06:06:49.012036 kubelet[2689]: E0707 06:06:49.012004 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.012036 kubelet[2689]: W0707 06:06:49.012028 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.012229 kubelet[2689]: E0707 06:06:49.012056 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.014143 kubelet[2689]: E0707 06:06:49.014100 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.014143 kubelet[2689]: W0707 06:06:49.014122 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.014143 kubelet[2689]: E0707 06:06:49.014147 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.033950 kubelet[2689]: E0707 06:06:49.033912 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.033950 kubelet[2689]: W0707 06:06:49.033944 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.034124 kubelet[2689]: E0707 06:06:49.033973 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.158820 containerd[1528]: time="2025-07-07T06:06:49.158740428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r978z,Uid:66dacd5b-fb89-419e-b1ae-48f7df2238f2,Namespace:calico-system,Attempt:0,}" Jul 7 06:06:49.184917 kubelet[2689]: E0707 06:06:49.184654 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7p5z" podUID="b87fa73e-26f3-47b2-81b4-1f7282c22904" Jul 7 06:06:49.189636 containerd[1528]: time="2025-07-07T06:06:49.189447765Z" level=info msg="connecting to shim 3a71ed0ece3a493ec00a9064170c6e2bf37ed1fe0b382644a9409913e29d0153" address="unix:///run/containerd/s/d41663683ed8b3486088f7039bcb6917c1314e311e0a5642f5f636275778be7a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:06:49.247090 systemd[1]: Started cri-containerd-3a71ed0ece3a493ec00a9064170c6e2bf37ed1fe0b382644a9409913e29d0153.scope - libcontainer container 3a71ed0ece3a493ec00a9064170c6e2bf37ed1fe0b382644a9409913e29d0153. 
Jul 7 06:06:49.250114 kubelet[2689]: E0707 06:06:49.249956 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.250114 kubelet[2689]: W0707 06:06:49.249983 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.250114 kubelet[2689]: E0707 06:06:49.250007 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.250709 kubelet[2689]: E0707 06:06:49.250508 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.250709 kubelet[2689]: W0707 06:06:49.250525 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.250709 kubelet[2689]: E0707 06:06:49.250545 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.251534 kubelet[2689]: E0707 06:06:49.251434 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.251534 kubelet[2689]: W0707 06:06:49.251465 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.251534 kubelet[2689]: E0707 06:06:49.251480 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.276158 kubelet[2689]: E0707 06:06:49.275868 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.276158 kubelet[2689]: W0707 06:06:49.275897 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.276158 kubelet[2689]: E0707 06:06:49.275925 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.276672 kubelet[2689]: E0707 06:06:49.276641 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.276915 kubelet[2689]: W0707 06:06:49.276899 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.277104 kubelet[2689]: E0707 06:06:49.276971 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.277329 kubelet[2689]: E0707 06:06:49.277318 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.277382 kubelet[2689]: W0707 06:06:49.277373 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.277426 kubelet[2689]: E0707 06:06:49.277418 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.277643 kubelet[2689]: E0707 06:06:49.277633 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.277823 kubelet[2689]: W0707 06:06:49.277706 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.277823 kubelet[2689]: E0707 06:06:49.277721 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.277983 kubelet[2689]: E0707 06:06:49.277969 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.278155 kubelet[2689]: W0707 06:06:49.278032 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.278155 kubelet[2689]: E0707 06:06:49.278047 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.278279 kubelet[2689]: E0707 06:06:49.278270 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.278333 kubelet[2689]: W0707 06:06:49.278325 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.278452 kubelet[2689]: E0707 06:06:49.278369 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.278597 kubelet[2689]: E0707 06:06:49.278585 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.278657 kubelet[2689]: W0707 06:06:49.278648 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.278723 kubelet[2689]: E0707 06:06:49.278709 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.279071 kubelet[2689]: E0707 06:06:49.278970 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.279071 kubelet[2689]: W0707 06:06:49.278985 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.279071 kubelet[2689]: E0707 06:06:49.278995 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.279307 kubelet[2689]: E0707 06:06:49.279296 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.279363 kubelet[2689]: W0707 06:06:49.279354 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.279410 kubelet[2689]: E0707 06:06:49.279402 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.279728 kubelet[2689]: E0707 06:06:49.279636 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.279728 kubelet[2689]: W0707 06:06:49.279647 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.279728 kubelet[2689]: E0707 06:06:49.279659 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.279903 kubelet[2689]: E0707 06:06:49.279894 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.279950 kubelet[2689]: W0707 06:06:49.279942 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.280006 kubelet[2689]: E0707 06:06:49.279995 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.280382 kubelet[2689]: E0707 06:06:49.280284 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.280382 kubelet[2689]: W0707 06:06:49.280296 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.280382 kubelet[2689]: E0707 06:06:49.280306 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.280648 kubelet[2689]: E0707 06:06:49.280540 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.280648 kubelet[2689]: W0707 06:06:49.280550 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.280648 kubelet[2689]: E0707 06:06:49.280560 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.280792 kubelet[2689]: E0707 06:06:49.280775 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.280864 kubelet[2689]: W0707 06:06:49.280854 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.280978 kubelet[2689]: E0707 06:06:49.280900 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.281184 kubelet[2689]: E0707 06:06:49.281174 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.281238 kubelet[2689]: W0707 06:06:49.281230 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.281283 kubelet[2689]: E0707 06:06:49.281275 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.281561 kubelet[2689]: E0707 06:06:49.281476 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.281561 kubelet[2689]: W0707 06:06:49.281487 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.281561 kubelet[2689]: E0707 06:06:49.281496 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.281750 kubelet[2689]: E0707 06:06:49.281739 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.281865 kubelet[2689]: W0707 06:06:49.281842 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.281927 kubelet[2689]: E0707 06:06:49.281917 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.301988 kubelet[2689]: E0707 06:06:49.301919 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.301988 kubelet[2689]: W0707 06:06:49.301952 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.301988 kubelet[2689]: E0707 06:06:49.301985 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.305052 kubelet[2689]: I0707 06:06:49.302048 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b87fa73e-26f3-47b2-81b4-1f7282c22904-registration-dir\") pod \"csi-node-driver-r7p5z\" (UID: \"b87fa73e-26f3-47b2-81b4-1f7282c22904\") " pod="calico-system/csi-node-driver-r7p5z" Jul 7 06:06:49.305052 kubelet[2689]: E0707 06:06:49.302383 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.305052 kubelet[2689]: W0707 06:06:49.302399 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.305052 kubelet[2689]: E0707 06:06:49.302414 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.305052 kubelet[2689]: I0707 06:06:49.302447 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b87fa73e-26f3-47b2-81b4-1f7282c22904-varrun\") pod \"csi-node-driver-r7p5z\" (UID: \"b87fa73e-26f3-47b2-81b4-1f7282c22904\") " pod="calico-system/csi-node-driver-r7p5z" Jul 7 06:06:49.305052 kubelet[2689]: E0707 06:06:49.304377 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.305052 kubelet[2689]: W0707 06:06:49.304394 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.305052 kubelet[2689]: E0707 06:06:49.304413 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.305052 kubelet[2689]: E0707 06:06:49.304640 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.305278 kubelet[2689]: W0707 06:06:49.304650 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.305278 kubelet[2689]: E0707 06:06:49.304664 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.305607 kubelet[2689]: E0707 06:06:49.305477 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.305607 kubelet[2689]: W0707 06:06:49.305491 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.305607 kubelet[2689]: E0707 06:06:49.305507 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.305607 kubelet[2689]: I0707 06:06:49.305570 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b87fa73e-26f3-47b2-81b4-1f7282c22904-kubelet-dir\") pod \"csi-node-driver-r7p5z\" (UID: \"b87fa73e-26f3-47b2-81b4-1f7282c22904\") " pod="calico-system/csi-node-driver-r7p5z" Jul 7 06:06:49.305936 kubelet[2689]: E0707 06:06:49.305828 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.305936 kubelet[2689]: W0707 06:06:49.305839 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.305936 kubelet[2689]: E0707 06:06:49.305850 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.306598 kubelet[2689]: E0707 06:06:49.306086 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.306598 kubelet[2689]: W0707 06:06:49.306099 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.306598 kubelet[2689]: E0707 06:06:49.306108 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.306598 kubelet[2689]: E0707 06:06:49.306559 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.306598 kubelet[2689]: W0707 06:06:49.306580 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.306598 kubelet[2689]: E0707 06:06:49.306598 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.306815 kubelet[2689]: I0707 06:06:49.306631 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b87fa73e-26f3-47b2-81b4-1f7282c22904-socket-dir\") pod \"csi-node-driver-r7p5z\" (UID: \"b87fa73e-26f3-47b2-81b4-1f7282c22904\") " pod="calico-system/csi-node-driver-r7p5z" Jul 7 06:06:49.308559 kubelet[2689]: E0707 06:06:49.307503 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.308559 kubelet[2689]: W0707 06:06:49.307522 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.308559 kubelet[2689]: E0707 06:06:49.307536 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.308559 kubelet[2689]: E0707 06:06:49.307703 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.308559 kubelet[2689]: W0707 06:06:49.307710 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.308559 kubelet[2689]: E0707 06:06:49.307719 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.308559 kubelet[2689]: E0707 06:06:49.307872 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.308559 kubelet[2689]: W0707 06:06:49.307879 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.308559 kubelet[2689]: E0707 06:06:49.307888 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.308559 kubelet[2689]: E0707 06:06:49.308144 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.308938 kubelet[2689]: W0707 06:06:49.308158 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.308938 kubelet[2689]: E0707 06:06:49.308172 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.308938 kubelet[2689]: I0707 06:06:49.308228 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6qck\" (UniqueName: \"kubernetes.io/projected/b87fa73e-26f3-47b2-81b4-1f7282c22904-kube-api-access-n6qck\") pod \"csi-node-driver-r7p5z\" (UID: \"b87fa73e-26f3-47b2-81b4-1f7282c22904\") " pod="calico-system/csi-node-driver-r7p5z" Jul 7 06:06:49.308938 kubelet[2689]: E0707 06:06:49.308832 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.308938 kubelet[2689]: W0707 06:06:49.308846 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.308938 kubelet[2689]: E0707 06:06:49.308859 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.310594 kubelet[2689]: E0707 06:06:49.309547 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.310594 kubelet[2689]: W0707 06:06:49.309571 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.310594 kubelet[2689]: E0707 06:06:49.309591 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.310594 kubelet[2689]: E0707 06:06:49.310070 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.310594 kubelet[2689]: W0707 06:06:49.310082 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.310594 kubelet[2689]: E0707 06:06:49.310100 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.392381 containerd[1528]: time="2025-07-07T06:06:49.392328241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r978z,Uid:66dacd5b-fb89-419e-b1ae-48f7df2238f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a71ed0ece3a493ec00a9064170c6e2bf37ed1fe0b382644a9409913e29d0153\"" Jul 7 06:06:49.409662 kubelet[2689]: E0707 06:06:49.409615 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.410149 kubelet[2689]: W0707 06:06:49.409641 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.410149 kubelet[2689]: E0707 06:06:49.409758 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.410399 kubelet[2689]: E0707 06:06:49.410378 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.410489 kubelet[2689]: W0707 06:06:49.410472 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.411440 kubelet[2689]: E0707 06:06:49.411267 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.411643 kubelet[2689]: E0707 06:06:49.411606 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.411767 kubelet[2689]: W0707 06:06:49.411747 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.411884 kubelet[2689]: E0707 06:06:49.411867 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.412594 kubelet[2689]: E0707 06:06:49.412572 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.412860 kubelet[2689]: W0707 06:06:49.412839 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.413017 kubelet[2689]: E0707 06:06:49.412957 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.413296 kubelet[2689]: E0707 06:06:49.413275 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.413391 kubelet[2689]: W0707 06:06:49.413305 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.413391 kubelet[2689]: E0707 06:06:49.413322 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.413587 kubelet[2689]: E0707 06:06:49.413574 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.413587 kubelet[2689]: W0707 06:06:49.413585 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.413708 kubelet[2689]: E0707 06:06:49.413595 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.414081 kubelet[2689]: E0707 06:06:49.414051 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.414200 kubelet[2689]: W0707 06:06:49.414067 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.414298 kubelet[2689]: E0707 06:06:49.414210 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.415179 kubelet[2689]: E0707 06:06:49.415051 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.415179 kubelet[2689]: W0707 06:06:49.415069 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.415179 kubelet[2689]: E0707 06:06:49.415082 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.415692 kubelet[2689]: E0707 06:06:49.415509 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.415692 kubelet[2689]: W0707 06:06:49.415629 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.415692 kubelet[2689]: E0707 06:06:49.415645 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.416130 kubelet[2689]: E0707 06:06:49.416110 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.416130 kubelet[2689]: W0707 06:06:49.416126 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.416276 kubelet[2689]: E0707 06:06:49.416255 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.416604 kubelet[2689]: E0707 06:06:49.416587 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.416710 kubelet[2689]: W0707 06:06:49.416601 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.416764 kubelet[2689]: E0707 06:06:49.416713 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.417330 kubelet[2689]: E0707 06:06:49.417221 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.417330 kubelet[2689]: W0707 06:06:49.417246 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.417330 kubelet[2689]: E0707 06:06:49.417263 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:06:49.417836 kubelet[2689]: E0707 06:06:49.417726 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.417836 kubelet[2689]: W0707 06:06:49.417745 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.417836 kubelet[2689]: E0707 06:06:49.417760 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:06:49.418552 kubelet[2689]: E0707 06:06:49.418532 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:49.418726 kubelet[2689]: W0707 06:06:49.418639 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:49.418726 kubelet[2689]: E0707 06:06:49.418660 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[identical FlexVolume driver-call/plugin-probe error triplet repeated 12 more times between 06:06:49.419115 and 06:06:49.438114]
Jul 7 06:06:50.384913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount287835293.mount: Deactivated successfully. Jul 7 06:06:50.636279 kubelet[2689]: E0707 06:06:50.636116 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7p5z" podUID="b87fa73e-26f3-47b2-81b4-1f7282c22904" Jul 7 06:06:51.899710 containerd[1528]: time="2025-07-07T06:06:51.899645306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:51.901817 containerd[1528]: time="2025-07-07T06:06:51.901374448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 06:06:51.904474 containerd[1528]: time="2025-07-07T06:06:51.904421706Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:51.907321 containerd[1528]: time="2025-07-07T06:06:51.907148085Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:51.907987 containerd[1528]: time="2025-07-07T06:06:51.907753515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.915122128s" Jul 7 06:06:51.907987 containerd[1528]: time="2025-07-07T06:06:51.907817499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 06:06:51.910889 containerd[1528]: time="2025-07-07T06:06:51.910812728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:06:51.941824 containerd[1528]: time="2025-07-07T06:06:51.941667120Z" level=info msg="CreateContainer within sandbox \"213947a2d4fa7ab0cbd0cccf1206aead1560b6f2e8a46350e910d9e84533fd4e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:06:51.952505 containerd[1528]: time="2025-07-07T06:06:51.952450301Z" level=info msg="Container d6af51c7d292c06d63bc2138f05a4cae3f56feee5b1eb79b906421158e3d5139: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:06:51.970739 containerd[1528]: time="2025-07-07T06:06:51.970412021Z" level=info msg="CreateContainer within sandbox \"213947a2d4fa7ab0cbd0cccf1206aead1560b6f2e8a46350e910d9e84533fd4e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d6af51c7d292c06d63bc2138f05a4cae3f56feee5b1eb79b906421158e3d5139\"" Jul 7 06:06:51.972045 containerd[1528]: time="2025-07-07T06:06:51.971988909Z" level=info msg="StartContainer for 
\"d6af51c7d292c06d63bc2138f05a4cae3f56feee5b1eb79b906421158e3d5139\"" Jul 7 06:06:51.975267 containerd[1528]: time="2025-07-07T06:06:51.975170429Z" level=info msg="connecting to shim d6af51c7d292c06d63bc2138f05a4cae3f56feee5b1eb79b906421158e3d5139" address="unix:///run/containerd/s/3ec4b551270dc7b7eea9a87f9d53276d5fc7a2357215ca212c4cfa32a14ca61b" protocol=ttrpc version=3 Jul 7 06:06:52.025399 systemd[1]: Started cri-containerd-d6af51c7d292c06d63bc2138f05a4cae3f56feee5b1eb79b906421158e3d5139.scope - libcontainer container d6af51c7d292c06d63bc2138f05a4cae3f56feee5b1eb79b906421158e3d5139. Jul 7 06:06:52.180157 containerd[1528]: time="2025-07-07T06:06:52.179948659Z" level=info msg="StartContainer for \"d6af51c7d292c06d63bc2138f05a4cae3f56feee5b1eb79b906421158e3d5139\" returns successfully" Jul 7 06:06:52.636962 kubelet[2689]: E0707 06:06:52.636613 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7p5z" podUID="b87fa73e-26f3-47b2-81b4-1f7282c22904" Jul 7 06:06:52.739187 kubelet[2689]: E0707 06:06:52.739126 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:06:52.781159 kubelet[2689]: I0707 06:06:52.780956 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5774bbd444-xsgkr" podStartSLOduration=1.862416887 podStartE2EDuration="4.780930883s" podCreationTimestamp="2025-07-07 06:06:48 +0000 UTC" firstStartedPulling="2025-07-07 06:06:48.992095949 +0000 UTC m=+22.515177698" lastFinishedPulling="2025-07-07 06:06:51.910609943 +0000 UTC m=+25.433691694" observedRunningTime="2025-07-07 06:06:52.763274293 +0000 UTC m=+26.286356050" watchObservedRunningTime="2025-07-07 
06:06:52.780930883 +0000 UTC m=+26.304012624" Jul 7 06:06:52.809040 kubelet[2689]: E0707 06:06:52.808973 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:06:52.809455 kubelet[2689]: W0707 06:06:52.809264 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:06:52.809455 kubelet[2689]: E0707 06:06:52.809302 2689 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[identical FlexVolume driver-call/plugin-probe error triplet repeated 32 more times between 06:06:52.811206 and 06:06:52.856874]
Jul 7 06:06:53.395347 containerd[1528]: time="2025-07-07T06:06:53.395264816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:53.396825 containerd[1528]: time="2025-07-07T06:06:53.396676305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 06:06:53.397572 containerd[1528]: time="2025-07-07T06:06:53.397500241Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:53.399808 containerd[1528]: time="2025-07-07T06:06:53.399716557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:53.400828 containerd[1528]: time="2025-07-07T06:06:53.400769254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.489918433s" Jul 7 06:06:53.401561 containerd[1528]: time="2025-07-07T06:06:53.400833414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 06:06:53.411079 containerd[1528]: time="2025-07-07T06:06:53.411031514Z" level=info msg="CreateContainer within sandbox \"3a71ed0ece3a493ec00a9064170c6e2bf37ed1fe0b382644a9409913e29d0153\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:06:53.434867 containerd[1528]: time="2025-07-07T06:06:53.433446406Z" level=info msg="Container cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:06:53.453197 containerd[1528]: time="2025-07-07T06:06:53.453126265Z" level=info msg="CreateContainer within sandbox \"3a71ed0ece3a493ec00a9064170c6e2bf37ed1fe0b382644a9409913e29d0153\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777\"" Jul 7 06:06:53.455193 containerd[1528]: time="2025-07-07T06:06:53.455135824Z" level=info msg="StartContainer for \"cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777\"" Jul 7 06:06:53.461895 containerd[1528]: time="2025-07-07T06:06:53.461770509Z" level=info msg="connecting to shim cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777" address="unix:///run/containerd/s/d41663683ed8b3486088f7039bcb6917c1314e311e0a5642f5f636275778be7a" protocol=ttrpc version=3 Jul 7 06:06:53.523144 systemd[1]: Started cri-containerd-cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777.scope - libcontainer container cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777. 
Jul 7 06:06:53.619290 containerd[1528]: time="2025-07-07T06:06:53.619181621Z" level=info msg="StartContainer for \"cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777\" returns successfully"
Jul 7 06:06:53.637983 systemd[1]: cri-containerd-cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777.scope: Deactivated successfully.
Jul 7 06:06:53.678079 containerd[1528]: time="2025-07-07T06:06:53.677242533Z" level=info msg="received exit event container_id:\"cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777\" id:\"cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777\" pid:3386 exited_at:{seconds:1751868413 nanos:643609224}"
Jul 7 06:06:53.688280 containerd[1528]: time="2025-07-07T06:06:53.688024740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777\" id:\"cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777\" pid:3386 exited_at:{seconds:1751868413 nanos:643609224}"
Jul 7 06:06:53.720609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb543ada0a841d54c600d75d0cf784a57d5fee449ffa3979ab1aff16b8f91777-rootfs.mount: Deactivated successfully.
Jul 7 06:06:53.750300 kubelet[2689]: E0707 06:06:53.750229 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:54.636792 kubelet[2689]: E0707 06:06:54.635500 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7p5z" podUID="b87fa73e-26f3-47b2-81b4-1f7282c22904"
Jul 7 06:06:54.751763 kubelet[2689]: E0707 06:06:54.751704 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:54.753713 containerd[1528]: time="2025-07-07T06:06:54.753649571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 7 06:06:56.637256 kubelet[2689]: E0707 06:06:56.637125 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7p5z" podUID="b87fa73e-26f3-47b2-81b4-1f7282c22904"
Jul 7 06:06:58.635386 kubelet[2689]: E0707 06:06:58.635339 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r7p5z" podUID="b87fa73e-26f3-47b2-81b4-1f7282c22904"
Jul 7 06:06:58.741903 containerd[1528]: time="2025-07-07T06:06:58.741841302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:58.742819 containerd[1528]: time="2025-07-07T06:06:58.742626945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 7 06:06:58.743375 containerd[1528]: time="2025-07-07T06:06:58.743343209Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:58.745177 containerd[1528]: time="2025-07-07T06:06:58.745149109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:06:58.745810 containerd[1528]: time="2025-07-07T06:06:58.745731915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.99204856s"
Jul 7 06:06:58.745810 containerd[1528]: time="2025-07-07T06:06:58.745763889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 7 06:06:58.751600 containerd[1528]: time="2025-07-07T06:06:58.751185602Z" level=info msg="CreateContainer within sandbox \"3a71ed0ece3a493ec00a9064170c6e2bf37ed1fe0b382644a9409913e29d0153\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 7 06:06:58.780812 containerd[1528]: time="2025-07-07T06:06:58.780044193Z" level=info msg="Container 4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:06:58.791513 containerd[1528]: time="2025-07-07T06:06:58.791391190Z" level=info msg="CreateContainer within sandbox \"3a71ed0ece3a493ec00a9064170c6e2bf37ed1fe0b382644a9409913e29d0153\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253\""
Jul 7 06:06:58.792523 containerd[1528]: time="2025-07-07T06:06:58.792461587Z" level=info msg="StartContainer for \"4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253\""
Jul 7 06:06:58.794770 containerd[1528]: time="2025-07-07T06:06:58.794723591Z" level=info msg="connecting to shim 4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253" address="unix:///run/containerd/s/d41663683ed8b3486088f7039bcb6917c1314e311e0a5642f5f636275778be7a" protocol=ttrpc version=3
Jul 7 06:06:58.835055 systemd[1]: Started cri-containerd-4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253.scope - libcontainer container 4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253.
Jul 7 06:06:58.882061 containerd[1528]: time="2025-07-07T06:06:58.881936262Z" level=info msg="StartContainer for \"4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253\" returns successfully"
Jul 7 06:06:59.474993 systemd[1]: cri-containerd-4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253.scope: Deactivated successfully.
Jul 7 06:06:59.475263 systemd[1]: cri-containerd-4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253.scope: Consumed 567ms CPU time, 170.1M memory peak, 12M read from disk, 171.2M written to disk.
Jul 7 06:06:59.502853 containerd[1528]: time="2025-07-07T06:06:59.502507334Z" level=info msg="received exit event container_id:\"4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253\" id:\"4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253\" pid:3445 exited_at:{seconds:1751868419 nanos:478762248}"
Jul 7 06:06:59.503918 containerd[1528]: time="2025-07-07T06:06:59.503880046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253\" id:\"4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253\" pid:3445 exited_at:{seconds:1751868419 nanos:478762248}"
Jul 7 06:06:59.534059 kubelet[2689]: I0707 06:06:59.533771 2689 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 7 06:06:59.540761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4faedd2f09af4ad16ddaec2fdded43d2d64fe980616bea19282243366bfa5253-rootfs.mount: Deactivated successfully.
Jul 7 06:06:59.589376 systemd[1]: Created slice kubepods-besteffort-pod7780537f_a70e_4007_81ea_55353b907d7d.slice - libcontainer container kubepods-besteffort-pod7780537f_a70e_4007_81ea_55353b907d7d.slice.
Jul 7 06:06:59.605310 systemd[1]: Created slice kubepods-burstable-pod60b977d1_300d_49f2_a296_68ff47a7eef0.slice - libcontainer container kubepods-burstable-pod60b977d1_300d_49f2_a296_68ff47a7eef0.slice.
Jul 7 06:06:59.617170 systemd[1]: Created slice kubepods-burstable-pod7737afa6_67e6_4fbb_94b1_86b8a7e9913f.slice - libcontainer container kubepods-burstable-pod7737afa6_67e6_4fbb_94b1_86b8a7e9913f.slice.
Jul 7 06:06:59.626293 systemd[1]: Created slice kubepods-besteffort-pod60f633c1_26a6_485a_9167_1fd8780fa5a3.slice - libcontainer container kubepods-besteffort-pod60f633c1_26a6_485a_9167_1fd8780fa5a3.slice.
Jul 7 06:06:59.638682 systemd[1]: Created slice kubepods-besteffort-pod8d1a5bc4_00cc_451d_82cf_0d08adaef63c.slice - libcontainer container kubepods-besteffort-pod8d1a5bc4_00cc_451d_82cf_0d08adaef63c.slice.
Jul 7 06:06:59.650654 systemd[1]: Created slice kubepods-besteffort-pod143281af_f2d1_4ffd_9c52_e0695f1bd411.slice - libcontainer container kubepods-besteffort-pod143281af_f2d1_4ffd_9c52_e0695f1bd411.slice.
Jul 7 06:06:59.657696 systemd[1]: Created slice kubepods-besteffort-poda4a67136_fb4a_44b4_a9df_5a7ebb1d0a96.slice - libcontainer container kubepods-besteffort-poda4a67136_fb4a_44b4_a9df_5a7ebb1d0a96.slice.
Jul 7 06:06:59.705026 kubelet[2689]: I0707 06:06:59.704942 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b74jr\" (UniqueName: \"kubernetes.io/projected/7737afa6-67e6-4fbb-94b1-86b8a7e9913f-kube-api-access-b74jr\") pod \"coredns-674b8bbfcf-mb6xr\" (UID: \"7737afa6-67e6-4fbb-94b1-86b8a7e9913f\") " pod="kube-system/coredns-674b8bbfcf-mb6xr"
Jul 7 06:06:59.705026 kubelet[2689]: I0707 06:06:59.705014 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcmc8\" (UniqueName: \"kubernetes.io/projected/7780537f-a70e-4007-81ea-55353b907d7d-kube-api-access-xcmc8\") pod \"calico-kube-controllers-cd4495d89-c7kgs\" (UID: \"7780537f-a70e-4007-81ea-55353b907d7d\") " pod="calico-system/calico-kube-controllers-cd4495d89-c7kgs"
Jul 7 06:06:59.705716 kubelet[2689]: I0707 06:06:59.705062 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60f633c1-26a6-485a-9167-1fd8780fa5a3-whisker-ca-bundle\") pod \"whisker-646d79df8b-ztbd2\" (UID: \"60f633c1-26a6-485a-9167-1fd8780fa5a3\") " pod="calico-system/whisker-646d79df8b-ztbd2"
Jul 7 06:06:59.705716 kubelet[2689]: I0707 06:06:59.705088 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hw99\" (UniqueName: \"kubernetes.io/projected/8d1a5bc4-00cc-451d-82cf-0d08adaef63c-kube-api-access-2hw99\") pod \"calico-apiserver-7cf678cc66-7gbjr\" (UID: \"8d1a5bc4-00cc-451d-82cf-0d08adaef63c\") " pod="calico-apiserver/calico-apiserver-7cf678cc66-7gbjr"
Jul 7 06:06:59.705716 kubelet[2689]: I0707 06:06:59.705130 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585qc\" (UniqueName: \"kubernetes.io/projected/143281af-f2d1-4ffd-9c52-e0695f1bd411-kube-api-access-585qc\") pod \"goldmane-768f4c5c69-fzmgq\" (UID: \"143281af-f2d1-4ffd-9c52-e0695f1bd411\") " pod="calico-system/goldmane-768f4c5c69-fzmgq"
Jul 7 06:06:59.705716 kubelet[2689]: I0707 06:06:59.705160 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60f633c1-26a6-485a-9167-1fd8780fa5a3-whisker-backend-key-pair\") pod \"whisker-646d79df8b-ztbd2\" (UID: \"60f633c1-26a6-485a-9167-1fd8780fa5a3\") " pod="calico-system/whisker-646d79df8b-ztbd2"
Jul 7 06:06:59.705716 kubelet[2689]: I0707 06:06:59.705204 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7780537f-a70e-4007-81ea-55353b907d7d-tigera-ca-bundle\") pod \"calico-kube-controllers-cd4495d89-c7kgs\" (UID: \"7780537f-a70e-4007-81ea-55353b907d7d\") " pod="calico-system/calico-kube-controllers-cd4495d89-c7kgs"
Jul 7 06:06:59.705972 kubelet[2689]: I0707 06:06:59.705230 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d1a5bc4-00cc-451d-82cf-0d08adaef63c-calico-apiserver-certs\") pod \"calico-apiserver-7cf678cc66-7gbjr\" (UID: \"8d1a5bc4-00cc-451d-82cf-0d08adaef63c\") " pod="calico-apiserver/calico-apiserver-7cf678cc66-7gbjr"
Jul 7 06:06:59.706945 kubelet[2689]: I0707 06:06:59.706875 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bg87\" (UniqueName: \"kubernetes.io/projected/60f633c1-26a6-485a-9167-1fd8780fa5a3-kube-api-access-6bg87\") pod \"whisker-646d79df8b-ztbd2\" (UID: \"60f633c1-26a6-485a-9167-1fd8780fa5a3\") " pod="calico-system/whisker-646d79df8b-ztbd2"
Jul 7 06:06:59.707082 kubelet[2689]: I0707 06:06:59.706949 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60b977d1-300d-49f2-a296-68ff47a7eef0-config-volume\") pod \"coredns-674b8bbfcf-dm6qm\" (UID: \"60b977d1-300d-49f2-a296-68ff47a7eef0\") " pod="kube-system/coredns-674b8bbfcf-dm6qm"
Jul 7 06:06:59.707082 kubelet[2689]: I0707 06:06:59.706987 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9bxc\" (UniqueName: \"kubernetes.io/projected/60b977d1-300d-49f2-a296-68ff47a7eef0-kube-api-access-t9bxc\") pod \"coredns-674b8bbfcf-dm6qm\" (UID: \"60b977d1-300d-49f2-a296-68ff47a7eef0\") " pod="kube-system/coredns-674b8bbfcf-dm6qm"
Jul 7 06:06:59.707082 kubelet[2689]: I0707 06:06:59.707012 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fphl\" (UniqueName: \"kubernetes.io/projected/a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96-kube-api-access-2fphl\") pod \"calico-apiserver-7cf678cc66-klsvh\" (UID: \"a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96\") " pod="calico-apiserver/calico-apiserver-7cf678cc66-klsvh"
Jul 7 06:06:59.707082 kubelet[2689]: I0707 06:06:59.707056 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143281af-f2d1-4ffd-9c52-e0695f1bd411-config\") pod \"goldmane-768f4c5c69-fzmgq\" (UID: \"143281af-f2d1-4ffd-9c52-e0695f1bd411\") " pod="calico-system/goldmane-768f4c5c69-fzmgq"
Jul 7 06:06:59.707342 kubelet[2689]: I0707 06:06:59.707085 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7737afa6-67e6-4fbb-94b1-86b8a7e9913f-config-volume\") pod \"coredns-674b8bbfcf-mb6xr\" (UID: \"7737afa6-67e6-4fbb-94b1-86b8a7e9913f\") " pod="kube-system/coredns-674b8bbfcf-mb6xr"
Jul 7 06:06:59.707342 kubelet[2689]: I0707 06:06:59.707110 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/143281af-f2d1-4ffd-9c52-e0695f1bd411-goldmane-key-pair\") pod \"goldmane-768f4c5c69-fzmgq\" (UID: \"143281af-f2d1-4ffd-9c52-e0695f1bd411\") " pod="calico-system/goldmane-768f4c5c69-fzmgq"
Jul 7 06:06:59.707342 kubelet[2689]: I0707 06:06:59.707151 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96-calico-apiserver-certs\") pod \"calico-apiserver-7cf678cc66-klsvh\" (UID: \"a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96\") " pod="calico-apiserver/calico-apiserver-7cf678cc66-klsvh"
Jul 7 06:06:59.707342 kubelet[2689]: I0707 06:06:59.707173 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/143281af-f2d1-4ffd-9c52-e0695f1bd411-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-fzmgq\" (UID: \"143281af-f2d1-4ffd-9c52-e0695f1bd411\") " pod="calico-system/goldmane-768f4c5c69-fzmgq"
Jul 7 06:06:59.786220 containerd[1528]: time="2025-07-07T06:06:59.786097821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 7 06:06:59.902311 containerd[1528]: time="2025-07-07T06:06:59.902235315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd4495d89-c7kgs,Uid:7780537f-a70e-4007-81ea-55353b907d7d,Namespace:calico-system,Attempt:0,}"
Jul 7 06:06:59.915097 kubelet[2689]: E0707 06:06:59.914122 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:59.915516 containerd[1528]: time="2025-07-07T06:06:59.915482544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dm6qm,Uid:60b977d1-300d-49f2-a296-68ff47a7eef0,Namespace:kube-system,Attempt:0,}"
Jul 7 06:06:59.923439 kubelet[2689]: E0707 06:06:59.923402 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:06:59.925659 containerd[1528]: time="2025-07-07T06:06:59.925605311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mb6xr,Uid:7737afa6-67e6-4fbb-94b1-86b8a7e9913f,Namespace:kube-system,Attempt:0,}"
Jul 7 06:06:59.939298 containerd[1528]: time="2025-07-07T06:06:59.939252717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-646d79df8b-ztbd2,Uid:60f633c1-26a6-485a-9167-1fd8780fa5a3,Namespace:calico-system,Attempt:0,}"
Jul 7 06:06:59.946434 containerd[1528]: time="2025-07-07T06:06:59.946069385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf678cc66-7gbjr,Uid:8d1a5bc4-00cc-451d-82cf-0d08adaef63c,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:06:59.961026 containerd[1528]: time="2025-07-07T06:06:59.960977729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fzmgq,Uid:143281af-f2d1-4ffd-9c52-e0695f1bd411,Namespace:calico-system,Attempt:0,}"
Jul 7 06:06:59.965320 containerd[1528]: time="2025-07-07T06:06:59.965265990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf678cc66-klsvh,Uid:a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:07:00.291818 containerd[1528]: time="2025-07-07T06:07:00.291746355Z" level=error msg="Failed to destroy network for sandbox \"3f7fb9309674212d7e5a51337dd6f98e4cf2aabfaae4ec912e5c9b1315d70686\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.308711 containerd[1528]: time="2025-07-07T06:07:00.293019881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf678cc66-7gbjr,Uid:8d1a5bc4-00cc-451d-82cf-0d08adaef63c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f7fb9309674212d7e5a51337dd6f98e4cf2aabfaae4ec912e5c9b1315d70686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.309607 containerd[1528]: time="2025-07-07T06:07:00.295181104Z" level=error msg="Failed to destroy network for sandbox \"976e58bedb5ce0a89ce842640a6ece5e69598f172213d5b128ab2dcad5f2530a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.309607 containerd[1528]: time="2025-07-07T06:07:00.298825139Z" level=error msg="Failed to destroy network for sandbox \"e16a9e3eda72f33045320ea0ee4e201509da75bc786783951b25825025139228\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.309607 containerd[1528]: time="2025-07-07T06:07:00.298912336Z" level=error msg="Failed to destroy network for sandbox \"1aa3eab3d52c03298f59c363c7e4de308288bf6286909beaeb55d518ae84dcba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.309607 containerd[1528]: time="2025-07-07T06:07:00.302541752Z" level=error msg="Failed to destroy network for sandbox \"0a1f71364d26db4060f13b3a9e9086a704b9ca0f3c75ef7087cfc3a6964b8d8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.309816 kubelet[2689]: E0707 06:07:00.309015 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f7fb9309674212d7e5a51337dd6f98e4cf2aabfaae4ec912e5c9b1315d70686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.309816 kubelet[2689]: E0707 06:07:00.309122 2689 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f7fb9309674212d7e5a51337dd6f98e4cf2aabfaae4ec912e5c9b1315d70686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf678cc66-7gbjr"
Jul 7 06:07:00.309816 kubelet[2689]: E0707 06:07:00.309250 2689 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f7fb9309674212d7e5a51337dd6f98e4cf2aabfaae4ec912e5c9b1315d70686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf678cc66-7gbjr"
Jul 7 06:07:00.310348 kubelet[2689]: E0707 06:07:00.309511 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cf678cc66-7gbjr_calico-apiserver(8d1a5bc4-00cc-451d-82cf-0d08adaef63c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cf678cc66-7gbjr_calico-apiserver(8d1a5bc4-00cc-451d-82cf-0d08adaef63c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f7fb9309674212d7e5a51337dd6f98e4cf2aabfaae4ec912e5c9b1315d70686\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf678cc66-7gbjr" podUID="8d1a5bc4-00cc-451d-82cf-0d08adaef63c"
Jul 7 06:07:00.311170 containerd[1528]: time="2025-07-07T06:07:00.306480731Z" level=error msg="Failed to destroy network for sandbox \"e9a9f5f683cffbae7d2ffbf66b9adcf2abd716b42e8475f09c7ce32d0923eddf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.313105 containerd[1528]: time="2025-07-07T06:07:00.312823313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fzmgq,Uid:143281af-f2d1-4ffd-9c52-e0695f1bd411,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1f71364d26db4060f13b3a9e9086a704b9ca0f3c75ef7087cfc3a6964b8d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.313953 kubelet[2689]: E0707 06:07:00.313312 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1f71364d26db4060f13b3a9e9086a704b9ca0f3c75ef7087cfc3a6964b8d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.313953 kubelet[2689]: E0707 06:07:00.313381 2689 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1f71364d26db4060f13b3a9e9086a704b9ca0f3c75ef7087cfc3a6964b8d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-fzmgq"
Jul 7 06:07:00.313953 kubelet[2689]: E0707 06:07:00.313647 2689 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1f71364d26db4060f13b3a9e9086a704b9ca0f3c75ef7087cfc3a6964b8d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-fzmgq"
Jul 7 06:07:00.314400 kubelet[2689]: E0707 06:07:00.313763 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-fzmgq_calico-system(143281af-f2d1-4ffd-9c52-e0695f1bd411)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-fzmgq_calico-system(143281af-f2d1-4ffd-9c52-e0695f1bd411)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a1f71364d26db4060f13b3a9e9086a704b9ca0f3c75ef7087cfc3a6964b8d8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-fzmgq" podUID="143281af-f2d1-4ffd-9c52-e0695f1bd411"
Jul 7 06:07:00.316394 containerd[1528]: time="2025-07-07T06:07:00.316009177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-646d79df8b-ztbd2,Uid:60f633c1-26a6-485a-9167-1fd8780fa5a3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e16a9e3eda72f33045320ea0ee4e201509da75bc786783951b25825025139228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.316992 containerd[1528]: time="2025-07-07T06:07:00.316825144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf678cc66-klsvh,Uid:a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aa3eab3d52c03298f59c363c7e4de308288bf6286909beaeb55d518ae84dcba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.317260 kubelet[2689]: E0707 06:07:00.317095 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e16a9e3eda72f33045320ea0ee4e201509da75bc786783951b25825025139228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.317260 kubelet[2689]: E0707 06:07:00.317162 2689 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e16a9e3eda72f33045320ea0ee4e201509da75bc786783951b25825025139228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-646d79df8b-ztbd2"
Jul 7 06:07:00.317260 kubelet[2689]: E0707 06:07:00.317183 2689 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e16a9e3eda72f33045320ea0ee4e201509da75bc786783951b25825025139228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-646d79df8b-ztbd2"
Jul 7 06:07:00.317537 kubelet[2689]: E0707 06:07:00.317249 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-646d79df8b-ztbd2_calico-system(60f633c1-26a6-485a-9167-1fd8780fa5a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-646d79df8b-ztbd2_calico-system(60f633c1-26a6-485a-9167-1fd8780fa5a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e16a9e3eda72f33045320ea0ee4e201509da75bc786783951b25825025139228\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-646d79df8b-ztbd2" podUID="60f633c1-26a6-485a-9167-1fd8780fa5a3"
Jul 7 06:07:00.318321 containerd[1528]: time="2025-07-07T06:07:00.318278515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mb6xr,Uid:7737afa6-67e6-4fbb-94b1-86b8a7e9913f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"976e58bedb5ce0a89ce842640a6ece5e69598f172213d5b128ab2dcad5f2530a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.318621 kubelet[2689]: E0707 06:07:00.318387 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aa3eab3d52c03298f59c363c7e4de308288bf6286909beaeb55d518ae84dcba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:07:00.318621 kubelet[2689]: E0707 06:07:00.318435 2689 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aa3eab3d52c03298f59c363c7e4de308288bf6286909beaeb55d518ae84dcba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf678cc66-klsvh"
Jul 7 06:07:00.318621 kubelet[2689]: E0707 06:07:00.318457 2689 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aa3eab3d52c03298f59c363c7e4de308288bf6286909beaeb55d518ae84dcba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf678cc66-klsvh"
Jul 7 06:07:00.318763 kubelet[2689]: E0707 06:07:00.318523 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cf678cc66-klsvh_calico-apiserver(a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod
\\\"calico-apiserver-7cf678cc66-klsvh_calico-apiserver(a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1aa3eab3d52c03298f59c363c7e4de308288bf6286909beaeb55d518ae84dcba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf678cc66-klsvh" podUID="a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96" Jul 7 06:07:00.319120 kubelet[2689]: E0707 06:07:00.318929 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976e58bedb5ce0a89ce842640a6ece5e69598f172213d5b128ab2dcad5f2530a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:07:00.319120 kubelet[2689]: E0707 06:07:00.318968 2689 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976e58bedb5ce0a89ce842640a6ece5e69598f172213d5b128ab2dcad5f2530a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mb6xr" Jul 7 06:07:00.319120 kubelet[2689]: E0707 06:07:00.318986 2689 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976e58bedb5ce0a89ce842640a6ece5e69598f172213d5b128ab2dcad5f2530a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mb6xr" Jul 7 06:07:00.319279 kubelet[2689]: E0707 
06:07:00.319025 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mb6xr_kube-system(7737afa6-67e6-4fbb-94b1-86b8a7e9913f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mb6xr_kube-system(7737afa6-67e6-4fbb-94b1-86b8a7e9913f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"976e58bedb5ce0a89ce842640a6ece5e69598f172213d5b128ab2dcad5f2530a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mb6xr" podUID="7737afa6-67e6-4fbb-94b1-86b8a7e9913f" Jul 7 06:07:00.319817 containerd[1528]: time="2025-07-07T06:07:00.319707253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dm6qm,Uid:60b977d1-300d-49f2-a296-68ff47a7eef0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9a9f5f683cffbae7d2ffbf66b9adcf2abd716b42e8475f09c7ce32d0923eddf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:07:00.320450 kubelet[2689]: E0707 06:07:00.320404 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9a9f5f683cffbae7d2ffbf66b9adcf2abd716b42e8475f09c7ce32d0923eddf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:07:00.320916 kubelet[2689]: E0707 06:07:00.320877 2689 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e9a9f5f683cffbae7d2ffbf66b9adcf2abd716b42e8475f09c7ce32d0923eddf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dm6qm" Jul 7 06:07:00.320916 kubelet[2689]: E0707 06:07:00.320910 2689 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9a9f5f683cffbae7d2ffbf66b9adcf2abd716b42e8475f09c7ce32d0923eddf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dm6qm" Jul 7 06:07:00.321124 kubelet[2689]: E0707 06:07:00.320962 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dm6qm_kube-system(60b977d1-300d-49f2-a296-68ff47a7eef0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dm6qm_kube-system(60b977d1-300d-49f2-a296-68ff47a7eef0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9a9f5f683cffbae7d2ffbf66b9adcf2abd716b42e8475f09c7ce32d0923eddf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dm6qm" podUID="60b977d1-300d-49f2-a296-68ff47a7eef0" Jul 7 06:07:00.323163 containerd[1528]: time="2025-07-07T06:07:00.323114975Z" level=error msg="Failed to destroy network for sandbox \"e4523cd736323c6a64a59e54577acf85feccef06f8473d4d76d6ab631cdfd881\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:07:00.324503 
containerd[1528]: time="2025-07-07T06:07:00.324342065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd4495d89-c7kgs,Uid:7780537f-a70e-4007-81ea-55353b907d7d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4523cd736323c6a64a59e54577acf85feccef06f8473d4d76d6ab631cdfd881\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:07:00.325333 kubelet[2689]: E0707 06:07:00.325289 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4523cd736323c6a64a59e54577acf85feccef06f8473d4d76d6ab631cdfd881\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:07:00.325421 kubelet[2689]: E0707 06:07:00.325367 2689 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4523cd736323c6a64a59e54577acf85feccef06f8473d4d76d6ab631cdfd881\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd4495d89-c7kgs" Jul 7 06:07:00.325421 kubelet[2689]: E0707 06:07:00.325399 2689 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4523cd736323c6a64a59e54577acf85feccef06f8473d4d76d6ab631cdfd881\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-cd4495d89-c7kgs" Jul 7 06:07:00.325505 kubelet[2689]: E0707 06:07:00.325455 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cd4495d89-c7kgs_calico-system(7780537f-a70e-4007-81ea-55353b907d7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cd4495d89-c7kgs_calico-system(7780537f-a70e-4007-81ea-55353b907d7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4523cd736323c6a64a59e54577acf85feccef06f8473d4d76d6ab631cdfd881\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd4495d89-c7kgs" podUID="7780537f-a70e-4007-81ea-55353b907d7d" Jul 7 06:07:00.645998 systemd[1]: Created slice kubepods-besteffort-podb87fa73e_26f3_47b2_81b4_1f7282c22904.slice - libcontainer container kubepods-besteffort-podb87fa73e_26f3_47b2_81b4_1f7282c22904.slice. 
Jul 7 06:07:00.650761 containerd[1528]: time="2025-07-07T06:07:00.650721821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r7p5z,Uid:b87fa73e-26f3-47b2-81b4-1f7282c22904,Namespace:calico-system,Attempt:0,}" Jul 7 06:07:00.744509 containerd[1528]: time="2025-07-07T06:07:00.744453511Z" level=error msg="Failed to destroy network for sandbox \"af9b51a81eae3c72b013cc532b4045e376812b91c202cc46f0543f0a11814e86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:07:00.746228 containerd[1528]: time="2025-07-07T06:07:00.746101488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r7p5z,Uid:b87fa73e-26f3-47b2-81b4-1f7282c22904,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9b51a81eae3c72b013cc532b4045e376812b91c202cc46f0543f0a11814e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:07:00.747004 kubelet[2689]: E0707 06:07:00.746396 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9b51a81eae3c72b013cc532b4045e376812b91c202cc46f0543f0a11814e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:07:00.747004 kubelet[2689]: E0707 06:07:00.746469 2689 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9b51a81eae3c72b013cc532b4045e376812b91c202cc46f0543f0a11814e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r7p5z" Jul 7 06:07:00.747004 kubelet[2689]: E0707 06:07:00.746500 2689 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9b51a81eae3c72b013cc532b4045e376812b91c202cc46f0543f0a11814e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r7p5z" Jul 7 06:07:00.747427 kubelet[2689]: E0707 06:07:00.746575 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r7p5z_calico-system(b87fa73e-26f3-47b2-81b4-1f7282c22904)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r7p5z_calico-system(b87fa73e-26f3-47b2-81b4-1f7282c22904)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af9b51a81eae3c72b013cc532b4045e376812b91c202cc46f0543f0a11814e86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r7p5z" podUID="b87fa73e-26f3-47b2-81b4-1f7282c22904" Jul 7 06:07:00.837741 systemd[1]: run-netns-cni\x2d580f1249\x2d9c4a\x2ddadf\x2d0693\x2d411e01b41bb8.mount: Deactivated successfully. Jul 7 06:07:07.269312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2436333688.mount: Deactivated successfully. 
Jul 7 06:07:07.309496 containerd[1528]: time="2025-07-07T06:07:07.309281678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:07.309496 containerd[1528]: time="2025-07-07T06:07:07.309356488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 06:07:07.310888 containerd[1528]: time="2025-07-07T06:07:07.310825679Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:07.312965 containerd[1528]: time="2025-07-07T06:07:07.312256638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:07.312965 containerd[1528]: time="2025-07-07T06:07:07.312846600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.526691705s" Jul 7 06:07:07.312965 containerd[1528]: time="2025-07-07T06:07:07.312875783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 06:07:07.349105 containerd[1528]: time="2025-07-07T06:07:07.349022763Z" level=info msg="CreateContainer within sandbox \"3a71ed0ece3a493ec00a9064170c6e2bf37ed1fe0b382644a9409913e29d0153\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:07:07.357812 containerd[1528]: time="2025-07-07T06:07:07.356709428Z" level=info msg="Container 
5084cb89c54896e54ba7767a90dab4c98b6794561d892da3f2deb22306e6d137: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:07:07.360967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1244445364.mount: Deactivated successfully. Jul 7 06:07:07.373715 containerd[1528]: time="2025-07-07T06:07:07.373659113Z" level=info msg="CreateContainer within sandbox \"3a71ed0ece3a493ec00a9064170c6e2bf37ed1fe0b382644a9409913e29d0153\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5084cb89c54896e54ba7767a90dab4c98b6794561d892da3f2deb22306e6d137\"" Jul 7 06:07:07.375645 containerd[1528]: time="2025-07-07T06:07:07.375612816Z" level=info msg="StartContainer for \"5084cb89c54896e54ba7767a90dab4c98b6794561d892da3f2deb22306e6d137\"" Jul 7 06:07:07.380720 containerd[1528]: time="2025-07-07T06:07:07.380609161Z" level=info msg="connecting to shim 5084cb89c54896e54ba7767a90dab4c98b6794561d892da3f2deb22306e6d137" address="unix:///run/containerd/s/d41663683ed8b3486088f7039bcb6917c1314e311e0a5642f5f636275778be7a" protocol=ttrpc version=3 Jul 7 06:07:07.511071 systemd[1]: Started cri-containerd-5084cb89c54896e54ba7767a90dab4c98b6794561d892da3f2deb22306e6d137.scope - libcontainer container 5084cb89c54896e54ba7767a90dab4c98b6794561d892da3f2deb22306e6d137. Jul 7 06:07:07.622648 containerd[1528]: time="2025-07-07T06:07:07.622610926Z" level=info msg="StartContainer for \"5084cb89c54896e54ba7767a90dab4c98b6794561d892da3f2deb22306e6d137\" returns successfully" Jul 7 06:07:07.834137 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:07:07.834975 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 7 06:07:08.145232 kubelet[2689]: I0707 06:07:08.142500 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r978z" podStartSLOduration=2.224466717 podStartE2EDuration="20.142468417s" podCreationTimestamp="2025-07-07 06:06:48 +0000 UTC" firstStartedPulling="2025-07-07 06:06:49.39601571 +0000 UTC m=+22.919097448" lastFinishedPulling="2025-07-07 06:07:07.314017413 +0000 UTC m=+40.837099148" observedRunningTime="2025-07-07 06:07:07.861329897 +0000 UTC m=+41.384411662" watchObservedRunningTime="2025-07-07 06:07:08.142468417 +0000 UTC m=+41.665550174" Jul 7 06:07:08.184867 kubelet[2689]: I0707 06:07:08.183719 2689 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60f633c1-26a6-485a-9167-1fd8780fa5a3-whisker-ca-bundle\") pod \"60f633c1-26a6-485a-9167-1fd8780fa5a3\" (UID: \"60f633c1-26a6-485a-9167-1fd8780fa5a3\") " Jul 7 06:07:08.185175 kubelet[2689]: I0707 06:07:08.185149 2689 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60f633c1-26a6-485a-9167-1fd8780fa5a3-whisker-backend-key-pair\") pod \"60f633c1-26a6-485a-9167-1fd8780fa5a3\" (UID: \"60f633c1-26a6-485a-9167-1fd8780fa5a3\") " Jul 7 06:07:08.185301 kubelet[2689]: I0707 06:07:08.185287 2689 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bg87\" (UniqueName: \"kubernetes.io/projected/60f633c1-26a6-485a-9167-1fd8780fa5a3-kube-api-access-6bg87\") pod \"60f633c1-26a6-485a-9167-1fd8780fa5a3\" (UID: \"60f633c1-26a6-485a-9167-1fd8780fa5a3\") " Jul 7 06:07:08.187420 kubelet[2689]: I0707 06:07:08.184459 2689 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60f633c1-26a6-485a-9167-1fd8780fa5a3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "60f633c1-26a6-485a-9167-1fd8780fa5a3" 
(UID: "60f633c1-26a6-485a-9167-1fd8780fa5a3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:07:08.193676 kubelet[2689]: I0707 06:07:08.193309 2689 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f633c1-26a6-485a-9167-1fd8780fa5a3-kube-api-access-6bg87" (OuterVolumeSpecName: "kube-api-access-6bg87") pod "60f633c1-26a6-485a-9167-1fd8780fa5a3" (UID: "60f633c1-26a6-485a-9167-1fd8780fa5a3"). InnerVolumeSpecName "kube-api-access-6bg87". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:07:08.195973 kubelet[2689]: I0707 06:07:08.195910 2689 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f633c1-26a6-485a-9167-1fd8780fa5a3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "60f633c1-26a6-485a-9167-1fd8780fa5a3" (UID: "60f633c1-26a6-485a-9167-1fd8780fa5a3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:07:08.274412 systemd[1]: var-lib-kubelet-pods-60f633c1\x2d26a6\x2d485a\x2d9167\x2d1fd8780fa5a3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 06:07:08.274588 systemd[1]: var-lib-kubelet-pods-60f633c1\x2d26a6\x2d485a\x2d9167\x2d1fd8780fa5a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6bg87.mount: Deactivated successfully. 
Jul 7 06:07:08.286303 kubelet[2689]: I0707 06:07:08.286239 2689 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60f633c1-26a6-485a-9167-1fd8780fa5a3-whisker-ca-bundle\") on node \"ci-4372.0.1-6-9e8df2071f\" DevicePath \"\"" Jul 7 06:07:08.286499 kubelet[2689]: I0707 06:07:08.286373 2689 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60f633c1-26a6-485a-9167-1fd8780fa5a3-whisker-backend-key-pair\") on node \"ci-4372.0.1-6-9e8df2071f\" DevicePath \"\"" Jul 7 06:07:08.286499 kubelet[2689]: I0707 06:07:08.286387 2689 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6bg87\" (UniqueName: \"kubernetes.io/projected/60f633c1-26a6-485a-9167-1fd8780fa5a3-kube-api-access-6bg87\") on node \"ci-4372.0.1-6-9e8df2071f\" DevicePath \"\"" Jul 7 06:07:08.646462 systemd[1]: Removed slice kubepods-besteffort-pod60f633c1_26a6_485a_9167_1fd8780fa5a3.slice - libcontainer container kubepods-besteffort-pod60f633c1_26a6_485a_9167_1fd8780fa5a3.slice. Jul 7 06:07:08.829152 kubelet[2689]: I0707 06:07:08.829102 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:07:08.911390 systemd[1]: Created slice kubepods-besteffort-pod3b9ff975_9ca6_4576_aedc_9338f36ee626.slice - libcontainer container kubepods-besteffort-pod3b9ff975_9ca6_4576_aedc_9338f36ee626.slice. 
Jul 7 06:07:08.992065 kubelet[2689]: I0707 06:07:08.992005 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl72n\" (UniqueName: \"kubernetes.io/projected/3b9ff975-9ca6-4576-aedc-9338f36ee626-kube-api-access-dl72n\") pod \"whisker-966b5bb58-5j6mc\" (UID: \"3b9ff975-9ca6-4576-aedc-9338f36ee626\") " pod="calico-system/whisker-966b5bb58-5j6mc" Jul 7 06:07:08.992065 kubelet[2689]: I0707 06:07:08.992062 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b9ff975-9ca6-4576-aedc-9338f36ee626-whisker-ca-bundle\") pod \"whisker-966b5bb58-5j6mc\" (UID: \"3b9ff975-9ca6-4576-aedc-9338f36ee626\") " pod="calico-system/whisker-966b5bb58-5j6mc" Jul 7 06:07:08.992350 kubelet[2689]: I0707 06:07:08.992109 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b9ff975-9ca6-4576-aedc-9338f36ee626-whisker-backend-key-pair\") pod \"whisker-966b5bb58-5j6mc\" (UID: \"3b9ff975-9ca6-4576-aedc-9338f36ee626\") " pod="calico-system/whisker-966b5bb58-5j6mc" Jul 7 06:07:09.221523 containerd[1528]: time="2025-07-07T06:07:09.221194409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-966b5bb58-5j6mc,Uid:3b9ff975-9ca6-4576-aedc-9338f36ee626,Namespace:calico-system,Attempt:0,}" Jul 7 06:07:09.577829 systemd-networkd[1441]: cali1e4001548ee: Link UP Jul 7 06:07:09.580385 systemd-networkd[1441]: cali1e4001548ee: Gained carrier Jul 7 06:07:09.609316 containerd[1528]: 2025-07-07 06:07:09.299 [INFO][3767] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:07:09.609316 containerd[1528]: 2025-07-07 06:07:09.334 [INFO][3767] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0 whisker-966b5bb58- calico-system 3b9ff975-9ca6-4576-aedc-9338f36ee626 912 0 2025-07-07 06:07:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:966b5bb58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.1-6-9e8df2071f whisker-966b5bb58-5j6mc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1e4001548ee [] [] }} ContainerID="ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" Namespace="calico-system" Pod="whisker-966b5bb58-5j6mc" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-" Jul 7 06:07:09.609316 containerd[1528]: 2025-07-07 06:07:09.334 [INFO][3767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" Namespace="calico-system" Pod="whisker-966b5bb58-5j6mc" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0" Jul 7 06:07:09.609316 containerd[1528]: 2025-07-07 06:07:09.488 [INFO][3775] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" HandleID="k8s-pod-network.ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" Workload="ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0" Jul 7 06:07:09.609899 containerd[1528]: 2025-07-07 06:07:09.490 [INFO][3775] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" HandleID="k8s-pod-network.ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" Workload="ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033e6a0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4372.0.1-6-9e8df2071f", "pod":"whisker-966b5bb58-5j6mc", "timestamp":"2025-07-07 06:07:09.488621123 +0000 UTC"}, Hostname:"ci-4372.0.1-6-9e8df2071f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:07:09.609899 containerd[1528]: 2025-07-07 06:07:09.490 [INFO][3775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:07:09.609899 containerd[1528]: 2025-07-07 06:07:09.491 [INFO][3775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:07:09.609899 containerd[1528]: 2025-07-07 06:07:09.491 [INFO][3775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-6-9e8df2071f' Jul 7 06:07:09.609899 containerd[1528]: 2025-07-07 06:07:09.504 [INFO][3775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:09.609899 containerd[1528]: 2025-07-07 06:07:09.516 [INFO][3775] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:09.609899 containerd[1528]: 2025-07-07 06:07:09.525 [INFO][3775] ipam/ipam.go 511: Trying affinity for 192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:09.609899 containerd[1528]: 2025-07-07 06:07:09.528 [INFO][3775] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:09.609899 containerd[1528]: 2025-07-07 06:07:09.531 [INFO][3775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:09.610216 containerd[1528]: 2025-07-07 06:07:09.531 [INFO][3775] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.67.128/26 
handle="k8s-pod-network.ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:09.610216 containerd[1528]: 2025-07-07 06:07:09.534 [INFO][3775] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424 Jul 7 06:07:09.610216 containerd[1528]: 2025-07-07 06:07:09.541 [INFO][3775] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.67.128/26 handle="k8s-pod-network.ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:09.610216 containerd[1528]: 2025-07-07 06:07:09.550 [INFO][3775] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.67.129/26] block=192.168.67.128/26 handle="k8s-pod-network.ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:09.610216 containerd[1528]: 2025-07-07 06:07:09.550 [INFO][3775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.129/26] handle="k8s-pod-network.ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:09.610216 containerd[1528]: 2025-07-07 06:07:09.550 [INFO][3775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:07:09.610216 containerd[1528]: 2025-07-07 06:07:09.550 [INFO][3775] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.129/26] IPv6=[] ContainerID="ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" HandleID="k8s-pod-network.ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" Workload="ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0" Jul 7 06:07:09.610446 containerd[1528]: 2025-07-07 06:07:09.554 [INFO][3767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" Namespace="calico-system" Pod="whisker-966b5bb58-5j6mc" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0", GenerateName:"whisker-966b5bb58-", Namespace:"calico-system", SelfLink:"", UID:"3b9ff975-9ca6-4576-aedc-9338f36ee626", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"966b5bb58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"", Pod:"whisker-966b5bb58-5j6mc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.67.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali1e4001548ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:09.610446 containerd[1528]: 2025-07-07 06:07:09.554 [INFO][3767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.129/32] ContainerID="ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" Namespace="calico-system" Pod="whisker-966b5bb58-5j6mc" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0" Jul 7 06:07:09.610535 containerd[1528]: 2025-07-07 06:07:09.554 [INFO][3767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e4001548ee ContainerID="ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" Namespace="calico-system" Pod="whisker-966b5bb58-5j6mc" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0" Jul 7 06:07:09.610535 containerd[1528]: 2025-07-07 06:07:09.582 [INFO][3767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" Namespace="calico-system" Pod="whisker-966b5bb58-5j6mc" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0" Jul 7 06:07:09.610713 containerd[1528]: 2025-07-07 06:07:09.583 [INFO][3767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" Namespace="calico-system" Pod="whisker-966b5bb58-5j6mc" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0", GenerateName:"whisker-966b5bb58-", Namespace:"calico-system", SelfLink:"", UID:"3b9ff975-9ca6-4576-aedc-9338f36ee626", 
ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 7, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"966b5bb58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424", Pod:"whisker-966b5bb58-5j6mc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.67.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1e4001548ee", MAC:"46:ff:1c:89:cc:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:09.610796 containerd[1528]: 2025-07-07 06:07:09.602 [INFO][3767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" Namespace="calico-system" Pod="whisker-966b5bb58-5j6mc" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-whisker--966b5bb58--5j6mc-eth0" Jul 7 06:07:09.691525 containerd[1528]: time="2025-07-07T06:07:09.690780274Z" level=info msg="connecting to shim ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424" address="unix:///run/containerd/s/81efc5db1ac2eccc870ed916fe0ce2e77093c414d55e016a13e0f82d8e1e50d0" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:07:09.780016 systemd[1]: Started cri-containerd-ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424.scope - libcontainer container 
ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424. Jul 7 06:07:09.883578 containerd[1528]: time="2025-07-07T06:07:09.883499661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-966b5bb58-5j6mc,Uid:3b9ff975-9ca6-4576-aedc-9338f36ee626,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424\"" Jul 7 06:07:09.890124 containerd[1528]: time="2025-07-07T06:07:09.890074663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:07:10.643747 kubelet[2689]: I0707 06:07:10.643702 2689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60f633c1-26a6-485a-9167-1fd8780fa5a3" path="/var/lib/kubelet/pods/60f633c1-26a6-485a-9167-1fd8780fa5a3/volumes" Jul 7 06:07:10.654570 systemd-networkd[1441]: vxlan.calico: Link UP Jul 7 06:07:10.654662 systemd-networkd[1441]: vxlan.calico: Gained carrier Jul 7 06:07:11.327894 containerd[1528]: time="2025-07-07T06:07:11.327836288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:11.329156 containerd[1528]: time="2025-07-07T06:07:11.329109334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 06:07:11.329610 containerd[1528]: time="2025-07-07T06:07:11.329581108Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:11.331641 containerd[1528]: time="2025-07-07T06:07:11.331597699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:11.332714 containerd[1528]: time="2025-07-07T06:07:11.332677127Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.442559203s" Jul 7 06:07:11.332807 containerd[1528]: time="2025-07-07T06:07:11.332718256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 06:07:11.336940 containerd[1528]: time="2025-07-07T06:07:11.336877426Z" level=info msg="CreateContainer within sandbox \"ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:07:11.347759 containerd[1528]: time="2025-07-07T06:07:11.346053339Z" level=info msg="Container 67fd3651fe6ae6b91c2f0cf349f126a9e12db99c3769d5b303dc3018b90a0cbc: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:07:11.353842 containerd[1528]: time="2025-07-07T06:07:11.353750979Z" level=info msg="CreateContainer within sandbox \"ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"67fd3651fe6ae6b91c2f0cf349f126a9e12db99c3769d5b303dc3018b90a0cbc\"" Jul 7 06:07:11.355034 containerd[1528]: time="2025-07-07T06:07:11.354978689Z" level=info msg="StartContainer for \"67fd3651fe6ae6b91c2f0cf349f126a9e12db99c3769d5b303dc3018b90a0cbc\"" Jul 7 06:07:11.358016 containerd[1528]: time="2025-07-07T06:07:11.357977705Z" level=info msg="connecting to shim 67fd3651fe6ae6b91c2f0cf349f126a9e12db99c3769d5b303dc3018b90a0cbc" address="unix:///run/containerd/s/81efc5db1ac2eccc870ed916fe0ce2e77093c414d55e016a13e0f82d8e1e50d0" protocol=ttrpc version=3 Jul 7 06:07:11.408001 systemd[1]: Started 
cri-containerd-67fd3651fe6ae6b91c2f0cf349f126a9e12db99c3769d5b303dc3018b90a0cbc.scope - libcontainer container 67fd3651fe6ae6b91c2f0cf349f126a9e12db99c3769d5b303dc3018b90a0cbc. Jul 7 06:07:11.471824 containerd[1528]: time="2025-07-07T06:07:11.471767271Z" level=info msg="StartContainer for \"67fd3651fe6ae6b91c2f0cf349f126a9e12db99c3769d5b303dc3018b90a0cbc\" returns successfully" Jul 7 06:07:11.474045 containerd[1528]: time="2025-07-07T06:07:11.474011386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:07:11.548098 systemd-networkd[1441]: cali1e4001548ee: Gained IPv6LL Jul 7 06:07:11.637077 containerd[1528]: time="2025-07-07T06:07:11.637025323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf678cc66-klsvh,Uid:a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:07:11.780918 systemd-networkd[1441]: cali8e9d0d924d2: Link UP Jul 7 06:07:11.782709 systemd-networkd[1441]: cali8e9d0d924d2: Gained carrier Jul 7 06:07:11.805838 containerd[1528]: 2025-07-07 06:07:11.692 [INFO][4063] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0 calico-apiserver-7cf678cc66- calico-apiserver a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96 844 0 2025-07-07 06:06:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cf678cc66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-6-9e8df2071f calico-apiserver-7cf678cc66-klsvh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8e9d0d924d2 [] [] }} ContainerID="6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-klsvh" 
WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-" Jul 7 06:07:11.805838 containerd[1528]: 2025-07-07 06:07:11.692 [INFO][4063] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-klsvh" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0" Jul 7 06:07:11.805838 containerd[1528]: 2025-07-07 06:07:11.727 [INFO][4076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" HandleID="k8s-pod-network.6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" Workload="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0" Jul 7 06:07:11.806137 containerd[1528]: 2025-07-07 06:07:11.728 [INFO][4076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" HandleID="k8s-pod-network.6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" Workload="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-6-9e8df2071f", "pod":"calico-apiserver-7cf678cc66-klsvh", "timestamp":"2025-07-07 06:07:11.727966985 +0000 UTC"}, Hostname:"ci-4372.0.1-6-9e8df2071f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:07:11.806137 containerd[1528]: 2025-07-07 06:07:11.728 [INFO][4076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 06:07:11.806137 containerd[1528]: 2025-07-07 06:07:11.728 [INFO][4076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:07:11.806137 containerd[1528]: 2025-07-07 06:07:11.728 [INFO][4076] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-6-9e8df2071f' Jul 7 06:07:11.806137 containerd[1528]: 2025-07-07 06:07:11.738 [INFO][4076] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:11.806137 containerd[1528]: 2025-07-07 06:07:11.745 [INFO][4076] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:11.806137 containerd[1528]: 2025-07-07 06:07:11.754 [INFO][4076] ipam/ipam.go 511: Trying affinity for 192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:11.806137 containerd[1528]: 2025-07-07 06:07:11.756 [INFO][4076] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:11.806137 containerd[1528]: 2025-07-07 06:07:11.759 [INFO][4076] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:11.808051 containerd[1528]: 2025-07-07 06:07:11.759 [INFO][4076] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.67.128/26 handle="k8s-pod-network.6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:11.808051 containerd[1528]: 2025-07-07 06:07:11.761 [INFO][4076] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61 Jul 7 06:07:11.808051 containerd[1528]: 2025-07-07 06:07:11.766 [INFO][4076] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.67.128/26 handle="k8s-pod-network.6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" 
host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:11.808051 containerd[1528]: 2025-07-07 06:07:11.773 [INFO][4076] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.67.130/26] block=192.168.67.128/26 handle="k8s-pod-network.6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:11.808051 containerd[1528]: 2025-07-07 06:07:11.773 [INFO][4076] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.130/26] handle="k8s-pod-network.6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:11.808051 containerd[1528]: 2025-07-07 06:07:11.773 [INFO][4076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:07:11.808051 containerd[1528]: 2025-07-07 06:07:11.773 [INFO][4076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.130/26] IPv6=[] ContainerID="6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" HandleID="k8s-pod-network.6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" Workload="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0" Jul 7 06:07:11.808381 containerd[1528]: 2025-07-07 06:07:11.777 [INFO][4063] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-klsvh" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0", GenerateName:"calico-apiserver-7cf678cc66-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 45, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf678cc66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"", Pod:"calico-apiserver-7cf678cc66-klsvh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e9d0d924d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:11.808502 containerd[1528]: 2025-07-07 06:07:11.777 [INFO][4063] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.130/32] ContainerID="6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-klsvh" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0" Jul 7 06:07:11.808502 containerd[1528]: 2025-07-07 06:07:11.777 [INFO][4063] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e9d0d924d2 ContainerID="6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-klsvh" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0" Jul 7 06:07:11.808502 containerd[1528]: 2025-07-07 06:07:11.783 [INFO][4063] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-klsvh" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0" Jul 7 06:07:11.808630 containerd[1528]: 2025-07-07 06:07:11.786 [INFO][4063] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-klsvh" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0", GenerateName:"calico-apiserver-7cf678cc66-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf678cc66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61", Pod:"calico-apiserver-7cf678cc66-klsvh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e9d0d924d2", MAC:"7e:1f:9b:ae:5e:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:11.808716 containerd[1528]: 2025-07-07 06:07:11.799 [INFO][4063] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-klsvh" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--klsvh-eth0" Jul 7 06:07:11.862859 containerd[1528]: time="2025-07-07T06:07:11.862057662Z" level=info msg="connecting to shim 6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61" address="unix:///run/containerd/s/e4b3a3843002b643dec2f37d92f2c67f2afb1d782f2dbba2661e8aeeda836690" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:07:11.896047 systemd[1]: Started cri-containerd-6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61.scope - libcontainer container 6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61. 
Jul 7 06:07:11.952512 containerd[1528]: time="2025-07-07T06:07:11.952437367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf678cc66-klsvh,Uid:a4a67136-fb4a-44b4-a9df-5a7ebb1d0a96,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61\"" Jul 7 06:07:12.125016 systemd-networkd[1441]: vxlan.calico: Gained IPv6LL Jul 7 06:07:12.645028 kubelet[2689]: E0707 06:07:12.643619 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:07:12.645524 containerd[1528]: time="2025-07-07T06:07:12.644737478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fzmgq,Uid:143281af-f2d1-4ffd-9c52-e0695f1bd411,Namespace:calico-system,Attempt:0,}" Jul 7 06:07:12.649022 containerd[1528]: time="2025-07-07T06:07:12.648660854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dm6qm,Uid:60b977d1-300d-49f2-a296-68ff47a7eef0,Namespace:kube-system,Attempt:0,}" Jul 7 06:07:12.843910 systemd-networkd[1441]: cali6b04e48307b: Link UP Jul 7 06:07:12.853682 systemd-networkd[1441]: cali6b04e48307b: Gained carrier Jul 7 06:07:12.879799 containerd[1528]: 2025-07-07 06:07:12.726 [INFO][4135] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0 coredns-674b8bbfcf- kube-system 60b977d1-300d-49f2-a296-68ff47a7eef0 841 0 2025-07-07 06:06:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-6-9e8df2071f coredns-674b8bbfcf-dm6qm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b04e48307b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } 
{metrics TCP 9153 0 }] [] }} ContainerID="87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm6qm" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-" Jul 7 06:07:12.879799 containerd[1528]: 2025-07-07 06:07:12.726 [INFO][4135] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm6qm" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0" Jul 7 06:07:12.879799 containerd[1528]: 2025-07-07 06:07:12.773 [INFO][4158] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" HandleID="k8s-pod-network.87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" Workload="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0" Jul 7 06:07:12.880105 containerd[1528]: 2025-07-07 06:07:12.773 [INFO][4158] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" HandleID="k8s-pod-network.87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" Workload="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf1d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-6-9e8df2071f", "pod":"coredns-674b8bbfcf-dm6qm", "timestamp":"2025-07-07 06:07:12.773478862 +0000 UTC"}, Hostname:"ci-4372.0.1-6-9e8df2071f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:07:12.880105 containerd[1528]: 2025-07-07 06:07:12.773 [INFO][4158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 06:07:12.880105 containerd[1528]: 2025-07-07 06:07:12.773 [INFO][4158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:07:12.880105 containerd[1528]: 2025-07-07 06:07:12.773 [INFO][4158] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-6-9e8df2071f' Jul 7 06:07:12.880105 containerd[1528]: 2025-07-07 06:07:12.785 [INFO][4158] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:12.880105 containerd[1528]: 2025-07-07 06:07:12.792 [INFO][4158] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:12.880105 containerd[1528]: 2025-07-07 06:07:12.799 [INFO][4158] ipam/ipam.go 511: Trying affinity for 192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:12.880105 containerd[1528]: 2025-07-07 06:07:12.801 [INFO][4158] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:12.880105 containerd[1528]: 2025-07-07 06:07:12.804 [INFO][4158] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:12.880401 containerd[1528]: 2025-07-07 06:07:12.804 [INFO][4158] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.67.128/26 handle="k8s-pod-network.87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:12.880401 containerd[1528]: 2025-07-07 06:07:12.806 [INFO][4158] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d Jul 7 06:07:12.880401 containerd[1528]: 2025-07-07 06:07:12.812 [INFO][4158] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.67.128/26 handle="k8s-pod-network.87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" 
host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:12.880401 containerd[1528]: 2025-07-07 06:07:12.819 [INFO][4158] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.67.131/26] block=192.168.67.128/26 handle="k8s-pod-network.87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:12.880401 containerd[1528]: 2025-07-07 06:07:12.820 [INFO][4158] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.131/26] handle="k8s-pod-network.87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:12.880401 containerd[1528]: 2025-07-07 06:07:12.820 [INFO][4158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:07:12.880401 containerd[1528]: 2025-07-07 06:07:12.821 [INFO][4158] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.131/26] IPv6=[] ContainerID="87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" HandleID="k8s-pod-network.87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" Workload="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0" Jul 7 06:07:12.880592 containerd[1528]: 2025-07-07 06:07:12.827 [INFO][4135] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm6qm" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"60b977d1-300d-49f2-a296-68ff47a7eef0", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"", Pod:"coredns-674b8bbfcf-dm6qm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b04e48307b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:12.880592 containerd[1528]: 2025-07-07 06:07:12.827 [INFO][4135] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.131/32] ContainerID="87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm6qm" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0" Jul 7 06:07:12.880592 containerd[1528]: 2025-07-07 06:07:12.827 [INFO][4135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b04e48307b ContainerID="87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm6qm" 
WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0" Jul 7 06:07:12.880592 containerd[1528]: 2025-07-07 06:07:12.854 [INFO][4135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm6qm" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0" Jul 7 06:07:12.880592 containerd[1528]: 2025-07-07 06:07:12.856 [INFO][4135] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm6qm" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"60b977d1-300d-49f2-a296-68ff47a7eef0", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d", Pod:"coredns-674b8bbfcf-dm6qm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.131/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b04e48307b", MAC:"1e:ef:47:02:f3:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:12.880592 containerd[1528]: 2025-07-07 06:07:12.873 [INFO][4135] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm6qm" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--dm6qm-eth0" Jul 7 06:07:12.935202 containerd[1528]: time="2025-07-07T06:07:12.935084537Z" level=info msg="connecting to shim 87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d" address="unix:///run/containerd/s/47183beff27e524b726d6b543fbb848821a2ddb529d139b7009881ffda1967ef" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:07:12.998137 systemd-networkd[1441]: cali6fecfffaf4c: Link UP Jul 7 06:07:13.000278 systemd-networkd[1441]: cali6fecfffaf4c: Gained carrier Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.737 [INFO][4144] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0 goldmane-768f4c5c69- calico-system 143281af-f2d1-4ffd-9c52-e0695f1bd411 846 0 2025-07-07 06:06:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.1-6-9e8df2071f goldmane-768f4c5c69-fzmgq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6fecfffaf4c [] [] }} ContainerID="fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" Namespace="calico-system" Pod="goldmane-768f4c5c69-fzmgq" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-" Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.737 [INFO][4144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" Namespace="calico-system" Pod="goldmane-768f4c5c69-fzmgq" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0" Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.787 [INFO][4163] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" HandleID="k8s-pod-network.fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" Workload="ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0" Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.788 [INFO][4163] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" HandleID="k8s-pod-network.fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" Workload="ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-6-9e8df2071f", "pod":"goldmane-768f4c5c69-fzmgq", "timestamp":"2025-07-07 06:07:12.787362455 +0000 UTC"}, Hostname:"ci-4372.0.1-6-9e8df2071f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.788 [INFO][4163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.820 [INFO][4163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.820 [INFO][4163] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-6-9e8df2071f' Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.888 [INFO][4163] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.903 [INFO][4163] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.927 [INFO][4163] ipam/ipam.go 511: Trying affinity for 192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.935 [INFO][4163] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.941 [INFO][4163] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.941 [INFO][4163] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.67.128/26 handle="k8s-pod-network.fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.947 [INFO][4163] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735 Jul 7 06:07:13.035016 containerd[1528]: 
2025-07-07 06:07:12.955 [INFO][4163] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.67.128/26 handle="k8s-pod-network.fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.983 [INFO][4163] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.67.132/26] block=192.168.67.128/26 handle="k8s-pod-network.fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.984 [INFO][4163] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.132/26] handle="k8s-pod-network.fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.984 [INFO][4163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:07:13.035016 containerd[1528]: 2025-07-07 06:07:12.984 [INFO][4163] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.132/26] IPv6=[] ContainerID="fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" HandleID="k8s-pod-network.fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" Workload="ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0" Jul 7 06:07:13.035767 containerd[1528]: 2025-07-07 06:07:12.991 [INFO][4144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" Namespace="calico-system" Pod="goldmane-768f4c5c69-fzmgq" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", 
UID:"143281af-f2d1-4ffd-9c52-e0695f1bd411", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"", Pod:"goldmane-768f4c5c69-fzmgq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.67.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6fecfffaf4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:13.035767 containerd[1528]: 2025-07-07 06:07:12.992 [INFO][4144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.132/32] ContainerID="fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" Namespace="calico-system" Pod="goldmane-768f4c5c69-fzmgq" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0" Jul 7 06:07:13.035767 containerd[1528]: 2025-07-07 06:07:12.992 [INFO][4144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fecfffaf4c ContainerID="fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" Namespace="calico-system" Pod="goldmane-768f4c5c69-fzmgq" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0" Jul 7 06:07:13.035767 containerd[1528]: 2025-07-07 06:07:12.999 [INFO][4144] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" Namespace="calico-system" Pod="goldmane-768f4c5c69-fzmgq" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0" Jul 7 06:07:13.035767 containerd[1528]: 2025-07-07 06:07:13.000 [INFO][4144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" Namespace="calico-system" Pod="goldmane-768f4c5c69-fzmgq" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"143281af-f2d1-4ffd-9c52-e0695f1bd411", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735", Pod:"goldmane-768f4c5c69-fzmgq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.67.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6fecfffaf4c", 
MAC:"a2:62:85:2d:e7:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:13.035767 containerd[1528]: 2025-07-07 06:07:13.020 [INFO][4144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" Namespace="calico-system" Pod="goldmane-768f4c5c69-fzmgq" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-goldmane--768f4c5c69--fzmgq-eth0" Jul 7 06:07:13.047062 systemd[1]: Started cri-containerd-87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d.scope - libcontainer container 87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d. Jul 7 06:07:13.103776 containerd[1528]: time="2025-07-07T06:07:13.101393739Z" level=info msg="connecting to shim fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735" address="unix:///run/containerd/s/69300c7b72d9b953f209c4897a634438f92907ebc415033477b376ad6b918188" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:07:13.148744 systemd-networkd[1441]: cali8e9d0d924d2: Gained IPv6LL Jul 7 06:07:13.156063 systemd[1]: Started cri-containerd-fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735.scope - libcontainer container fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735. 
Jul 7 06:07:13.204840 containerd[1528]: time="2025-07-07T06:07:13.204778355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dm6qm,Uid:60b977d1-300d-49f2-a296-68ff47a7eef0,Namespace:kube-system,Attempt:0,} returns sandbox id \"87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d\"" Jul 7 06:07:13.206595 kubelet[2689]: E0707 06:07:13.206100 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:07:13.215336 containerd[1528]: time="2025-07-07T06:07:13.215283061Z" level=info msg="CreateContainer within sandbox \"87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:07:13.236317 containerd[1528]: time="2025-07-07T06:07:13.236269248Z" level=info msg="Container aba068b338b1ec31b3996bd9b0f69ee3eee6c81e8a85c42e7844b54b51ff9bcc: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:07:13.248439 containerd[1528]: time="2025-07-07T06:07:13.248350602Z" level=info msg="CreateContainer within sandbox \"87896385543c641d76dd797596d213c28bdc714d0f7cadd8ac7ef202ccde217d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aba068b338b1ec31b3996bd9b0f69ee3eee6c81e8a85c42e7844b54b51ff9bcc\"" Jul 7 06:07:13.251262 containerd[1528]: time="2025-07-07T06:07:13.251220880Z" level=info msg="StartContainer for \"aba068b338b1ec31b3996bd9b0f69ee3eee6c81e8a85c42e7844b54b51ff9bcc\"" Jul 7 06:07:13.261690 containerd[1528]: time="2025-07-07T06:07:13.261611608Z" level=info msg="connecting to shim aba068b338b1ec31b3996bd9b0f69ee3eee6c81e8a85c42e7844b54b51ff9bcc" address="unix:///run/containerd/s/47183beff27e524b726d6b543fbb848821a2ddb529d139b7009881ffda1967ef" protocol=ttrpc version=3 Jul 7 06:07:13.279371 containerd[1528]: time="2025-07-07T06:07:13.279332152Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-fzmgq,Uid:143281af-f2d1-4ffd-9c52-e0695f1bd411,Namespace:calico-system,Attempt:0,} returns sandbox id \"fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735\"" Jul 7 06:07:13.300120 systemd[1]: Started cri-containerd-aba068b338b1ec31b3996bd9b0f69ee3eee6c81e8a85c42e7844b54b51ff9bcc.scope - libcontainer container aba068b338b1ec31b3996bd9b0f69ee3eee6c81e8a85c42e7844b54b51ff9bcc. Jul 7 06:07:13.394528 containerd[1528]: time="2025-07-07T06:07:13.394401743Z" level=info msg="StartContainer for \"aba068b338b1ec31b3996bd9b0f69ee3eee6c81e8a85c42e7844b54b51ff9bcc\" returns successfully" Jul 7 06:07:13.636802 containerd[1528]: time="2025-07-07T06:07:13.636742907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf678cc66-7gbjr,Uid:8d1a5bc4-00cc-451d-82cf-0d08adaef63c,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:07:13.873951 kubelet[2689]: E0707 06:07:13.873884 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:07:13.911149 kubelet[2689]: I0707 06:07:13.909176 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dm6qm" podStartSLOduration=42.909153016 podStartE2EDuration="42.909153016s" podCreationTimestamp="2025-07-07 06:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:07:13.902993526 +0000 UTC m=+47.426075285" watchObservedRunningTime="2025-07-07 06:07:13.909153016 +0000 UTC m=+47.432234793" Jul 7 06:07:13.988331 systemd-networkd[1441]: calicc3bc01b52f: Link UP Jul 7 06:07:13.990879 systemd-networkd[1441]: calicc3bc01b52f: Gained carrier Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.758 [INFO][4323] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0 calico-apiserver-7cf678cc66- calico-apiserver 8d1a5bc4-00cc-451d-82cf-0d08adaef63c 843 0 2025-07-07 06:06:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cf678cc66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-6-9e8df2071f calico-apiserver-7cf678cc66-7gbjr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicc3bc01b52f [] [] }} ContainerID="c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-7gbjr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-" Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.758 [INFO][4323] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-7gbjr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0" Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.835 [INFO][4335] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" HandleID="k8s-pod-network.c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" Workload="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0" Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.835 [INFO][4335] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" HandleID="k8s-pod-network.c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" 
Workload="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-6-9e8df2071f", "pod":"calico-apiserver-7cf678cc66-7gbjr", "timestamp":"2025-07-07 06:07:13.83529788 +0000 UTC"}, Hostname:"ci-4372.0.1-6-9e8df2071f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.835 [INFO][4335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.835 [INFO][4335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.835 [INFO][4335] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-6-9e8df2071f' Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.850 [INFO][4335] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.863 [INFO][4335] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.884 [INFO][4335] ipam/ipam.go 511: Trying affinity for 192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.895 [INFO][4335] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.902 [INFO][4335] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 
06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.903 [INFO][4335] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.67.128/26 handle="k8s-pod-network.c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.916 [INFO][4335] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3 Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.938 [INFO][4335] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.67.128/26 handle="k8s-pod-network.c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.965 [INFO][4335] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.67.133/26] block=192.168.67.128/26 handle="k8s-pod-network.c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.965 [INFO][4335] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.133/26] handle="k8s-pod-network.c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.965 [INFO][4335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:07:14.041834 containerd[1528]: 2025-07-07 06:07:13.965 [INFO][4335] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.133/26] IPv6=[] ContainerID="c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" HandleID="k8s-pod-network.c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" Workload="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0" Jul 7 06:07:14.047416 containerd[1528]: 2025-07-07 06:07:13.979 [INFO][4323] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-7gbjr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0", GenerateName:"calico-apiserver-7cf678cc66-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d1a5bc4-00cc-451d-82cf-0d08adaef63c", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf678cc66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"", Pod:"calico-apiserver-7cf678cc66-7gbjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.67.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc3bc01b52f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:14.047416 containerd[1528]: 2025-07-07 06:07:13.980 [INFO][4323] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.133/32] ContainerID="c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-7gbjr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0" Jul 7 06:07:14.047416 containerd[1528]: 2025-07-07 06:07:13.980 [INFO][4323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc3bc01b52f ContainerID="c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-7gbjr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0" Jul 7 06:07:14.047416 containerd[1528]: 2025-07-07 06:07:13.993 [INFO][4323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-7gbjr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0" Jul 7 06:07:14.047416 containerd[1528]: 2025-07-07 06:07:13.994 [INFO][4323] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-7gbjr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0", GenerateName:"calico-apiserver-7cf678cc66-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d1a5bc4-00cc-451d-82cf-0d08adaef63c", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf678cc66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3", Pod:"calico-apiserver-7cf678cc66-7gbjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc3bc01b52f", MAC:"d2:6d:67:36:af:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:14.047416 containerd[1528]: 2025-07-07 06:07:14.013 [INFO][4323] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" Namespace="calico-apiserver" Pod="calico-apiserver-7cf678cc66-7gbjr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--apiserver--7cf678cc66--7gbjr-eth0" Jul 7 06:07:14.089672 containerd[1528]: time="2025-07-07T06:07:14.089578986Z" level=info 
msg="connecting to shim c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3" address="unix:///run/containerd/s/40750aec4e09597df8cd5236fa81d8acece73f322369de492be7fae5245d8195" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:07:14.157077 systemd[1]: Started cri-containerd-c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3.scope - libcontainer container c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3. Jul 7 06:07:14.171989 systemd-networkd[1441]: cali6fecfffaf4c: Gained IPv6LL Jul 7 06:07:14.238423 containerd[1528]: time="2025-07-07T06:07:14.238372653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf678cc66-7gbjr,Uid:8d1a5bc4-00cc-451d-82cf-0d08adaef63c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3\"" Jul 7 06:07:14.428010 systemd-networkd[1441]: cali6b04e48307b: Gained IPv6LL Jul 7 06:07:14.476345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3323990011.mount: Deactivated successfully. 
Jul 7 06:07:14.515733 containerd[1528]: time="2025-07-07T06:07:14.515437902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 06:07:14.551887 containerd[1528]: time="2025-07-07T06:07:14.551826696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.077777968s" Jul 7 06:07:14.552082 containerd[1528]: time="2025-07-07T06:07:14.552064341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 06:07:14.554810 containerd[1528]: time="2025-07-07T06:07:14.553376104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:07:14.561301 containerd[1528]: time="2025-07-07T06:07:14.561250800Z" level=info msg="CreateContainer within sandbox \"ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:07:14.567214 containerd[1528]: time="2025-07-07T06:07:14.567174607Z" level=info msg="Container 968f7b2ade06fe1347b3ba86a9c49dfbaa14248653555c27e126161a6c9dd199: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:07:14.643577 kubelet[2689]: E0707 06:07:14.642898 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:07:14.665537 containerd[1528]: time="2025-07-07T06:07:14.618456296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:14.667168 containerd[1528]: time="2025-07-07T06:07:14.666518360Z" level=info msg="CreateContainer within sandbox \"ef5d2522a720626673bf298429584a3a53a60fdab8af477896d8afafd1098424\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"968f7b2ade06fe1347b3ba86a9c49dfbaa14248653555c27e126161a6c9dd199\"" Jul 7 06:07:14.667630 containerd[1528]: time="2025-07-07T06:07:14.667578164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mb6xr,Uid:7737afa6-67e6-4fbb-94b1-86b8a7e9913f,Namespace:kube-system,Attempt:0,}" Jul 7 06:07:14.680931 containerd[1528]: time="2025-07-07T06:07:14.680737837Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:14.683160 containerd[1528]: time="2025-07-07T06:07:14.682058412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:14.687140 containerd[1528]: time="2025-07-07T06:07:14.687074652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r7p5z,Uid:b87fa73e-26f3-47b2-81b4-1f7282c22904,Namespace:calico-system,Attempt:0,}" Jul 7 06:07:14.689098 containerd[1528]: time="2025-07-07T06:07:14.689047551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd4495d89-c7kgs,Uid:7780537f-a70e-4007-81ea-55353b907d7d,Namespace:calico-system,Attempt:0,}" Jul 7 06:07:14.689631 containerd[1528]: time="2025-07-07T06:07:14.689588623Z" level=info msg="StartContainer for \"968f7b2ade06fe1347b3ba86a9c49dfbaa14248653555c27e126161a6c9dd199\"" Jul 7 06:07:14.691039 containerd[1528]: time="2025-07-07T06:07:14.690961687Z" level=info msg="connecting to shim 
968f7b2ade06fe1347b3ba86a9c49dfbaa14248653555c27e126161a6c9dd199" address="unix:///run/containerd/s/81efc5db1ac2eccc870ed916fe0ce2e77093c414d55e016a13e0f82d8e1e50d0" protocol=ttrpc version=3 Jul 7 06:07:14.700923 kubelet[2689]: I0707 06:07:14.700878 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:07:14.864877 systemd[1]: Started cri-containerd-968f7b2ade06fe1347b3ba86a9c49dfbaa14248653555c27e126161a6c9dd199.scope - libcontainer container 968f7b2ade06fe1347b3ba86a9c49dfbaa14248653555c27e126161a6c9dd199. Jul 7 06:07:15.007420 kubelet[2689]: E0707 06:07:15.005565 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:07:15.217979 systemd-networkd[1441]: cali29837cef521: Link UP Jul 7 06:07:15.221368 systemd-networkd[1441]: cali29837cef521: Gained carrier Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:14.849 [INFO][4403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0 coredns-674b8bbfcf- kube-system 7737afa6-67e6-4fbb-94b1-86b8a7e9913f 845 0 2025-07-07 06:06:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-6-9e8df2071f coredns-674b8bbfcf-mb6xr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali29837cef521 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" Namespace="kube-system" Pod="coredns-674b8bbfcf-mb6xr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-" Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:14.850 [INFO][4403] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" Namespace="kube-system" Pod="coredns-674b8bbfcf-mb6xr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0" Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.000 [INFO][4467] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" HandleID="k8s-pod-network.2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" Workload="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0" Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.000 [INFO][4467] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" HandleID="k8s-pod-network.2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" Workload="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003279d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-6-9e8df2071f", "pod":"coredns-674b8bbfcf-mb6xr", "timestamp":"2025-07-07 06:07:15.00015116 +0000 UTC"}, Hostname:"ci-4372.0.1-6-9e8df2071f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.000 [INFO][4467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.000 [INFO][4467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.000 [INFO][4467] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-6-9e8df2071f' Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.036 [INFO][4467] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.057 [INFO][4467] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.074 [INFO][4467] ipam/ipam.go 511: Trying affinity for 192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.087 [INFO][4467] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.106 [INFO][4467] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.106 [INFO][4467] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.67.128/26 handle="k8s-pod-network.2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.111 [INFO][4467] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18 Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.131 [INFO][4467] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.67.128/26 handle="k8s-pod-network.2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.160 [INFO][4467] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.67.134/26] block=192.168.67.128/26 handle="k8s-pod-network.2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.160 [INFO][4467] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.134/26] handle="k8s-pod-network.2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.160 [INFO][4467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:07:15.277126 containerd[1528]: 2025-07-07 06:07:15.160 [INFO][4467] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.134/26] IPv6=[] ContainerID="2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" HandleID="k8s-pod-network.2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" Workload="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0" Jul 7 06:07:15.278751 containerd[1528]: 2025-07-07 06:07:15.198 [INFO][4403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" Namespace="kube-system" Pod="coredns-674b8bbfcf-mb6xr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7737afa6-67e6-4fbb-94b1-86b8a7e9913f", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"", Pod:"coredns-674b8bbfcf-mb6xr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29837cef521", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:15.278751 containerd[1528]: 2025-07-07 06:07:15.199 [INFO][4403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.134/32] ContainerID="2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" Namespace="kube-system" Pod="coredns-674b8bbfcf-mb6xr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0" Jul 7 06:07:15.278751 containerd[1528]: 2025-07-07 06:07:15.199 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29837cef521 ContainerID="2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" Namespace="kube-system" Pod="coredns-674b8bbfcf-mb6xr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0" Jul 7 06:07:15.278751 containerd[1528]: 2025-07-07 06:07:15.224 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" Namespace="kube-system" Pod="coredns-674b8bbfcf-mb6xr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0" Jul 7 06:07:15.278751 containerd[1528]: 2025-07-07 06:07:15.229 [INFO][4403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" Namespace="kube-system" Pod="coredns-674b8bbfcf-mb6xr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7737afa6-67e6-4fbb-94b1-86b8a7e9913f", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18", Pod:"coredns-674b8bbfcf-mb6xr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29837cef521", MAC:"0a:ff:91:11:61:fc", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:15.278751 containerd[1528]: 2025-07-07 06:07:15.262 [INFO][4403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" Namespace="kube-system" Pod="coredns-674b8bbfcf-mb6xr" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-coredns--674b8bbfcf--mb6xr-eth0" Jul 7 06:07:15.308772 containerd[1528]: time="2025-07-07T06:07:15.308438652Z" level=info msg="StartContainer for \"968f7b2ade06fe1347b3ba86a9c49dfbaa14248653555c27e126161a6c9dd199\" returns successfully" Jul 7 06:07:15.345573 containerd[1528]: time="2025-07-07T06:07:15.345524201Z" level=info msg="connecting to shim 2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18" address="unix:///run/containerd/s/ace8cc60420fab787b25861969b44a50f54a7b2274a9d46345a36d8a453680dc" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:07:15.407238 systemd[1]: Started cri-containerd-2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18.scope - libcontainer container 2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18. 
Jul 7 06:07:15.408743 systemd-networkd[1441]: cali998afa71455: Link UP Jul 7 06:07:15.417481 systemd-networkd[1441]: cali998afa71455: Gained carrier Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:14.815 [INFO][4416] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0 csi-node-driver- calico-system b87fa73e-26f3-47b2-81b4-1f7282c22904 724 0 2025-07-07 06:06:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.1-6-9e8df2071f csi-node-driver-r7p5z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali998afa71455 [] [] }} ContainerID="b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" Namespace="calico-system" Pod="csi-node-driver-r7p5z" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-" Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:14.816 [INFO][4416] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" Namespace="calico-system" Pod="csi-node-driver-r7p5z" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0" Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.063 [INFO][4453] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" HandleID="k8s-pod-network.b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" Workload="ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0" Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.063 [INFO][4453] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" HandleID="k8s-pod-network.b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" Workload="ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-6-9e8df2071f", "pod":"csi-node-driver-r7p5z", "timestamp":"2025-07-07 06:07:15.063120522 +0000 UTC"}, Hostname:"ci-4372.0.1-6-9e8df2071f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.063 [INFO][4453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.160 [INFO][4453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.160 [INFO][4453] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-6-9e8df2071f' Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.189 [INFO][4453] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.234 [INFO][4453] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.283 [INFO][4453] ipam/ipam.go 511: Trying affinity for 192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.289 [INFO][4453] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.299 [INFO][4453] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.299 [INFO][4453] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.67.128/26 handle="k8s-pod-network.b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.302 [INFO][4453] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241 Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.321 [INFO][4453] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.67.128/26 handle="k8s-pod-network.b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.331 [INFO][4453] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.67.135/26] block=192.168.67.128/26 handle="k8s-pod-network.b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.331 [INFO][4453] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.135/26] handle="k8s-pod-network.b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.331 [INFO][4453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:07:15.470440 containerd[1528]: 2025-07-07 06:07:15.331 [INFO][4453] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.135/26] IPv6=[] ContainerID="b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" HandleID="k8s-pod-network.b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" Workload="ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0" Jul 7 06:07:15.472420 containerd[1528]: 2025-07-07 06:07:15.354 [INFO][4416] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" Namespace="calico-system" Pod="csi-node-driver-r7p5z" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b87fa73e-26f3-47b2-81b4-1f7282c22904", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"", Pod:"csi-node-driver-r7p5z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali998afa71455", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:15.472420 containerd[1528]: 2025-07-07 06:07:15.357 [INFO][4416] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.135/32] ContainerID="b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" Namespace="calico-system" Pod="csi-node-driver-r7p5z" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0" Jul 7 06:07:15.472420 containerd[1528]: 2025-07-07 06:07:15.357 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali998afa71455 ContainerID="b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" Namespace="calico-system" Pod="csi-node-driver-r7p5z" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0" Jul 7 06:07:15.472420 containerd[1528]: 2025-07-07 06:07:15.435 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" Namespace="calico-system" Pod="csi-node-driver-r7p5z" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0" Jul 7 06:07:15.472420 containerd[1528]: 2025-07-07 06:07:15.439 
[INFO][4416] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" Namespace="calico-system" Pod="csi-node-driver-r7p5z" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b87fa73e-26f3-47b2-81b4-1f7282c22904", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241", Pod:"csi-node-driver-r7p5z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali998afa71455", MAC:"0a:48:0e:03:f3:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:15.472420 containerd[1528]: 2025-07-07 06:07:15.464 [INFO][4416] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" Namespace="calico-system" Pod="csi-node-driver-r7p5z" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-csi--node--driver--r7p5z-eth0" Jul 7 06:07:15.487188 systemd-networkd[1441]: calif083495573e: Link UP Jul 7 06:07:15.494163 systemd-networkd[1441]: calif083495573e: Gained carrier Jul 7 06:07:15.588357 containerd[1528]: time="2025-07-07T06:07:15.588203049Z" level=info msg="connecting to shim b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241" address="unix:///run/containerd/s/f5b935dd86f4fb4c982f169b7d4f7b7b4009f00def3114033aa91c25287d202b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:14.962 [INFO][4419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0 calico-kube-controllers-cd4495d89- calico-system 7780537f-a70e-4007-81ea-55353b907d7d 835 0 2025-07-07 06:06:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cd4495d89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.1-6-9e8df2071f calico-kube-controllers-cd4495d89-c7kgs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif083495573e [] [] }} ContainerID="111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" Namespace="calico-system" Pod="calico-kube-controllers-cd4495d89-c7kgs" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-" Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:14.962 [INFO][4419] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" Namespace="calico-system" Pod="calico-kube-controllers-cd4495d89-c7kgs" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0" Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.149 [INFO][4474] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" HandleID="k8s-pod-network.111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" Workload="ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0" Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.150 [INFO][4474] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" HandleID="k8s-pod-network.111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" Workload="ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bcfc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-6-9e8df2071f", "pod":"calico-kube-controllers-cd4495d89-c7kgs", "timestamp":"2025-07-07 06:07:15.146770512 +0000 UTC"}, Hostname:"ci-4372.0.1-6-9e8df2071f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.150 [INFO][4474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.332 [INFO][4474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.333 [INFO][4474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-6-9e8df2071f' Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.355 [INFO][4474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.374 [INFO][4474] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.394 [INFO][4474] ipam/ipam.go 511: Trying affinity for 192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.402 [INFO][4474] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.412 [INFO][4474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.128/26 host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.412 [INFO][4474] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.67.128/26 handle="k8s-pod-network.111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.422 [INFO][4474] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6 Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.444 [INFO][4474] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.67.128/26 handle="k8s-pod-network.111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.466 [INFO][4474] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.67.136/26] block=192.168.67.128/26 handle="k8s-pod-network.111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.466 [INFO][4474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.136/26] handle="k8s-pod-network.111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" host="ci-4372.0.1-6-9e8df2071f" Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.466 [INFO][4474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:07:15.589095 containerd[1528]: 2025-07-07 06:07:15.466 [INFO][4474] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.136/26] IPv6=[] ContainerID="111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" HandleID="k8s-pod-network.111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" Workload="ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0" Jul 7 06:07:15.590421 containerd[1528]: 2025-07-07 06:07:15.475 [INFO][4419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" Namespace="calico-system" Pod="calico-kube-controllers-cd4495d89-c7kgs" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0", GenerateName:"calico-kube-controllers-cd4495d89-", Namespace:"calico-system", SelfLink:"", UID:"7780537f-a70e-4007-81ea-55353b907d7d", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"cd4495d89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"", Pod:"calico-kube-controllers-cd4495d89-c7kgs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif083495573e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:15.590421 containerd[1528]: 2025-07-07 06:07:15.475 [INFO][4419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.136/32] ContainerID="111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" Namespace="calico-system" Pod="calico-kube-controllers-cd4495d89-c7kgs" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0" Jul 7 06:07:15.590421 containerd[1528]: 2025-07-07 06:07:15.475 [INFO][4419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif083495573e ContainerID="111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" Namespace="calico-system" Pod="calico-kube-controllers-cd4495d89-c7kgs" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0" Jul 7 06:07:15.590421 containerd[1528]: 2025-07-07 06:07:15.498 [INFO][4419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" Namespace="calico-system" 
Pod="calico-kube-controllers-cd4495d89-c7kgs" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0" Jul 7 06:07:15.590421 containerd[1528]: 2025-07-07 06:07:15.505 [INFO][4419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" Namespace="calico-system" Pod="calico-kube-controllers-cd4495d89-c7kgs" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0", GenerateName:"calico-kube-controllers-cd4495d89-", Namespace:"calico-system", SelfLink:"", UID:"7780537f-a70e-4007-81ea-55353b907d7d", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd4495d89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-6-9e8df2071f", ContainerID:"111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6", Pod:"calico-kube-controllers-cd4495d89-c7kgs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif083495573e", MAC:"be:11:40:56:0c:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:07:15.590421 containerd[1528]: 2025-07-07 06:07:15.520 [INFO][4419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" Namespace="calico-system" Pod="calico-kube-controllers-cd4495d89-c7kgs" WorkloadEndpoint="ci--4372.0.1--6--9e8df2071f-k8s-calico--kube--controllers--cd4495d89--c7kgs-eth0" Jul 7 06:07:15.702813 containerd[1528]: time="2025-07-07T06:07:15.702420339Z" level=info msg="connecting to shim 111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6" address="unix:///run/containerd/s/6c17e53baa4c99ddd78178e2e0143037a18ba5310d4bc1cf24800c6fe559e32b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:07:15.703485 containerd[1528]: time="2025-07-07T06:07:15.703454860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mb6xr,Uid:7737afa6-67e6-4fbb-94b1-86b8a7e9913f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18\"" Jul 7 06:07:15.714805 kubelet[2689]: E0707 06:07:15.713686 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:07:15.740095 systemd[1]: Started cri-containerd-b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241.scope - libcontainer container b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241. 
Jul 7 06:07:15.746392 containerd[1528]: time="2025-07-07T06:07:15.745915750Z" level=info msg="CreateContainer within sandbox \"2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:07:15.768821 containerd[1528]: time="2025-07-07T06:07:15.766978494Z" level=info msg="Container 7baabb2d679bb12fd1430c84b5a85c227b6396fae672d357ca10c485863198f3: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:07:15.771440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount848917904.mount: Deactivated successfully. Jul 7 06:07:15.786101 systemd[1]: Started cri-containerd-111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6.scope - libcontainer container 111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6. Jul 7 06:07:15.791165 containerd[1528]: time="2025-07-07T06:07:15.790970379Z" level=info msg="CreateContainer within sandbox \"2c279394210d3ea3c161114e659d5fc6f9cb8c8be3911db08c0a619db88a9c18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7baabb2d679bb12fd1430c84b5a85c227b6396fae672d357ca10c485863198f3\"" Jul 7 06:07:15.792610 containerd[1528]: time="2025-07-07T06:07:15.792566111Z" level=info msg="StartContainer for \"7baabb2d679bb12fd1430c84b5a85c227b6396fae672d357ca10c485863198f3\"" Jul 7 06:07:15.795691 containerd[1528]: time="2025-07-07T06:07:15.795546011Z" level=info msg="connecting to shim 7baabb2d679bb12fd1430c84b5a85c227b6396fae672d357ca10c485863198f3" address="unix:///run/containerd/s/ace8cc60420fab787b25861969b44a50f54a7b2274a9d46345a36d8a453680dc" protocol=ttrpc version=3 Jul 7 06:07:15.877087 systemd[1]: Started cri-containerd-7baabb2d679bb12fd1430c84b5a85c227b6396fae672d357ca10c485863198f3.scope - libcontainer container 7baabb2d679bb12fd1430c84b5a85c227b6396fae672d357ca10c485863198f3. 
Jul 7 06:07:15.915687 containerd[1528]: time="2025-07-07T06:07:15.915531579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r7p5z,Uid:b87fa73e-26f3-47b2-81b4-1f7282c22904,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241\"" Jul 7 06:07:15.973180 containerd[1528]: time="2025-07-07T06:07:15.973116100Z" level=info msg="StartContainer for \"7baabb2d679bb12fd1430c84b5a85c227b6396fae672d357ca10c485863198f3\" returns successfully" Jul 7 06:07:16.028185 systemd-networkd[1441]: calicc3bc01b52f: Gained IPv6LL Jul 7 06:07:16.030113 kubelet[2689]: E0707 06:07:16.029673 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:07:16.032197 kubelet[2689]: E0707 06:07:16.032163 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:07:16.045930 kubelet[2689]: I0707 06:07:16.045861 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-966b5bb58-5j6mc" podStartSLOduration=3.379113066 podStartE2EDuration="8.045844942s" podCreationTimestamp="2025-07-07 06:07:08 +0000 UTC" firstStartedPulling="2025-07-07 06:07:09.88631165 +0000 UTC m=+43.409393402" lastFinishedPulling="2025-07-07 06:07:14.553043542 +0000 UTC m=+48.076125278" observedRunningTime="2025-07-07 06:07:16.04513397 +0000 UTC m=+49.568215727" watchObservedRunningTime="2025-07-07 06:07:16.045844942 +0000 UTC m=+49.568926700" Jul 7 06:07:16.181856 containerd[1528]: time="2025-07-07T06:07:16.181681332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd4495d89-c7kgs,Uid:7780537f-a70e-4007-81ea-55353b907d7d,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6\"" Jul 7 06:07:16.289968 containerd[1528]: time="2025-07-07T06:07:16.289864859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5084cb89c54896e54ba7767a90dab4c98b6794561d892da3f2deb22306e6d137\" id:\"30e0a71f629af2a607c5874c456184570577ce1d62607f8bde6c8240734156a5\" pid:4517 exited_at:{seconds:1751868436 nanos:289094419}" Jul 7 06:07:16.347528 kubelet[2689]: I0707 06:07:16.347036 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mb6xr" podStartSLOduration=45.346982162 podStartE2EDuration="45.346982162s" podCreationTimestamp="2025-07-07 06:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:07:16.095285457 +0000 UTC m=+49.618367216" watchObservedRunningTime="2025-07-07 06:07:16.346982162 +0000 UTC m=+49.870063920" Jul 7 06:07:16.605922 systemd-networkd[1441]: cali29837cef521: Gained IPv6LL Jul 7 06:07:16.746098 containerd[1528]: time="2025-07-07T06:07:16.745966433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5084cb89c54896e54ba7767a90dab4c98b6794561d892da3f2deb22306e6d137\" id:\"f7d7f58a1db9d27045414a3a74b833ce2e85b9b07e9c87214656b7cc8927ec6c\" pid:4748 exited_at:{seconds:1751868436 nanos:742105870}" Jul 7 06:07:17.054758 systemd-networkd[1441]: cali998afa71455: Gained IPv6LL Jul 7 06:07:17.070213 kubelet[2689]: E0707 06:07:17.069160 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:07:17.436431 systemd-networkd[1441]: calif083495573e: Gained IPv6LL Jul 7 06:07:18.070862 kubelet[2689]: E0707 06:07:18.070823 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:07:18.612955 containerd[1528]: time="2025-07-07T06:07:18.612869878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:18.614417 containerd[1528]: time="2025-07-07T06:07:18.614213628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 06:07:18.615522 containerd[1528]: time="2025-07-07T06:07:18.615414686Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:18.618066 containerd[1528]: time="2025-07-07T06:07:18.617993009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:18.618643 containerd[1528]: time="2025-07-07T06:07:18.618509948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.065102556s" Jul 7 06:07:18.618643 containerd[1528]: time="2025-07-07T06:07:18.618547452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:07:18.621842 containerd[1528]: time="2025-07-07T06:07:18.620557093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:07:18.623080 containerd[1528]: time="2025-07-07T06:07:18.623031634Z" level=info msg="CreateContainer within sandbox 
\"6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:07:18.638571 containerd[1528]: time="2025-07-07T06:07:18.638502713Z" level=info msg="Container 627c0e8a2ac755cebf5fea9831875c64d25a0d0d5f38b58233282281e154376e: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:07:18.649365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3026468705.mount: Deactivated successfully. Jul 7 06:07:18.666158 containerd[1528]: time="2025-07-07T06:07:18.666042796Z" level=info msg="CreateContainer within sandbox \"6529175b1260cfe99b0f3084b32509246250d4957a47b4ab37f2032529bcaf61\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"627c0e8a2ac755cebf5fea9831875c64d25a0d0d5f38b58233282281e154376e\"" Jul 7 06:07:18.671358 containerd[1528]: time="2025-07-07T06:07:18.670136449Z" level=info msg="StartContainer for \"627c0e8a2ac755cebf5fea9831875c64d25a0d0d5f38b58233282281e154376e\"" Jul 7 06:07:18.672196 containerd[1528]: time="2025-07-07T06:07:18.672153838Z" level=info msg="connecting to shim 627c0e8a2ac755cebf5fea9831875c64d25a0d0d5f38b58233282281e154376e" address="unix:///run/containerd/s/e4b3a3843002b643dec2f37d92f2c67f2afb1d782f2dbba2661e8aeeda836690" protocol=ttrpc version=3 Jul 7 06:07:18.714109 systemd[1]: Started cri-containerd-627c0e8a2ac755cebf5fea9831875c64d25a0d0d5f38b58233282281e154376e.scope - libcontainer container 627c0e8a2ac755cebf5fea9831875c64d25a0d0d5f38b58233282281e154376e. 
Jul 7 06:07:18.789117 containerd[1528]: time="2025-07-07T06:07:18.789077846Z" level=info msg="StartContainer for \"627c0e8a2ac755cebf5fea9831875c64d25a0d0d5f38b58233282281e154376e\" returns successfully" Jul 7 06:07:19.085442 kubelet[2689]: E0707 06:07:19.085298 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 7 06:07:19.105594 kubelet[2689]: I0707 06:07:19.105210 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cf678cc66-klsvh" podStartSLOduration=27.439392537 podStartE2EDuration="34.105188223s" podCreationTimestamp="2025-07-07 06:06:45 +0000 UTC" firstStartedPulling="2025-07-07 06:07:11.954197985 +0000 UTC m=+45.477279734" lastFinishedPulling="2025-07-07 06:07:18.619993667 +0000 UTC m=+52.143075420" observedRunningTime="2025-07-07 06:07:19.104709538 +0000 UTC m=+52.627791296" watchObservedRunningTime="2025-07-07 06:07:19.105188223 +0000 UTC m=+52.628270009" Jul 7 06:07:20.127735 kubelet[2689]: I0707 06:07:20.109190 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:07:21.796446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634783796.mount: Deactivated successfully. 
Jul 7 06:07:23.025816 containerd[1528]: time="2025-07-07T06:07:23.025043976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:23.026695 containerd[1528]: time="2025-07-07T06:07:23.026657889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 06:07:23.027985 containerd[1528]: time="2025-07-07T06:07:23.027945757Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:23.031286 containerd[1528]: time="2025-07-07T06:07:23.031226787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:23.031888 containerd[1528]: time="2025-07-07T06:07:23.031850511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.411253864s" Jul 7 06:07:23.031985 containerd[1528]: time="2025-07-07T06:07:23.031898139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 06:07:23.042959 containerd[1528]: time="2025-07-07T06:07:23.041852795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:07:23.052121 containerd[1528]: time="2025-07-07T06:07:23.052051984Z" level=info msg="CreateContainer within sandbox \"fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:07:23.064822 containerd[1528]: time="2025-07-07T06:07:23.060444546Z" level=info msg="Container 7fa3d93c8b19cdcca1067060eaeb8e962abe3b05df760461cd004ae22e456313: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:07:23.072086 containerd[1528]: time="2025-07-07T06:07:23.072030883Z" level=info msg="CreateContainer within sandbox \"fab9efcf9e3307ed7f5abf4e88973f77bda91301c696e34021154ef8e6a57735\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7fa3d93c8b19cdcca1067060eaeb8e962abe3b05df760461cd004ae22e456313\"" Jul 7 06:07:23.073045 containerd[1528]: time="2025-07-07T06:07:23.073018767Z" level=info msg="StartContainer for \"7fa3d93c8b19cdcca1067060eaeb8e962abe3b05df760461cd004ae22e456313\"" Jul 7 06:07:23.077763 containerd[1528]: time="2025-07-07T06:07:23.077676771Z" level=info msg="connecting to shim 7fa3d93c8b19cdcca1067060eaeb8e962abe3b05df760461cd004ae22e456313" address="unix:///run/containerd/s/69300c7b72d9b953f209c4897a634438f92907ebc415033477b376ad6b918188" protocol=ttrpc version=3 Jul 7 06:07:23.176013 systemd[1]: Started cri-containerd-7fa3d93c8b19cdcca1067060eaeb8e962abe3b05df760461cd004ae22e456313.scope - libcontainer container 7fa3d93c8b19cdcca1067060eaeb8e962abe3b05df760461cd004ae22e456313. 
Jul 7 06:07:23.445437 containerd[1528]: time="2025-07-07T06:07:23.445324470Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:23.447989 containerd[1528]: time="2025-07-07T06:07:23.447177765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 06:07:23.451840 containerd[1528]: time="2025-07-07T06:07:23.451014357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 409.109295ms" Jul 7 06:07:23.452073 containerd[1528]: time="2025-07-07T06:07:23.452042201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:07:23.453984 containerd[1528]: time="2025-07-07T06:07:23.453864777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 06:07:23.457810 containerd[1528]: time="2025-07-07T06:07:23.457560973Z" level=info msg="CreateContainer within sandbox \"c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:07:23.467859 containerd[1528]: time="2025-07-07T06:07:23.466298331Z" level=info msg="Container 8e2aae1280e427b8ee44922a5af4018352d28155cc15316dbd9015905b35ac37: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:07:23.481938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119114800.mount: Deactivated successfully. 
Jul 7 06:07:23.494700 containerd[1528]: time="2025-07-07T06:07:23.494645837Z" level=info msg="CreateContainer within sandbox \"c7ffd5220d61867b1a2cdd8959bec8017bcfa4c40b3b260a3838487f31161ae3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8e2aae1280e427b8ee44922a5af4018352d28155cc15316dbd9015905b35ac37\"" Jul 7 06:07:23.501578 containerd[1528]: time="2025-07-07T06:07:23.501526829Z" level=info msg="StartContainer for \"8e2aae1280e427b8ee44922a5af4018352d28155cc15316dbd9015905b35ac37\"" Jul 7 06:07:23.511710 containerd[1528]: time="2025-07-07T06:07:23.509687911Z" level=info msg="connecting to shim 8e2aae1280e427b8ee44922a5af4018352d28155cc15316dbd9015905b35ac37" address="unix:///run/containerd/s/40750aec4e09597df8cd5236fa81d8acece73f322369de492be7fae5245d8195" protocol=ttrpc version=3 Jul 7 06:07:23.582163 systemd[1]: Started cri-containerd-8e2aae1280e427b8ee44922a5af4018352d28155cc15316dbd9015905b35ac37.scope - libcontainer container 8e2aae1280e427b8ee44922a5af4018352d28155cc15316dbd9015905b35ac37. 
Jul 7 06:07:23.586642 kubelet[2689]: I0707 06:07:23.586427 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:07:23.722839 containerd[1528]: time="2025-07-07T06:07:23.720090159Z" level=info msg="StartContainer for \"7fa3d93c8b19cdcca1067060eaeb8e962abe3b05df760461cd004ae22e456313\" returns successfully" Jul 7 06:07:23.925002 containerd[1528]: time="2025-07-07T06:07:23.924947757Z" level=info msg="StartContainer for \"8e2aae1280e427b8ee44922a5af4018352d28155cc15316dbd9015905b35ac37\" returns successfully" Jul 7 06:07:24.209152 kubelet[2689]: I0707 06:07:24.209034 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cf678cc66-7gbjr" podStartSLOduration=29.997191789 podStartE2EDuration="39.208403641s" podCreationTimestamp="2025-07-07 06:06:45 +0000 UTC" firstStartedPulling="2025-07-07 06:07:14.242323024 +0000 UTC m=+47.765404776" lastFinishedPulling="2025-07-07 06:07:23.453534874 +0000 UTC m=+56.976616628" observedRunningTime="2025-07-07 06:07:24.167381545 +0000 UTC m=+57.690463314" watchObservedRunningTime="2025-07-07 06:07:24.208403641 +0000 UTC m=+57.731485407" Jul 7 06:07:24.328155 systemd[1]: Started sshd@7-24.199.107.192:22-139.178.68.195:36536.service - OpenSSH per-connection server daemon (139.178.68.195:36536). Jul 7 06:07:24.588555 sshd[4898]: Accepted publickey for core from 139.178.68.195 port 36536 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc Jul 7 06:07:24.594553 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:07:24.605859 systemd-logind[1509]: New session 8 of user core. Jul 7 06:07:24.611009 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 7 06:07:24.792386 kubelet[2689]: I0707 06:07:24.792226 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-fzmgq" podStartSLOduration=27.03430247 podStartE2EDuration="36.792203891s" podCreationTimestamp="2025-07-07 06:06:48 +0000 UTC" firstStartedPulling="2025-07-07 06:07:13.283384797 +0000 UTC m=+46.806466535" lastFinishedPulling="2025-07-07 06:07:23.041286215 +0000 UTC m=+56.564367956" observedRunningTime="2025-07-07 06:07:24.212691201 +0000 UTC m=+57.735772958" watchObservedRunningTime="2025-07-07 06:07:24.792203891 +0000 UTC m=+58.315285648" Jul 7 06:07:25.156033 kubelet[2689]: I0707 06:07:25.154602 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:07:25.699932 sshd[4922]: Connection closed by 139.178.68.195 port 36536 Jul 7 06:07:25.700457 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Jul 7 06:07:25.716699 systemd[1]: sshd@7-24.199.107.192:22-139.178.68.195:36536.service: Deactivated successfully. Jul 7 06:07:25.725706 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:07:25.740838 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:07:25.746089 systemd-logind[1509]: Removed session 8. 
Jul 7 06:07:25.817811 containerd[1528]: time="2025-07-07T06:07:25.816651859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:25.818838 containerd[1528]: time="2025-07-07T06:07:25.818741045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 06:07:25.820807 containerd[1528]: time="2025-07-07T06:07:25.820626681Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:25.826807 containerd[1528]: time="2025-07-07T06:07:25.825906532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:07:25.829885 containerd[1528]: time="2025-07-07T06:07:25.829835270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.37560913s" Jul 7 06:07:25.830091 containerd[1528]: time="2025-07-07T06:07:25.830067986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 06:07:25.833393 containerd[1528]: time="2025-07-07T06:07:25.831440908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 06:07:25.836284 containerd[1528]: time="2025-07-07T06:07:25.836218475Z" level=info msg="CreateContainer within sandbox \"b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 06:07:25.873003 containerd[1528]: time="2025-07-07T06:07:25.869941974Z" level=info msg="Container dc9e63ac1e8c637801f86c90193ab08924c48d190d2c9906b87ec6ff79fca6f8: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:07:25.885265 containerd[1528]: time="2025-07-07T06:07:25.883486729Z" level=info msg="CreateContainer within sandbox \"b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dc9e63ac1e8c637801f86c90193ab08924c48d190d2c9906b87ec6ff79fca6f8\"" Jul 7 06:07:25.886846 containerd[1528]: time="2025-07-07T06:07:25.885479782Z" level=info msg="StartContainer for \"dc9e63ac1e8c637801f86c90193ab08924c48d190d2c9906b87ec6ff79fca6f8\"" Jul 7 06:07:25.891005 containerd[1528]: time="2025-07-07T06:07:25.890952521Z" level=info msg="connecting to shim dc9e63ac1e8c637801f86c90193ab08924c48d190d2c9906b87ec6ff79fca6f8" address="unix:///run/containerd/s/f5b935dd86f4fb4c982f169b7d4f7b7b4009f00def3114033aa91c25287d202b" protocol=ttrpc version=3 Jul 7 06:07:25.946965 systemd[1]: Started cri-containerd-dc9e63ac1e8c637801f86c90193ab08924c48d190d2c9906b87ec6ff79fca6f8.scope - libcontainer container dc9e63ac1e8c637801f86c90193ab08924c48d190d2c9906b87ec6ff79fca6f8. 
Jul 7 06:07:26.117142 containerd[1528]: time="2025-07-07T06:07:26.117065649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fa3d93c8b19cdcca1067060eaeb8e962abe3b05df760461cd004ae22e456313\" id:\"1898aebe51fc495f06a64148d23c5061216c8a37f69f1cf206cf622ac1a4c695\" pid:4913 exit_status:1 exited_at:{seconds:1751868446 nanos:103026166}"
Jul 7 06:07:26.316439 containerd[1528]: time="2025-07-07T06:07:26.316321089Z" level=info msg="StartContainer for \"dc9e63ac1e8c637801f86c90193ab08924c48d190d2c9906b87ec6ff79fca6f8\" returns successfully"
Jul 7 06:07:26.608478 containerd[1528]: time="2025-07-07T06:07:26.607341076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fa3d93c8b19cdcca1067060eaeb8e962abe3b05df760461cd004ae22e456313\" id:\"516d5cb1b61b052fe53029e8b66e7f68ac5ccc3d6ea18a86cb6d6f13eedce747\" pid:4985 exit_status:1 exited_at:{seconds:1751868446 nanos:605422292}"
Jul 7 06:07:30.463314 containerd[1528]: time="2025-07-07T06:07:30.463067865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:07:30.469149 containerd[1528]: time="2025-07-07T06:07:30.469108735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 7 06:07:30.474717 containerd[1528]: time="2025-07-07T06:07:30.474601719Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:07:30.487324 containerd[1528]: time="2025-07-07T06:07:30.487246100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.654115577s"
Jul 7 06:07:30.488900 containerd[1528]: time="2025-07-07T06:07:30.487300558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 7 06:07:30.489286 containerd[1528]: time="2025-07-07T06:07:30.489219134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:07:30.547318 containerd[1528]: time="2025-07-07T06:07:30.547241163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 7 06:07:30.656700 containerd[1528]: time="2025-07-07T06:07:30.656616003Z" level=info msg="CreateContainer within sandbox \"111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 7 06:07:30.675368 containerd[1528]: time="2025-07-07T06:07:30.675310365Z" level=info msg="Container 8f8c31ab7647bf0a72f5ed0733d17584acecd8d1c99decffdae8a3b5a4e8dc1a: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:07:30.700084 containerd[1528]: time="2025-07-07T06:07:30.699915305Z" level=info msg="CreateContainer within sandbox \"111a91b96ecbd5f3e8e1e0977aab7c27031ea9e541ea5e4f8f2a93ab519a3df6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8f8c31ab7647bf0a72f5ed0733d17584acecd8d1c99decffdae8a3b5a4e8dc1a\""
Jul 7 06:07:30.701065 containerd[1528]: time="2025-07-07T06:07:30.701022252Z" level=info msg="StartContainer for \"8f8c31ab7647bf0a72f5ed0733d17584acecd8d1c99decffdae8a3b5a4e8dc1a\""
Jul 7 06:07:30.705647 containerd[1528]: time="2025-07-07T06:07:30.705598757Z" level=info msg="connecting to shim 8f8c31ab7647bf0a72f5ed0733d17584acecd8d1c99decffdae8a3b5a4e8dc1a" address="unix:///run/containerd/s/6c17e53baa4c99ddd78178e2e0143037a18ba5310d4bc1cf24800c6fe559e32b" protocol=ttrpc version=3
Jul 7 06:07:30.739140 systemd[1]: Started sshd@8-24.199.107.192:22-139.178.68.195:34642.service - OpenSSH per-connection server daemon (139.178.68.195:34642).
Jul 7 06:07:30.794232 systemd[1]: Started cri-containerd-8f8c31ab7647bf0a72f5ed0733d17584acecd8d1c99decffdae8a3b5a4e8dc1a.scope - libcontainer container 8f8c31ab7647bf0a72f5ed0733d17584acecd8d1c99decffdae8a3b5a4e8dc1a.
Jul 7 06:07:30.911673 sshd[5028]: Accepted publickey for core from 139.178.68.195 port 34642 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:07:30.916079 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:07:30.923214 systemd-logind[1509]: New session 9 of user core.
Jul 7 06:07:30.929399 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 06:07:31.849374 containerd[1528]: time="2025-07-07T06:07:31.849307305Z" level=info msg="StartContainer for \"8f8c31ab7647bf0a72f5ed0733d17584acecd8d1c99decffdae8a3b5a4e8dc1a\" returns successfully"
Jul 7 06:07:31.897107 sshd[5043]: Connection closed by 139.178.68.195 port 34642
Jul 7 06:07:31.898024 sshd-session[5028]: pam_unix(sshd:session): session closed for user core
Jul 7 06:07:31.904959 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit.
Jul 7 06:07:31.905749 systemd[1]: sshd@8-24.199.107.192:22-139.178.68.195:34642.service: Deactivated successfully.
Jul 7 06:07:31.911844 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 06:07:31.918632 systemd-logind[1509]: Removed session 9.
Jul 7 06:07:33.300736 containerd[1528]: time="2025-07-07T06:07:33.300677646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:07:33.305817 containerd[1528]: time="2025-07-07T06:07:33.304889167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 7 06:07:33.305817 containerd[1528]: time="2025-07-07T06:07:33.305504087Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:07:33.313122 containerd[1528]: time="2025-07-07T06:07:33.312250247Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.76496538s"
Jul 7 06:07:33.313122 containerd[1528]: time="2025-07-07T06:07:33.312301354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 7 06:07:33.313122 containerd[1528]: time="2025-07-07T06:07:33.312901090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:07:33.333583 containerd[1528]: time="2025-07-07T06:07:33.332351878Z" level=info msg="CreateContainer within sandbox \"b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 7 06:07:33.372100 containerd[1528]: time="2025-07-07T06:07:33.372044114Z" level=info msg="Container 7d146b01abc9a38cf8eaf250988b14f6cdcb79b69476c98bff5b389910e73645: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:07:33.375746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3950862584.mount: Deactivated successfully.
Jul 7 06:07:33.411448 containerd[1528]: time="2025-07-07T06:07:33.411405282Z" level=info msg="CreateContainer within sandbox \"b4712aecc42ee009a5395b85e66a2761a43a5674bc6240d1c1cf165995f0e241\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7d146b01abc9a38cf8eaf250988b14f6cdcb79b69476c98bff5b389910e73645\""
Jul 7 06:07:33.424347 containerd[1528]: time="2025-07-07T06:07:33.424294185Z" level=info msg="StartContainer for \"7d146b01abc9a38cf8eaf250988b14f6cdcb79b69476c98bff5b389910e73645\""
Jul 7 06:07:33.426844 containerd[1528]: time="2025-07-07T06:07:33.426705794Z" level=info msg="connecting to shim 7d146b01abc9a38cf8eaf250988b14f6cdcb79b69476c98bff5b389910e73645" address="unix:///run/containerd/s/f5b935dd86f4fb4c982f169b7d4f7b7b4009f00def3114033aa91c25287d202b" protocol=ttrpc version=3
Jul 7 06:07:33.554890 systemd[1]: Started cri-containerd-7d146b01abc9a38cf8eaf250988b14f6cdcb79b69476c98bff5b389910e73645.scope - libcontainer container 7d146b01abc9a38cf8eaf250988b14f6cdcb79b69476c98bff5b389910e73645.
Jul 7 06:07:33.846892 containerd[1528]: time="2025-07-07T06:07:33.846845194Z" level=info msg="StartContainer for \"7d146b01abc9a38cf8eaf250988b14f6cdcb79b69476c98bff5b389910e73645\" returns successfully"
Jul 7 06:07:33.906222 containerd[1528]: time="2025-07-07T06:07:33.906161463Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f8c31ab7647bf0a72f5ed0733d17584acecd8d1c99decffdae8a3b5a4e8dc1a\" id:\"2b6ee669e8b36b951ee007a8c1e9625c42f1e50d8b37813384ead0155ceea890\" pid:5133 exited_at:{seconds:1751868453 nanos:859132335}"
Jul 7 06:07:33.906463 containerd[1528]: time="2025-07-07T06:07:33.906418684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fa3d93c8b19cdcca1067060eaeb8e962abe3b05df760461cd004ae22e456313\" id:\"98f5372acbba7e649f91a0c3266049c3b1f8c6bc7131892db4f0a9a158802415\" pid:5119 exited_at:{seconds:1751868453 nanos:901767681}"
Jul 7 06:07:33.966816 kubelet[2689]: I0707 06:07:33.966721 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cd4495d89-c7kgs" podStartSLOduration=30.606581395 podStartE2EDuration="44.957298376s" podCreationTimestamp="2025-07-07 06:06:49 +0000 UTC" firstStartedPulling="2025-07-07 06:07:16.18898885 +0000 UTC m=+49.712070599" lastFinishedPulling="2025-07-07 06:07:30.539705831 +0000 UTC m=+64.062787580" observedRunningTime="2025-07-07 06:07:32.606889964 +0000 UTC m=+66.129971718" watchObservedRunningTime="2025-07-07 06:07:33.957298376 +0000 UTC m=+67.480380133"
Jul 7 06:07:34.980155 kubelet[2689]: I0707 06:07:34.980079 2689 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 7 06:07:34.980764 kubelet[2689]: I0707 06:07:34.980197 2689 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 7 06:07:36.941860 systemd[1]: Started sshd@9-24.199.107.192:22-139.178.68.195:34654.service - OpenSSH per-connection server daemon (139.178.68.195:34654).
Jul 7 06:07:37.097570 sshd[5169]: Accepted publickey for core from 139.178.68.195 port 34654 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:07:37.103722 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:07:37.113089 systemd-logind[1509]: New session 10 of user core.
Jul 7 06:07:37.119062 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 06:07:38.141421 sshd[5171]: Connection closed by 139.178.68.195 port 34654
Jul 7 06:07:38.142477 sshd-session[5169]: pam_unix(sshd:session): session closed for user core
Jul 7 06:07:38.163441 systemd[1]: sshd@9-24.199.107.192:22-139.178.68.195:34654.service: Deactivated successfully.
Jul 7 06:07:38.172919 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 06:07:38.175933 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit.
Jul 7 06:07:38.189936 systemd[1]: Started sshd@10-24.199.107.192:22-139.178.68.195:36770.service - OpenSSH per-connection server daemon (139.178.68.195:36770).
Jul 7 06:07:38.194454 systemd-logind[1509]: Removed session 10.
Jul 7 06:07:38.300022 sshd[5185]: Accepted publickey for core from 139.178.68.195 port 36770 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:07:38.304038 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:07:38.317942 systemd-logind[1509]: New session 11 of user core.
Jul 7 06:07:38.323619 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 06:07:38.582684 sshd[5187]: Connection closed by 139.178.68.195 port 36770
Jul 7 06:07:38.581453 sshd-session[5185]: pam_unix(sshd:session): session closed for user core
Jul 7 06:07:38.602481 systemd[1]: sshd@10-24.199.107.192:22-139.178.68.195:36770.service: Deactivated successfully.
Jul 7 06:07:38.609570 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 06:07:38.612140 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit.
Jul 7 06:07:38.621816 systemd[1]: Started sshd@11-24.199.107.192:22-139.178.68.195:36778.service - OpenSSH per-connection server daemon (139.178.68.195:36778).
Jul 7 06:07:38.623581 systemd-logind[1509]: Removed session 11.
Jul 7 06:07:38.732818 sshd[5197]: Accepted publickey for core from 139.178.68.195 port 36778 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:07:38.735274 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:07:38.745711 systemd-logind[1509]: New session 12 of user core.
Jul 7 06:07:38.756534 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 06:07:38.926709 sshd[5199]: Connection closed by 139.178.68.195 port 36778
Jul 7 06:07:38.927216 sshd-session[5197]: pam_unix(sshd:session): session closed for user core
Jul 7 06:07:38.932726 systemd[1]: sshd@11-24.199.107.192:22-139.178.68.195:36778.service: Deactivated successfully.
Jul 7 06:07:38.935529 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 06:07:38.938858 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit.
Jul 7 06:07:38.941473 systemd-logind[1509]: Removed session 12.
Jul 7 06:07:43.945390 systemd[1]: Started sshd@12-24.199.107.192:22-139.178.68.195:36780.service - OpenSSH per-connection server daemon (139.178.68.195:36780).
Jul 7 06:07:44.042842 sshd[5217]: Accepted publickey for core from 139.178.68.195 port 36780 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:07:44.045633 sshd-session[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:07:44.054481 systemd-logind[1509]: New session 13 of user core.
Jul 7 06:07:44.059067 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 06:07:44.276997 sshd[5219]: Connection closed by 139.178.68.195 port 36780
Jul 7 06:07:44.278992 sshd-session[5217]: pam_unix(sshd:session): session closed for user core
Jul 7 06:07:44.286028 systemd[1]: sshd@12-24.199.107.192:22-139.178.68.195:36780.service: Deactivated successfully.
Jul 7 06:07:44.291838 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 06:07:44.294962 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit.
Jul 7 06:07:44.297148 systemd-logind[1509]: Removed session 13.
Jul 7 06:07:46.814719 containerd[1528]: time="2025-07-07T06:07:46.814669185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5084cb89c54896e54ba7767a90dab4c98b6794561d892da3f2deb22306e6d137\" id:\"92ce9dcf7fb5eba795c765789faa685ac912303a6e100a458e88d545cae9893f\" pid:5246 exited_at:{seconds:1751868466 nanos:814025000}"
Jul 7 06:07:49.295310 systemd[1]: Started sshd@13-24.199.107.192:22-139.178.68.195:39464.service - OpenSSH per-connection server daemon (139.178.68.195:39464).
Jul 7 06:07:49.454801 sshd[5261]: Accepted publickey for core from 139.178.68.195 port 39464 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:07:49.458132 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:07:49.468701 systemd-logind[1509]: New session 14 of user core.
Jul 7 06:07:49.477699 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 06:07:49.940416 sshd[5263]: Connection closed by 139.178.68.195 port 39464
Jul 7 06:07:49.942286 sshd-session[5261]: pam_unix(sshd:session): session closed for user core
Jul 7 06:07:49.953038 systemd[1]: sshd@13-24.199.107.192:22-139.178.68.195:39464.service: Deactivated successfully.
Jul 7 06:07:49.956235 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 06:07:49.958518 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit.
Jul 7 06:07:49.965336 systemd-logind[1509]: Removed session 14.
Jul 7 06:07:54.670844 kubelet[2689]: E0707 06:07:54.668556 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:07:54.955103 systemd[1]: Started sshd@14-24.199.107.192:22-139.178.68.195:39466.service - OpenSSH per-connection server daemon (139.178.68.195:39466).
Jul 7 06:07:55.030382 sshd[5281]: Accepted publickey for core from 139.178.68.195 port 39466 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:07:55.032252 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:07:55.041832 systemd-logind[1509]: New session 15 of user core.
Jul 7 06:07:55.045899 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 06:07:55.217893 sshd[5283]: Connection closed by 139.178.68.195 port 39466
Jul 7 06:07:55.218057 sshd-session[5281]: pam_unix(sshd:session): session closed for user core
Jul 7 06:07:55.226464 systemd[1]: sshd@14-24.199.107.192:22-139.178.68.195:39466.service: Deactivated successfully.
Jul 7 06:07:55.227014 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit.
Jul 7 06:07:55.231092 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 06:07:55.235503 systemd-logind[1509]: Removed session 15.
Jul 7 06:07:56.356806 containerd[1528]: time="2025-07-07T06:07:56.356748668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fa3d93c8b19cdcca1067060eaeb8e962abe3b05df760461cd004ae22e456313\" id:\"10e9f28d8434762b14b774cbbefe5c4a199d9c4339b7c2663db19b64b52dc5ab\" pid:5308 exited_at:{seconds:1751868476 nanos:355978632}"
Jul 7 06:07:56.443909 kubelet[2689]: I0707 06:07:56.420725 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-r7p5z" podStartSLOduration=50.024966974 podStartE2EDuration="1m7.420687106s" podCreationTimestamp="2025-07-07 06:06:49 +0000 UTC" firstStartedPulling="2025-07-07 06:07:15.921506764 +0000 UTC m=+49.444588501" lastFinishedPulling="2025-07-07 06:07:33.317226896 +0000 UTC m=+66.840308633" observedRunningTime="2025-07-07 06:07:34.617595981 +0000 UTC m=+68.140677730" watchObservedRunningTime="2025-07-07 06:07:56.420687106 +0000 UTC m=+89.943768863"
Jul 7 06:07:56.693755 kubelet[2689]: I0707 06:07:56.693614 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:07:59.636850 kubelet[2689]: E0707 06:07:59.635585 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:08:00.232343 systemd[1]: Started sshd@15-24.199.107.192:22-139.178.68.195:44332.service - OpenSSH per-connection server daemon (139.178.68.195:44332).
Jul 7 06:08:00.371199 sshd[5322]: Accepted publickey for core from 139.178.68.195 port 44332 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:08:00.377146 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:08:00.388333 systemd-logind[1509]: New session 16 of user core.
Jul 7 06:08:00.395221 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 06:08:00.679864 kubelet[2689]: E0707 06:08:00.677751 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:08:01.166371 sshd[5324]: Connection closed by 139.178.68.195 port 44332
Jul 7 06:08:01.174475 sshd-session[5322]: pam_unix(sshd:session): session closed for user core
Jul 7 06:08:01.182794 systemd[1]: Started sshd@16-24.199.107.192:22-139.178.68.195:44338.service - OpenSSH per-connection server daemon (139.178.68.195:44338).
Jul 7 06:08:01.201560 systemd[1]: sshd@15-24.199.107.192:22-139.178.68.195:44332.service: Deactivated successfully.
Jul 7 06:08:01.207140 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 06:08:01.213753 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit.
Jul 7 06:08:01.217901 systemd-logind[1509]: Removed session 16.
Jul 7 06:08:01.285272 sshd[5333]: Accepted publickey for core from 139.178.68.195 port 44338 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:08:01.289044 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:08:01.301493 systemd-logind[1509]: New session 17 of user core.
Jul 7 06:08:01.310054 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 06:08:01.731230 sshd[5338]: Connection closed by 139.178.68.195 port 44338
Jul 7 06:08:01.731942 sshd-session[5333]: pam_unix(sshd:session): session closed for user core
Jul 7 06:08:01.744690 systemd[1]: sshd@16-24.199.107.192:22-139.178.68.195:44338.service: Deactivated successfully.
Jul 7 06:08:01.748313 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 06:08:01.752710 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit.
Jul 7 06:08:01.761975 systemd[1]: Started sshd@17-24.199.107.192:22-139.178.68.195:44340.service - OpenSSH per-connection server daemon (139.178.68.195:44340).
Jul 7 06:08:01.781614 systemd-logind[1509]: Removed session 17.
Jul 7 06:08:01.879935 sshd[5348]: Accepted publickey for core from 139.178.68.195 port 44340 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:08:01.881721 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:08:01.890861 systemd-logind[1509]: New session 18 of user core.
Jul 7 06:08:01.896082 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 06:08:02.641688 kubelet[2689]: E0707 06:08:02.641522 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:08:02.675228 kubelet[2689]: E0707 06:08:02.675190 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 7 06:08:03.459245 sshd[5350]: Connection closed by 139.178.68.195 port 44340
Jul 7 06:08:03.466995 sshd-session[5348]: pam_unix(sshd:session): session closed for user core
Jul 7 06:08:03.478800 systemd[1]: sshd@17-24.199.107.192:22-139.178.68.195:44340.service: Deactivated successfully.
Jul 7 06:08:03.486145 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:08:03.492612 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:08:03.497566 systemd[1]: Started sshd@18-24.199.107.192:22-139.178.68.195:44352.service - OpenSSH per-connection server daemon (139.178.68.195:44352).
Jul 7 06:08:03.509005 systemd-logind[1509]: Removed session 18.
Jul 7 06:08:03.656326 sshd[5368]: Accepted publickey for core from 139.178.68.195 port 44352 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:08:03.659502 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:08:03.671328 systemd-logind[1509]: New session 19 of user core.
Jul 7 06:08:03.676343 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:08:03.705765 containerd[1528]: time="2025-07-07T06:08:03.705534576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f8c31ab7647bf0a72f5ed0733d17584acecd8d1c99decffdae8a3b5a4e8dc1a\" id:\"b8de5119956b379a1eba8f9af31546aef5889ddb5008840be6cbd94c5fbbf69d\" pid:5383 exited_at:{seconds:1751868483 nanos:703951431}"
Jul 7 06:08:04.733886 sshd[5392]: Connection closed by 139.178.68.195 port 44352
Jul 7 06:08:04.734648 sshd-session[5368]: pam_unix(sshd:session): session closed for user core
Jul 7 06:08:04.750472 systemd[1]: sshd@18-24.199.107.192:22-139.178.68.195:44352.service: Deactivated successfully.
Jul 7 06:08:04.758143 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:08:04.759633 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:08:04.767170 systemd[1]: Started sshd@19-24.199.107.192:22-139.178.68.195:44362.service - OpenSSH per-connection server daemon (139.178.68.195:44362).
Jul 7 06:08:04.768789 systemd-logind[1509]: Removed session 19.
Jul 7 06:08:04.845120 sshd[5405]: Accepted publickey for core from 139.178.68.195 port 44362 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:08:04.848135 sshd-session[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:08:04.857487 systemd-logind[1509]: New session 20 of user core.
Jul 7 06:08:04.864075 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:08:05.072409 sshd[5407]: Connection closed by 139.178.68.195 port 44362
Jul 7 06:08:05.073078 sshd-session[5405]: pam_unix(sshd:session): session closed for user core
Jul 7 06:08:05.081771 systemd[1]: sshd@19-24.199.107.192:22-139.178.68.195:44362.service: Deactivated successfully.
Jul 7 06:08:05.082320 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:08:05.089088 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:08:05.096172 systemd-logind[1509]: Removed session 20.
Jul 7 06:08:08.105313 containerd[1528]: time="2025-07-07T06:08:08.105214642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f8c31ab7647bf0a72f5ed0733d17584acecd8d1c99decffdae8a3b5a4e8dc1a\" id:\"017d7283c9272308231d36d8da5f82187d86eab7fed1fce750f00f69469f70cc\" pid:5429 exited_at:{seconds:1751868488 nanos:102326607}"
Jul 7 06:08:09.845234 systemd[1]: Started sshd@20-24.199.107.192:22-164.92.210.70:6101.service - OpenSSH per-connection server daemon (164.92.210.70:6101).
Jul 7 06:08:10.097541 systemd[1]: Started sshd@21-24.199.107.192:22-139.178.68.195:48898.service - OpenSSH per-connection server daemon (139.178.68.195:48898).
Jul 7 06:08:10.168821 sshd[5444]: Accepted publickey for core from 139.178.68.195 port 48898 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:08:10.170400 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:08:10.183548 systemd-logind[1509]: New session 21 of user core.
Jul 7 06:08:10.190129 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:08:10.374997 sshd[5446]: Connection closed by 139.178.68.195 port 48898
Jul 7 06:08:10.376185 sshd-session[5444]: pam_unix(sshd:session): session closed for user core
Jul 7 06:08:10.378641 sshd[5441]: kex_protocol_error: type 20 seq 2 [preauth]
Jul 7 06:08:10.378641 sshd[5441]: kex_protocol_error: type 30 seq 3 [preauth]
Jul 7 06:08:10.384476 systemd[1]: sshd@21-24.199.107.192:22-139.178.68.195:48898.service: Deactivated successfully.
Jul 7 06:08:10.387717 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:08:10.391029 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:08:10.393163 systemd-logind[1509]: Removed session 21.
Jul 7 06:08:11.803914 sshd[5441]: kex_protocol_error: type 20 seq 4 [preauth]
Jul 7 06:08:11.805055 sshd[5441]: kex_protocol_error: type 30 seq 5 [preauth]
Jul 7 06:08:13.812551 sshd[5441]: kex_protocol_error: type 20 seq 6 [preauth]
Jul 7 06:08:13.812551 sshd[5441]: kex_protocol_error: type 30 seq 7 [preauth]
Jul 7 06:08:15.389543 systemd[1]: Started sshd@22-24.199.107.192:22-139.178.68.195:48902.service - OpenSSH per-connection server daemon (139.178.68.195:48902).
Jul 7 06:08:15.485720 sshd[5458]: Accepted publickey for core from 139.178.68.195 port 48902 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:08:15.488025 sshd-session[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:08:15.497917 systemd-logind[1509]: New session 22 of user core.
Jul 7 06:08:15.502212 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:08:15.907822 sshd[5460]: Connection closed by 139.178.68.195 port 48902
Jul 7 06:08:15.908771 sshd-session[5458]: pam_unix(sshd:session): session closed for user core
Jul 7 06:08:15.921633 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:08:15.922560 systemd[1]: sshd@22-24.199.107.192:22-139.178.68.195:48902.service: Deactivated successfully.
Jul 7 06:08:15.926406 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:08:15.932606 systemd-logind[1509]: Removed session 22.
Jul 7 06:08:16.807613 containerd[1528]: time="2025-07-07T06:08:16.807554420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5084cb89c54896e54ba7767a90dab4c98b6794561d892da3f2deb22306e6d137\" id:\"b6bb2eb1408a1e58063fb2f5ad7ff1cff5c131d2a096f72bbaefc13a436bdabb\" pid:5484 exited_at:{seconds:1751868496 nanos:807129660}"
Jul 7 06:08:20.924262 systemd[1]: Started sshd@23-24.199.107.192:22-139.178.68.195:33662.service - OpenSSH per-connection server daemon (139.178.68.195:33662).
Jul 7 06:08:21.052159 sshd[5497]: Accepted publickey for core from 139.178.68.195 port 33662 ssh2: RSA SHA256://HBRUCG4z1rET3XTemWsW6XYz5/MLsCKEcM6u7ZnVc
Jul 7 06:08:21.056853 sshd-session[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:08:21.067276 systemd-logind[1509]: New session 23 of user core.
Jul 7 06:08:21.077175 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 06:08:22.087696 sshd[5499]: Connection closed by 139.178.68.195 port 33662
Jul 7 06:08:22.089223 sshd-session[5497]: pam_unix(sshd:session): session closed for user core
Jul 7 06:08:22.115473 systemd[1]: sshd@23-24.199.107.192:22-139.178.68.195:33662.service: Deactivated successfully.
Jul 7 06:08:22.123067 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 06:08:22.126031 systemd-logind[1509]: Session 23 logged out. Waiting for processes to exit.
Jul 7 06:08:22.130395 systemd-logind[1509]: Removed session 23.