Jul 10 00:18:51.011910 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025
Jul 10 00:18:51.011948 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:18:51.011959 kernel: BIOS-provided physical RAM map:
Jul 10 00:18:51.011966 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 10 00:18:51.011973 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 10 00:18:51.011980 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 10 00:18:51.011988 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jul 10 00:18:51.012002 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jul 10 00:18:51.012014 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 10 00:18:51.012021 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 10 00:18:51.012029 kernel: NX (Execute Disable) protection: active
Jul 10 00:18:51.012036 kernel: APIC: Static calls initialized
Jul 10 00:18:51.012043 kernel: SMBIOS 2.8 present.
Jul 10 00:18:51.012051 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jul 10 00:18:51.012064 kernel: DMI: Memory slots populated: 1/1
Jul 10 00:18:51.012072 kernel: Hypervisor detected: KVM
Jul 10 00:18:51.012084 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 10 00:18:51.012092 kernel: kvm-clock: using sched offset of 4889462398 cycles
Jul 10 00:18:51.012101 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 10 00:18:51.012109 kernel: tsc: Detected 2494.170 MHz processor
Jul 10 00:18:51.012118 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 00:18:51.012127 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 00:18:51.012135 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jul 10 00:18:51.012147 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 10 00:18:51.012156 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 00:18:51.012164 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:18:51.012172 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jul 10 00:18:51.012181 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:18:51.012189 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:18:51.012198 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:18:51.012206 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 10 00:18:51.012214 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:18:51.012226 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:18:51.012236 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:18:51.012248 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001)
Jul 10 00:18:51.012295 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jul 10 00:18:51.012305 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jul 10 00:18:51.012314 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 10 00:18:51.012322 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jul 10 00:18:51.012331 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jul 10 00:18:51.012349 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jul 10 00:18:51.012358 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jul 10 00:18:51.012366 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 10 00:18:51.012711 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 10 00:18:51.012733 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Jul 10 00:18:51.012742 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Jul 10 00:18:51.012756 kernel: Zone ranges:
Jul 10 00:18:51.012766 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 00:18:51.012775 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jul 10 00:18:51.012784 kernel: Normal empty
Jul 10 00:18:51.012792 kernel: Device empty
Jul 10 00:18:51.012801 kernel: Movable zone start for each node
Jul 10 00:18:51.012810 kernel: Early memory node ranges
Jul 10 00:18:51.012819 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 10 00:18:51.012828 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jul 10 00:18:51.012862 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jul 10 00:18:51.012872 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:18:51.012880 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 10 00:18:51.012890 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jul 10 00:18:51.012899 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 10 00:18:51.012908 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 10 00:18:51.012922 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 10 00:18:51.012931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 10 00:18:51.012942 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 10 00:18:51.012955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 10 00:18:51.012967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 10 00:18:51.012976 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 10 00:18:51.012985 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 00:18:51.012994 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 10 00:18:51.013003 kernel: TSC deadline timer available
Jul 10 00:18:51.013012 kernel: CPU topo: Max. logical packages: 1
Jul 10 00:18:51.013036 kernel: CPU topo: Max. logical dies: 1
Jul 10 00:18:51.013049 kernel: CPU topo: Max. dies per package: 1
Jul 10 00:18:51.013067 kernel: CPU topo: Max. threads per core: 1
Jul 10 00:18:51.013080 kernel: CPU topo: Num. cores per package: 2
Jul 10 00:18:51.013092 kernel: CPU topo: Num. threads per package: 2
Jul 10 00:18:51.013101 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 10 00:18:51.013110 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 10 00:18:51.013119 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 10 00:18:51.013128 kernel: Booting paravirtualized kernel on KVM
Jul 10 00:18:51.013137 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 00:18:51.013147 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 10 00:18:51.013156 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 10 00:18:51.013169 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 10 00:18:51.013178 kernel: pcpu-alloc: [0] 0 1
Jul 10 00:18:51.013187 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 10 00:18:51.013198 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:18:51.013208 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:18:51.013217 kernel: random: crng init done
Jul 10 00:18:51.013226 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:18:51.013235 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 10 00:18:51.013248 kernel: Fallback order for Node 0: 0
Jul 10 00:18:51.013257 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Jul 10 00:18:51.013266 kernel: Policy zone: DMA32
Jul 10 00:18:51.013274 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:18:51.013285 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 10 00:18:51.013325 kernel: Kernel/User page tables isolation: enabled
Jul 10 00:18:51.013338 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 10 00:18:51.013347 kernel: ftrace: allocated 157 pages with 5 groups
Jul 10 00:18:51.013356 kernel: Dynamic Preempt: voluntary
Jul 10 00:18:51.013371 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:18:51.013403 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:18:51.013412 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 10 00:18:51.013422 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:18:51.013431 kernel: Rude variant of Tasks RCU enabled.
Jul 10 00:18:51.013440 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:18:51.013450 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:18:51.013459 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 10 00:18:51.013468 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:18:51.013486 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:18:51.013496 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:18:51.013505 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 10 00:18:51.013514 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:18:51.013523 kernel: Console: colour VGA+ 80x25
Jul 10 00:18:51.013532 kernel: printk: legacy console [tty0] enabled
Jul 10 00:18:51.013541 kernel: printk: legacy console [ttyS0] enabled
Jul 10 00:18:51.013550 kernel: ACPI: Core revision 20240827
Jul 10 00:18:51.013559 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 10 00:18:51.013583 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 00:18:51.013593 kernel: x2apic enabled
Jul 10 00:18:51.013603 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 10 00:18:51.013616 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 10 00:18:51.013628 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3b633397, max_idle_ns: 440795206106 ns
Jul 10 00:18:51.013638 kernel: Calibrating delay loop (skipped) preset value.. 4988.34 BogoMIPS (lpj=2494170)
Jul 10 00:18:51.013647 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 10 00:18:51.013657 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 10 00:18:51.013666 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 00:18:51.013680 kernel: Spectre V2 : Mitigation: Retpolines
Jul 10 00:18:51.013689 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 10 00:18:51.013699 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 10 00:18:51.013709 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 10 00:18:51.013760 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 10 00:18:51.013772 kernel: MDS: Mitigation: Clear CPU buffers
Jul 10 00:18:51.013782 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 10 00:18:51.013795 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 10 00:18:51.013805 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 10 00:18:51.013827 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 10 00:18:51.013836 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 10 00:18:51.013846 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 10 00:18:51.013855 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 10 00:18:51.013864 kernel: Freeing SMP alternatives memory: 32K
Jul 10 00:18:51.013874 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:18:51.013883 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 00:18:51.013897 kernel: landlock: Up and running.
Jul 10 00:18:51.013906 kernel: SELinux: Initializing.
Jul 10 00:18:51.013915 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 10 00:18:51.013924 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 10 00:18:51.013934 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jul 10 00:18:51.013944 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jul 10 00:18:51.013953 kernel: signal: max sigframe size: 1776
Jul 10 00:18:51.013963 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:18:51.013972 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:18:51.013986 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 00:18:51.013995 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 10 00:18:51.014004 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:18:51.014013 kernel: smpboot: x86: Booting SMP configuration:
Jul 10 00:18:51.014025 kernel: .... node #0, CPUs: #1
Jul 10 00:18:51.014034 kernel: smp: Brought up 1 node, 2 CPUs
Jul 10 00:18:51.014043 kernel: smpboot: Total of 2 processors activated (9976.68 BogoMIPS)
Jul 10 00:18:51.014053 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 125140K reserved, 0K cma-reserved)
Jul 10 00:18:51.014063 kernel: devtmpfs: initialized
Jul 10 00:18:51.014076 kernel: x86/mm: Memory block size: 128MB
Jul 10 00:18:51.014085 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:18:51.014104 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 10 00:18:51.014114 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:18:51.014123 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:18:51.014132 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:18:51.014142 kernel: audit: type=2000 audit(1752106727.082:1): state=initialized audit_enabled=0 res=1
Jul 10 00:18:51.014151 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:18:51.014160 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 10 00:18:51.014174 kernel: cpuidle: using governor menu
Jul 10 00:18:51.014188 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:18:51.014197 kernel: dca service started, version 1.12.1
Jul 10 00:18:51.014207 kernel: PCI: Using configuration type 1 for base access
Jul 10 00:18:51.014220 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 10 00:18:51.014235 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:18:51.014269 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:18:51.014281 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:18:51.014294 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:18:51.014314 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:18:51.014327 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:18:51.014343 kernel: ACPI: Interpreter enabled
Jul 10 00:18:51.014357 kernel: ACPI: PM: (supports S0 S5)
Jul 10 00:18:51.014371 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 10 00:18:51.014386 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 10 00:18:51.015981 kernel: PCI: Using E820 reservations for host bridge windows
Jul 10 00:18:51.016010 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 10 00:18:51.016028 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:18:51.016460 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:18:51.016808 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 10 00:18:51.017069 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 10 00:18:51.017097 kernel: acpiphp: Slot [3] registered
Jul 10 00:18:51.017118 kernel: acpiphp: Slot [4] registered
Jul 10 00:18:51.017141 kernel: acpiphp: Slot [5] registered
Jul 10 00:18:51.017161 kernel: acpiphp: Slot [6] registered
Jul 10 00:18:51.017196 kernel: acpiphp: Slot [7] registered
Jul 10 00:18:51.017220 kernel: acpiphp: Slot [8] registered
Jul 10 00:18:51.017242 kernel: acpiphp: Slot [9] registered
Jul 10 00:18:51.017264 kernel: acpiphp: Slot [10] registered
Jul 10 00:18:51.017286 kernel: acpiphp: Slot [11] registered
Jul 10 00:18:51.017306 kernel: acpiphp: Slot [12] registered
Jul 10 00:18:51.017327 kernel: acpiphp: Slot [13] registered
Jul 10 00:18:51.017346 kernel: acpiphp: Slot [14] registered
Jul 10 00:18:51.017368 kernel: acpiphp: Slot [15] registered
Jul 10 00:18:51.019334 kernel: acpiphp: Slot [16] registered
Jul 10 00:18:51.019368 kernel: acpiphp: Slot [17] registered
Jul 10 00:18:51.019434 kernel: acpiphp: Slot [18] registered
Jul 10 00:18:51.019448 kernel: acpiphp: Slot [19] registered
Jul 10 00:18:51.019462 kernel: acpiphp: Slot [20] registered
Jul 10 00:18:51.019476 kernel: acpiphp: Slot [21] registered
Jul 10 00:18:51.019489 kernel: acpiphp: Slot [22] registered
Jul 10 00:18:51.019503 kernel: acpiphp: Slot [23] registered
Jul 10 00:18:51.019518 kernel: acpiphp: Slot [24] registered
Jul 10 00:18:51.019532 kernel: acpiphp: Slot [25] registered
Jul 10 00:18:51.019554 kernel: acpiphp: Slot [26] registered
Jul 10 00:18:51.019570 kernel: acpiphp: Slot [27] registered
Jul 10 00:18:51.019585 kernel: acpiphp: Slot [28] registered
Jul 10 00:18:51.019600 kernel: acpiphp: Slot [29] registered
Jul 10 00:18:51.019614 kernel: acpiphp: Slot [30] registered
Jul 10 00:18:51.019627 kernel: acpiphp: Slot [31] registered
Jul 10 00:18:51.019641 kernel: PCI host bridge to bus 0000:00
Jul 10 00:18:51.019884 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 10 00:18:51.019989 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 10 00:18:51.020084 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 10 00:18:51.020170 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 10 00:18:51.020254 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 10 00:18:51.020339 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:18:51.020544 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 10 00:18:51.020666 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 10 00:18:51.020824 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jul 10 00:18:51.020958 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Jul 10 00:18:51.021196 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jul 10 00:18:51.021304 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jul 10 00:18:51.021504 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jul 10 00:18:51.021621 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jul 10 00:18:51.021760 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jul 10 00:18:51.021880 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Jul 10 00:18:51.021996 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 10 00:18:51.022098 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 10 00:18:51.022210 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 10 00:18:51.022372 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jul 10 00:18:51.022493 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jul 10 00:18:51.022604 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 10 00:18:51.022768 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Jul 10 00:18:51.022869 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Jul 10 00:18:51.022965 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 10 00:18:51.023090 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 10 00:18:51.023206 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Jul 10 00:18:51.023316 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Jul 10 00:18:51.023495 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 10 00:18:51.023676 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 10 00:18:51.023841 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Jul 10 00:18:51.024011 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Jul 10 00:18:51.024212 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 10 00:18:51.024487 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jul 10 00:18:51.024682 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Jul 10 00:18:51.024898 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Jul 10 00:18:51.025091 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 10 00:18:51.025365 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 10 00:18:51.034158 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Jul 10 00:18:51.034465 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Jul 10 00:18:51.034671 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 10 00:18:51.034905 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 10 00:18:51.035129 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Jul 10 00:18:51.035354 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Jul 10 00:18:51.035673 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Jul 10 00:18:51.035948 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jul 10 00:18:51.036174 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Jul 10 00:18:51.036453 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Jul 10 00:18:51.036485 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 10 00:18:51.036509 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 10 00:18:51.036530 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 10 00:18:51.036555 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 10 00:18:51.036623 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 10 00:18:51.036645 kernel: iommu: Default domain type: Translated
Jul 10 00:18:51.036665 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 10 00:18:51.036685 kernel: PCI: Using ACPI for IRQ routing
Jul 10 00:18:51.036720 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 10 00:18:51.036741 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 10 00:18:51.036766 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jul 10 00:18:51.037102 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 10 00:18:51.037330 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 10 00:18:51.041560 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 10 00:18:51.041646 kernel: vgaarb: loaded
Jul 10 00:18:51.041682 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 10 00:18:51.041708 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 10 00:18:51.041774 kernel: clocksource: Switched to clocksource kvm-clock
Jul 10 00:18:51.041797 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:18:51.041819 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:18:51.041837 kernel: pnp: PnP ACPI init
Jul 10 00:18:51.041856 kernel: pnp: PnP ACPI: found 4 devices
Jul 10 00:18:51.041878 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 10 00:18:51.041901 kernel: NET: Registered PF_INET protocol family
Jul 10 00:18:51.041923 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:18:51.041945 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 10 00:18:51.041979 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:18:51.042002 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 10 00:18:51.042024 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 10 00:18:51.042048 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 10 00:18:51.042069 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 10 00:18:51.042091 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 10 00:18:51.042116 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:18:51.042139 kernel: NET: Registered PF_XDP protocol family
Jul 10 00:18:51.042621 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 10 00:18:51.042932 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 10 00:18:51.043173 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 10 00:18:51.043397 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 10 00:18:51.043606 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 10 00:18:51.043831 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 10 00:18:51.044057 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 10 00:18:51.044085 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 10 00:18:51.044308 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 28771 usecs
Jul 10 00:18:51.044339 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:18:51.044360 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 10 00:18:51.044381 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3b633397, max_idle_ns: 440795206106 ns
Jul 10 00:18:51.047222 kernel: Initialise system trusted keyrings
Jul 10 00:18:51.047239 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 10 00:18:51.047250 kernel: Key type asymmetric registered
Jul 10 00:18:51.047261 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:18:51.047270 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:18:51.047292 kernel: io scheduler mq-deadline registered
Jul 10 00:18:51.047302 kernel: io scheduler kyber registered
Jul 10 00:18:51.047312 kernel: io scheduler bfq registered
Jul 10 00:18:51.047322 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 10 00:18:51.047333 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 10 00:18:51.047343 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 10 00:18:51.047353 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 10 00:18:51.047362 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:18:51.047372 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:18:51.047406 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 10 00:18:51.047435 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 10 00:18:51.047449 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 10 00:18:51.047710 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 10 00:18:51.047731 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 10 00:18:51.047825 kernel: rtc_cmos 00:03: registered as rtc0
Jul 10 00:18:51.047918 kernel: rtc_cmos 00:03: setting system clock to 2025-07-10T00:18:50 UTC (1752106730)
Jul 10 00:18:51.048007 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jul 10 00:18:51.048028 kernel: intel_pstate: CPU model not supported
Jul 10 00:18:51.048038 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:18:51.048048 kernel: Segment Routing with IPv6
Jul 10 00:18:51.048057 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:18:51.048067 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:18:51.048077 kernel: Key type dns_resolver registered
Jul 10 00:18:51.048087 kernel: IPI shorthand broadcast: enabled
Jul 10 00:18:51.048097 kernel: sched_clock: Marking stable (4078005676, 92677855)->(4192522401, -21838870)
Jul 10 00:18:51.048107 kernel: registered taskstats version 1
Jul 10 00:18:51.048120 kernel: Loading compiled-in X.509 certificates
Jul 10 00:18:51.048129 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf'
Jul 10 00:18:51.048138 kernel: Demotion targets for Node 0: null
Jul 10 00:18:51.048148 kernel: Key type .fscrypt registered
Jul 10 00:18:51.048158 kernel: Key type fscrypt-provisioning registered
Jul 10 00:18:51.048171 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:18:51.048207 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:18:51.048220 kernel: ima: No architecture policies found
Jul 10 00:18:51.048230 kernel: clk: Disabling unused clocks
Jul 10 00:18:51.048244 kernel: Warning: unable to open an initial console.
Jul 10 00:18:51.048255 kernel: Freeing unused kernel image (initmem) memory: 54420K
Jul 10 00:18:51.048265 kernel: Write protecting the kernel read-only data: 24576k
Jul 10 00:18:51.048275 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 10 00:18:51.048288 kernel: Run /init as init process
Jul 10 00:18:51.048302 kernel: with arguments:
Jul 10 00:18:51.048313 kernel: /init
Jul 10 00:18:51.048324 kernel: with environment:
Jul 10 00:18:51.048339 kernel: HOME=/
Jul 10 00:18:51.048357 kernel: TERM=linux
Jul 10 00:18:51.048368 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:18:51.048398 systemd[1]: Successfully made /usr/ read-only.
Jul 10 00:18:51.048414 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:18:51.048426 systemd[1]: Detected virtualization kvm.
Jul 10 00:18:51.048436 systemd[1]: Detected architecture x86-64.
Jul 10 00:18:51.048446 systemd[1]: Running in initrd.
Jul 10 00:18:51.048461 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:18:51.048472 systemd[1]: Hostname set to .
Jul 10 00:18:51.048482 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:18:51.048492 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:18:51.048503 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:18:51.048513 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:18:51.048525 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:18:51.048535 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:18:51.048549 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:18:51.048560 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:18:51.048572 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:18:51.048586 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:18:51.048601 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:18:51.048611 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:18:51.048622 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:18:51.048632 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:18:51.048643 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:18:51.048653 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:18:51.048664 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:18:51.048674 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:18:51.048685 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:18:51.048699 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 00:18:51.048709 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:18:51.048720 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:18:51.048730 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:18:51.048741 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:18:51.048751 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 00:18:51.048762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:18:51.048772 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 00:18:51.048787 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 10 00:18:51.048798 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:18:51.048808 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:18:51.048818 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:18:51.048829 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:18:51.048840 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 00:18:51.048854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:18:51.048865 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:18:51.048876 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:18:51.048927 systemd-journald[212]: Collecting audit messages is disabled.
Jul 10 00:18:51.048965 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:18:51.048976 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:18:51.048987 kernel: Bridge firewalling registered
Jul 10 00:18:51.048999 systemd-journald[212]: Journal started
Jul 10 00:18:51.049040 systemd-journald[212]: Runtime Journal (/run/log/journal/e676653b4862453ca0fc31ccc3d8c9d3) is 4.9M, max 39.5M, 34.6M free.
Jul 10 00:18:50.995238 systemd-modules-load[214]: Inserted module 'overlay'
Jul 10 00:18:51.117013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:18:51.117149 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:18:51.046264 systemd-modules-load[214]: Inserted module 'br_netfilter'
Jul 10 00:18:51.120046 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:18:51.123058 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:18:51.130818 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:18:51.134623 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:18:51.146343 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:18:51.150503 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:18:51.161234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:18:51.169123 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:18:51.171078 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 00:18:51.172441 systemd-tmpfiles[234]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 10 00:18:51.182033 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:18:51.187706 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:18:51.201790 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:18:51.259803 systemd-resolved[252]: Positive Trust Anchors:
Jul 10 00:18:51.259828 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:18:51.259896 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:18:51.265859 systemd-resolved[252]: Defaulting to hostname 'linux'.
Jul 10 00:18:51.268013 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:18:51.269759 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:18:51.354478 kernel: SCSI subsystem initialized
Jul 10 00:18:51.369454 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 00:18:51.386465 kernel: iscsi: registered transport (tcp)
Jul 10 00:18:51.419497 kernel: iscsi: registered transport (qla4xxx)
Jul 10 00:18:51.419594 kernel: QLogic iSCSI HBA Driver
Jul 10 00:18:51.453406 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:18:51.475798 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:18:51.476916 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:18:51.555037 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:18:51.559168 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 00:18:51.621470 kernel: raid6: avx2x4 gen() 18559 MB/s
Jul 10 00:18:51.638461 kernel: raid6: avx2x2 gen() 20354 MB/s
Jul 10 00:18:51.655956 kernel: raid6: avx2x1 gen() 16018 MB/s
Jul 10 00:18:51.656081 kernel: raid6: using algorithm avx2x2 gen() 20354 MB/s
Jul 10 00:18:51.674482 kernel: raid6: .... xor() 11653 MB/s, rmw enabled
Jul 10 00:18:51.674588 kernel: raid6: using avx2x2 recovery algorithm
Jul 10 00:18:51.707451 kernel: xor: automatically using best checksumming function avx
Jul 10 00:18:51.923895 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 10 00:18:51.933386 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:18:51.937788 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:18:51.982615 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jul 10 00:18:51.993763 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:18:51.997861 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 10 00:18:52.033120 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Jul 10 00:18:52.078415 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:18:52.081326 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:18:52.165693 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:18:52.170072 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 10 00:18:52.271593 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jul 10 00:18:52.278612 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Jul 10 00:18:52.288359 kernel: scsi host0: Virtio SCSI HBA
Jul 10 00:18:52.290823 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jul 10 00:18:52.291140 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 10 00:18:52.292647 kernel: GPT:9289727 != 125829119
Jul 10 00:18:52.292695 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 10 00:18:52.294401 kernel: GPT:9289727 != 125829119
Jul 10 00:18:52.294452 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 10 00:18:52.294466 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:18:52.306415 kernel: cryptd: max_cpu_qlen set to 1000
Jul 10 00:18:52.335407 kernel: AES CTR mode by8 optimization enabled
Jul 10 00:18:52.373668 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:18:52.377855 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 10 00:18:52.379694 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:18:52.382616 kernel: libata version 3.00 loaded.
Jul 10 00:18:52.382810 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:18:52.386136 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 10 00:18:52.387745 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jul 10 00:18:52.387953 kernel: scsi host1: ata_piix
Jul 10 00:18:52.388147 kernel: scsi host2: ata_piix
Jul 10 00:18:52.388312 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Jul 10 00:18:52.388872 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Jul 10 00:18:52.391866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:18:52.392721 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:18:52.395440 kernel: ACPI: bus type USB registered
Jul 10 00:18:52.397406 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jul 10 00:18:52.403305 kernel: usbcore: registered new interface driver usbfs
Jul 10 00:18:52.406533 kernel: usbcore: registered new interface driver hub
Jul 10 00:18:52.410437 kernel: usbcore: registered new device driver usb
Jul 10 00:18:52.471871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:18:52.602659 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 10 00:18:52.626270 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 10 00:18:52.636436 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jul 10 00:18:52.636744 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jul 10 00:18:52.637677 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jul 10 00:18:52.638915 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jul 10 00:18:52.640025 kernel: hub 1-0:1.0: USB hub found
Jul 10 00:18:52.640827 kernel: hub 1-0:1.0: 2 ports detected
Jul 10 00:18:52.653465 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 00:18:52.666216 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 10 00:18:52.667743 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 10 00:18:52.669786 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:18:52.672532 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:18:52.673961 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:18:52.675454 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:18:52.692934 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 10 00:18:52.694886 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 10 00:18:52.715328 disk-uuid[618]: Primary Header is updated.
Jul 10 00:18:52.715328 disk-uuid[618]: Secondary Entries is updated.
Jul 10 00:18:52.715328 disk-uuid[618]: Secondary Header is updated.
Jul 10 00:18:52.724250 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:18:52.733612 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:18:52.751435 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:18:53.743548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:18:53.743617 disk-uuid[621]: The operation has completed successfully.
Jul 10 00:18:53.802638 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 10 00:18:53.802804 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 10 00:18:53.833930 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 10 00:18:53.855773 sh[637]: Success
Jul 10 00:18:53.878619 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 00:18:53.878727 kernel: device-mapper: uevent: version 1.0.3
Jul 10 00:18:53.879580 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 10 00:18:53.891442 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jul 10 00:18:53.984266 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 10 00:18:53.990607 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 10 00:18:54.003456 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 10 00:18:54.022459 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 10 00:18:54.024602 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (649)
Jul 10 00:18:54.029353 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee
Jul 10 00:18:54.029462 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 10 00:18:54.029499 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 10 00:18:54.037649 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 10 00:18:54.038895 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:18:54.039883 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 10 00:18:54.041603 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 10 00:18:54.043543 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 10 00:18:54.073489 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (680)
Jul 10 00:18:54.076564 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6
Jul 10 00:18:54.076667 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 10 00:18:54.076690 kernel: BTRFS info (device vda6): using free-space-tree
Jul 10 00:18:54.087467 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6
Jul 10 00:18:54.089719 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 10 00:18:54.092062 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 10 00:18:54.199866 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:18:54.206892 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:18:54.282213 systemd-networkd[819]: lo: Link UP
Jul 10 00:18:54.283003 systemd-networkd[819]: lo: Gained carrier
Jul 10 00:18:54.286731 systemd-networkd[819]: Enumeration completed
Jul 10 00:18:54.287971 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 10 00:18:54.287977 systemd-networkd[819]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jul 10 00:18:54.288712 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:18:54.289360 systemd[1]: Reached target network.target - Network.
Jul 10 00:18:54.290992 systemd-networkd[819]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:18:54.290996 systemd-networkd[819]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:18:54.292303 systemd-networkd[819]: eth0: Link UP
Jul 10 00:18:54.292307 systemd-networkd[819]: eth0: Gained carrier
Jul 10 00:18:54.292322 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 10 00:18:54.297156 ignition[723]: Ignition 2.21.0
Jul 10 00:18:54.297166 ignition[723]: Stage: fetch-offline
Jul 10 00:18:54.297222 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:18:54.297232 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 10 00:18:54.299306 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:18:54.297355 ignition[723]: parsed url from cmdline: ""
Jul 10 00:18:54.300901 systemd-networkd[819]: eth1: Link UP
Jul 10 00:18:54.297358 ignition[723]: no config URL provided
Jul 10 00:18:54.300907 systemd-networkd[819]: eth1: Gained carrier
Jul 10 00:18:54.297364 ignition[723]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 00:18:54.300970 systemd-networkd[819]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:18:54.297373 ignition[723]: no config at "/usr/lib/ignition/user.ign"
Jul 10 00:18:54.302626 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 10 00:18:54.297394 ignition[723]: failed to fetch config: resource requires networking
Jul 10 00:18:54.297633 ignition[723]: Ignition finished successfully
Jul 10 00:18:54.318503 systemd-networkd[819]: eth1: DHCPv4 address 10.124.0.32/20 acquired from 169.254.169.253
Jul 10 00:18:54.324526 systemd-networkd[819]: eth0: DHCPv4 address 164.90.146.220/20, gateway 164.90.144.1 acquired from 169.254.169.253
Jul 10 00:18:54.349239 ignition[828]: Ignition 2.21.0
Jul 10 00:18:54.349261 ignition[828]: Stage: fetch
Jul 10 00:18:54.349504 ignition[828]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:18:54.349520 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 10 00:18:54.349653 ignition[828]: parsed url from cmdline: ""
Jul 10 00:18:54.349659 ignition[828]: no config URL provided
Jul 10 00:18:54.349666 ignition[828]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 00:18:54.349677 ignition[828]: no config at "/usr/lib/ignition/user.ign"
Jul 10 00:18:54.349719 ignition[828]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jul 10 00:18:54.379929 ignition[828]: GET result: OK
Jul 10 00:18:54.380280 ignition[828]: parsing config with SHA512: c24923a72947f293fd8599acbb34f944cb6cd93dd5db82371e4ae6d5ebe654f5ff8606dbb2f8675a0b19953f3af06caf4850d5cf62f64953d799f22fb15b3c8f
Jul 10 00:18:54.390011 unknown[828]: fetched base config from "system"
Jul 10 00:18:54.390023 unknown[828]: fetched base config from "system"
Jul 10 00:18:54.390403 ignition[828]: fetch: fetch complete
Jul 10 00:18:54.390030 unknown[828]: fetched user config from "digitalocean"
Jul 10 00:18:54.390414 ignition[828]: fetch: fetch passed
Jul 10 00:18:54.394454 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 10 00:18:54.390477 ignition[828]: Ignition finished successfully
Jul 10 00:18:54.396310 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 00:18:54.434051 ignition[836]: Ignition 2.21.0
Jul 10 00:18:54.434067 ignition[836]: Stage: kargs
Jul 10 00:18:54.434283 ignition[836]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:18:54.434297 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 10 00:18:54.436066 ignition[836]: kargs: kargs passed
Jul 10 00:18:54.436168 ignition[836]: Ignition finished successfully
Jul 10 00:18:54.441539 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 00:18:54.444713 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 00:18:54.476072 ignition[842]: Ignition 2.21.0
Jul 10 00:18:54.476089 ignition[842]: Stage: disks
Jul 10 00:18:54.476366 ignition[842]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:18:54.476398 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 10 00:18:54.477545 ignition[842]: disks: disks passed
Jul 10 00:18:54.479339 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 10 00:18:54.477606 ignition[842]: Ignition finished successfully
Jul 10 00:18:54.480449 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 10 00:18:54.480829 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 00:18:54.481598 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:18:54.482506 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:18:54.483292 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:18:54.485320 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 10 00:18:54.520598 systemd-fsck[851]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 10 00:18:54.525450 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 10 00:18:54.530962 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 10 00:18:54.670126 kernel: EXT4-fs (vda9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none.
Jul 10 00:18:54.669679 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 10 00:18:54.671643 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:18:54.674601 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:18:54.676774 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 10 00:18:54.689180 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Jul 10 00:18:54.692587 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 10 00:18:54.693929 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 00:18:54.694664 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:18:54.699444 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (859)
Jul 10 00:18:54.700146 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 10 00:18:54.704349 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6
Jul 10 00:18:54.704419 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 10 00:18:54.704441 kernel: BTRFS info (device vda6): using free-space-tree
Jul 10 00:18:54.714134 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 10 00:18:54.720561 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:18:54.789796 coreos-metadata[861]: Jul 10 00:18:54.789 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 10 00:18:54.796169 initrd-setup-root[889]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 00:18:54.804860 coreos-metadata[861]: Jul 10 00:18:54.804 INFO Fetch successful
Jul 10 00:18:54.807575 coreos-metadata[862]: Jul 10 00:18:54.807 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 10 00:18:54.809708 initrd-setup-root[896]: cut: /sysroot/etc/group: No such file or directory
Jul 10 00:18:54.818053 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Jul 10 00:18:54.819516 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Jul 10 00:18:54.822616 initrd-setup-root[903]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 00:18:54.823662 coreos-metadata[862]: Jul 10 00:18:54.823 INFO Fetch successful
Jul 10 00:18:54.831529 coreos-metadata[862]: Jul 10 00:18:54.831 INFO wrote hostname ci-4344.1.1-n-5827fce73f to /sysroot/etc/hostname
Jul 10 00:18:54.832245 initrd-setup-root[911]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 00:18:54.833700 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 10 00:18:54.977505 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 10 00:18:54.980258 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 10 00:18:54.982567 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 10 00:18:55.013436 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6
Jul 10 00:18:55.025972 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 10 00:18:55.037622 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 00:18:55.067271 ignition[981]: INFO : Ignition 2.21.0
Jul 10 00:18:55.067271 ignition[981]: INFO : Stage: mount
Jul 10 00:18:55.069535 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:18:55.069535 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 10 00:18:55.072535 ignition[981]: INFO : mount: mount passed
Jul 10 00:18:55.072535 ignition[981]: INFO : Ignition finished successfully
Jul 10 00:18:55.075345 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 00:18:55.078578 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 00:18:55.104427 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:18:55.138507 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (993)
Jul 10 00:18:55.141526 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6
Jul 10 00:18:55.141619 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 10 00:18:55.143555 kernel: BTRFS info (device vda6): using free-space-tree
Jul 10 00:18:55.149695 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:18:55.201992 ignition[1010]: INFO : Ignition 2.21.0
Jul 10 00:18:55.204603 ignition[1010]: INFO : Stage: files
Jul 10 00:18:55.204603 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:18:55.204603 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 10 00:18:55.209044 ignition[1010]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:18:55.210207 ignition[1010]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:18:55.210207 ignition[1010]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:18:55.214401 ignition[1010]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:18:55.215561 ignition[1010]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:18:55.216759 unknown[1010]: wrote ssh authorized keys file for user: core
Jul 10 00:18:55.217804 ignition[1010]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:18:55.220846 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 10 00:18:55.222145 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 10 00:18:55.270910 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 00:18:55.545661 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 10 00:18:55.545661 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:18:55.547328 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:18:55.547328 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:18:55.547328 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:18:55.547328 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:18:55.547328 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:18:55.547328 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:18:55.547328 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:18:55.560241 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:18:55.560241 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:18:55.560241 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:18:55.560241 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:18:55.560241 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:18:55.560241 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 10 00:18:55.846647 systemd-networkd[819]: eth1: Gained IPv6LL
Jul 10 00:18:56.166628 systemd-networkd[819]: eth0: Gained IPv6LL
Jul 10 00:18:56.273212 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 10 00:18:56.656789 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:18:56.656789 ignition[1010]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 10 00:18:56.659230 ignition[1010]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:18:56.661044 ignition[1010]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:18:56.661044 ignition[1010]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 10 00:18:56.661044 ignition[1010]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:18:56.667043 ignition[1010]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:18:56.667043 ignition[1010]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:18:56.667043 ignition[1010]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:18:56.667043 ignition[1010]: INFO : files: files passed
Jul 10 00:18:56.667043 ignition[1010]: INFO : Ignition finished successfully
Jul 10 00:18:56.665033 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:18:56.669121 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:18:56.673563 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:18:56.692211 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:18:56.692499 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:18:56.706223 initrd-setup-root-after-ignition[1040]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:18:56.706223 initrd-setup-root-after-ignition[1040]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:18:56.710075 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:18:56.712519 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:18:56.714191 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:18:56.716340 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:18:56.780249 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:18:56.780517 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:18:56.782139 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:18:56.782970 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:18:56.784126 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:18:56.786172 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:18:56.819829 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:18:56.824342 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:18:56.856585 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:18:56.858129 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:18:56.859634 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:18:56.860762 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:18:56.861037 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:18:56.863514 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:18:56.864646 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:18:56.865865 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:18:56.867058 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:18:56.867619 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:18:56.868231 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:18:56.868856 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:18:56.869929 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:18:56.870878 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:18:56.871623 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:18:56.872336 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:18:56.873173 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:18:56.873477 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:18:56.874913 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:18:56.875891 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:18:56.876837 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 00:18:56.877085 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:18:56.877790 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:18:56.878052 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:18:56.879441 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:18:56.879705 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:18:56.880642 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:18:56.880882 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 00:18:56.881977 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 10 00:18:56.882140 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 10 00:18:56.885691 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 00:18:56.886172 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:18:56.886415 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:18:56.889816 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 00:18:56.891546 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:18:56.891814 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:18:56.898346 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:18:56.898601 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:18:56.908367 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:18:56.908563 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 00:18:56.930670 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:18:56.941328 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:18:56.942222 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 00:18:56.945333 ignition[1064]: INFO : Ignition 2.21.0
Jul 10 00:18:56.945333 ignition[1064]: INFO : Stage: umount
Jul 10 00:18:56.949099 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:18:56.949099 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 10 00:18:56.950268 ignition[1064]: INFO : umount: umount passed
Jul 10 00:18:56.950268 ignition[1064]: INFO : Ignition finished successfully
Jul 10 00:18:56.951790 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:18:56.952612 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 00:18:56.954704 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:18:56.955399 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 00:18:56.956492 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:18:56.957115 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 00:18:56.958178 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 10 00:18:56.958248 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 10 00:18:56.958730 systemd[1]: Stopped target network.target - Network.
Jul 10 00:18:56.959778 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:18:56.959858 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:18:56.960623 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 00:18:56.961230 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:18:56.965577 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:18:56.966259 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 00:18:56.967473 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 00:18:56.968233 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:18:56.968303 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:18:56.968986 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:18:56.969038 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:18:56.969989 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:18:56.970073 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 00:18:56.970850 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 00:18:56.970902 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 00:18:56.971687 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:18:56.971748 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 00:18:56.972796 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 00:18:56.973856 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 00:18:56.978969 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:18:56.979524 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 00:18:56.983891 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 00:18:56.985288 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:18:56.985449 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:18:56.989191 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:18:56.989890 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:18:56.990159 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 00:18:56.994053 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 00:18:56.995430 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 10 00:18:56.996355 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:18:56.996505 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:18:56.999696 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 00:18:57.000453 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:18:57.000586 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:18:57.001739 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:18:57.001861 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:18:57.003019 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:18:57.003127 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:18:57.003943 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:18:57.015551 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 00:18:57.067610 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:18:57.069184 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:18:57.073223 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:18:57.074662 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:18:57.076038 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:18:57.076779 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:18:57.077938 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:18:57.078488 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:18:57.079746 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:18:57.079822 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:18:57.081350 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:18:57.081482 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:18:57.085609 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 00:18:57.086228 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 00:18:57.086345 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:18:57.087764 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:18:57.087838 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:18:57.090164 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:18:57.090264 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:18:57.095351 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:18:57.100003 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:18:57.111472 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:18:57.111655 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 00:18:57.114693 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 00:18:57.116823 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 00:18:57.140082 systemd[1]: Switching root.
Jul 10 00:18:57.176247 systemd-journald[212]: Journal stopped
Jul 10 00:18:58.566358 systemd-journald[212]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:18:58.566455 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:18:58.566478 kernel: SELinux: policy capability open_perms=1
Jul 10 00:18:58.566491 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:18:58.566504 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:18:58.566520 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:18:58.566537 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:18:58.566555 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:18:58.566567 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:18:58.566582 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 00:18:58.566594 kernel: audit: type=1403 audit(1752106737.358:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 00:18:58.566607 systemd[1]: Successfully loaded SELinux policy in 51.367ms.
Jul 10 00:18:58.566630 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.708ms.
Jul 10 00:18:58.566645 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:18:58.566662 systemd[1]: Detected virtualization kvm.
Jul 10 00:18:58.566677 systemd[1]: Detected architecture x86-64.
Jul 10 00:18:58.566690 systemd[1]: Detected first boot.
Jul 10 00:18:58.566709 systemd[1]: Hostname set to .
Jul 10 00:18:58.566722 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:18:58.566735 zram_generator::config[1111]: No configuration found.
Jul 10 00:18:58.566752 kernel: Guest personality initialized and is inactive
Jul 10 00:18:58.566764 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 10 00:18:58.566779 kernel: Initialized host personality
Jul 10 00:18:58.566795 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 00:18:58.566812 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:18:58.566827 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 00:18:58.566843 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 00:18:58.566856 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 00:18:58.566869 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:18:58.566885 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:18:58.566899 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:18:58.566915 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:18:58.566928 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:18:58.566944 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:18:58.566958 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:18:58.566971 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:18:58.566987 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:18:58.567000 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:18:58.567013 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:18:58.567026 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:18:58.567045 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:18:58.567057 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:18:58.567070 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:18:58.567083 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 10 00:18:58.567096 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:18:58.567109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:18:58.567125 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 00:18:58.567139 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 00:18:58.567155 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:18:58.567168 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:18:58.567180 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:18:58.567223 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:18:58.567246 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:18:58.567263 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:18:58.567282 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:18:58.567305 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:18:58.567327 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 00:18:58.567345 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:18:58.567367 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:18:58.571510 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:18:58.571561 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:18:58.571583 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:18:58.571603 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:18:58.571624 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:18:58.571660 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:18:58.571765 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:18:58.571803 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:18:58.571822 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:18:58.571846 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:18:58.571864 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:18:58.571886 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:18:58.571907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:18:58.571933 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:18:58.571954 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:18:58.571972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:18:58.571988 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:18:58.572006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:18:58.572026 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:18:58.572044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:18:58.572068 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:18:58.572088 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 00:18:58.572112 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 00:18:58.572132 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 00:18:58.572151 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 00:18:58.572193 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:18:58.572214 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:18:58.572236 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:18:58.572257 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:18:58.572281 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:18:58.572300 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 00:18:58.572327 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:18:58.572350 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 00:18:58.572366 systemd[1]: Stopped verity-setup.service.
Jul 10 00:18:58.572453 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:18:58.572473 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:18:58.572492 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:18:58.572513 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:18:58.572536 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:18:58.572557 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:18:58.572583 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:18:58.572604 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:18:58.572625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:18:58.572645 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:18:58.572759 systemd-journald[1180]: Collecting audit messages is disabled.
Jul 10 00:18:58.572806 systemd-journald[1180]: Journal started
Jul 10 00:18:58.572881 systemd-journald[1180]: Runtime Journal (/run/log/journal/e676653b4862453ca0fc31ccc3d8c9d3) is 4.9M, max 39.5M, 34.6M free.
Jul 10 00:18:58.219847 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:18:58.579702 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:18:58.243684 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 10 00:18:58.244506 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 00:18:58.658010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:18:58.658275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:18:58.660566 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:18:58.663159 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:18:58.666293 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:18:58.666363 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:18:58.680189 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 00:18:58.690617 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:18:58.692898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:18:58.723538 kernel: fuse: init (API version 7.41)
Jul 10 00:18:58.718101 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 00:18:58.726204 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 00:18:58.727261 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:18:58.739580 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 00:18:58.750355 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:18:58.762077 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 00:18:58.765053 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:18:58.768696 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:18:58.769779 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:18:58.776538 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:18:58.776943 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:18:58.778646 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 00:18:58.786080 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:18:58.793686 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:18:58.799637 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:18:58.810797 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 00:18:58.823617 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 00:18:58.836431 kernel: loop: module loaded
Jul 10 00:18:58.835817 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:18:58.836965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:18:58.838265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:18:58.840787 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 00:18:58.844723 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 00:18:58.845650 systemd-journald[1180]: Time spent on flushing to /var/log/journal/e676653b4862453ca0fc31ccc3d8c9d3 is 132.690ms for 1002 entries.
Jul 10 00:18:58.845650 systemd-journald[1180]: System Journal (/var/log/journal/e676653b4862453ca0fc31ccc3d8c9d3) is 8M, max 195.6M, 187.6M free.
Jul 10 00:19:00.742802 systemd-journald[1180]: Received client request to flush runtime journal.
Jul 10 00:19:00.742977 kernel: loop0: detected capacity change from 0 to 229808
Jul 10 00:19:00.743031 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1225530585 wd_nsec: 1225530084
Jul 10 00:19:00.743065 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 00:19:00.743087 kernel: ACPI: bus type drm_connector registered
Jul 10 00:18:58.855790 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 10 00:18:58.926802 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 10 00:18:58.941884 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:18:58.957865 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:18:58.974166 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 00:19:00.705678 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 00:19:00.738113 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:19:00.739713 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:19:00.740240 systemd[1]: modprobe@drm.service: Consumed 1.759s CPU time, 3.2M memory peak.
Jul 10 00:19:00.740827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:19:00.752621 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 00:19:00.779539 kernel: loop1: detected capacity change from 0 to 146240
Jul 10 00:19:00.835325 kernel: loop2: detected capacity change from 0 to 113872
Jul 10 00:19:00.849925 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 00:19:00.857789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:19:00.886063 kernel: loop3: detected capacity change from 0 to 8
Jul 10 00:19:00.919536 kernel: loop4: detected capacity change from 0 to 229808
Jul 10 00:19:00.976419 kernel: loop5: detected capacity change from 0 to 146240
Jul 10 00:19:00.979648 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jul 10 00:19:00.979680 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Jul 10 00:19:01.016426 kernel: loop6: detected capacity change from 0 to 113872
Jul 10 00:19:01.032802 kernel: loop7: detected capacity change from 0 to 8
Jul 10 00:19:01.034910 (sd-merge)[1257]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jul 10 00:19:01.035921 (sd-merge)[1257]: Merged extensions into '/usr'.
Jul 10 00:19:01.036238 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:19:01.046406 systemd[1]: Reload requested from client PID 1216 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 00:19:01.046444 systemd[1]: Reloading...
Jul 10 00:19:01.206465 zram_generator::config[1281]: No configuration found.
Jul 10 00:19:01.575426 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:19:01.762963 ldconfig[1208]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 10 00:19:01.795551 systemd[1]: Reloading finished in 747 ms.
Jul 10 00:19:01.812680 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 10 00:19:01.814116 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 10 00:19:01.830001 systemd[1]: Starting ensure-sysext.service...
Jul 10 00:19:01.834751 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:19:01.882554 systemd[1]: Reload requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)...
Jul 10 00:19:01.882575 systemd[1]: Reloading...
Jul 10 00:19:01.892251 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 10 00:19:01.892294 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 10 00:19:01.892747 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 10 00:19:01.893188 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 10 00:19:01.895141 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 10 00:19:01.897777 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
Jul 10 00:19:01.897902 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
Jul 10 00:19:01.906140 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:19:01.906162 systemd-tmpfiles[1328]: Skipping /boot
Jul 10 00:19:01.928986 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:19:01.929009 systemd-tmpfiles[1328]: Skipping /boot
Jul 10 00:19:02.055484 zram_generator::config[1358]: No configuration found.
Jul 10 00:19:02.237238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:19:02.389691 systemd[1]: Reloading finished in 506 ms.
Jul 10 00:19:02.415968 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 10 00:19:02.429977 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:19:02.445057 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 00:19:02.450899 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 10 00:19:02.456132 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 10 00:19:02.461899 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:19:02.476040 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:19:02.486655 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 10 00:19:02.493140 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:19:02.493757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:19:02.498329 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:19:02.502981 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:19:02.511484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:19:02.512248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:19:02.512502 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:19:02.512672 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:19:02.524030 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 00:19:02.528275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:19:02.529849 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:19:02.530222 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:19:02.530902 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:19:02.531091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:19:02.540123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:19:02.541620 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:19:02.553918 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:19:02.555782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:19:02.556032 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:19:02.556266 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:19:02.560547 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:19:02.560990 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:19:02.563725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:19:02.564489 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:19:02.581506 systemd[1]: Finished ensure-sysext.service.
Jul 10 00:19:02.586694 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:19:02.588908 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:19:02.590351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:19:02.592000 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:19:02.602212 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:19:02.603740 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:19:02.608650 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 10 00:19:02.611524 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 10 00:19:02.655534 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 10 00:19:02.658174 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 10 00:19:02.666580 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 10 00:19:02.667512 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 00:19:02.697179 systemd-udevd[1404]: Using default interface naming scheme 'v255'.
Jul 10 00:19:02.718123 augenrules[1440]: No rules
Jul 10 00:19:02.720473 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 10 00:19:02.722021 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 00:19:02.722715 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 00:19:02.731066 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 00:19:02.767485 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:19:02.773418 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:19:02.932827 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 10 00:19:02.933676 systemd[1]: Reached target time-set.target - System Time Set.
Jul 10 00:19:02.971856 systemd-resolved[1403]: Positive Trust Anchors:
Jul 10 00:19:02.971876 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:19:02.971948 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:19:02.986005 systemd-resolved[1403]: Using system hostname 'ci-4344.1.1-n-5827fce73f'.
Jul 10 00:19:02.990177 systemd-networkd[1455]: lo: Link UP
Jul 10 00:19:02.990192 systemd-networkd[1455]: lo: Gained carrier
Jul 10 00:19:02.991347 systemd-networkd[1455]: Enumeration completed
Jul 10 00:19:02.991569 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:19:02.992625 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:19:02.994670 systemd[1]: Reached target network.target - Network.
Jul 10 00:19:02.995068 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:19:02.995515 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:19:02.996053 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 10 00:19:02.996651 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 10 00:19:02.997189 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 10 00:19:02.997820 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 10 00:19:02.998356 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 10 00:19:02.998799 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 10 00:19:03.000528 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 10 00:19:03.000590 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:19:03.001085 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:19:03.003098 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 10 00:19:03.006252 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 10 00:19:03.014010 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 10 00:19:03.017154 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 10 00:19:03.017838 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 10 00:19:03.031709 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 10 00:19:03.033203 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 10 00:19:03.038629 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 10 00:19:03.043157 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 10 00:19:03.046566 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 10 00:19:03.054236 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:19:03.055541 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:19:03.057611 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 10 00:19:03.057657 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 10 00:19:03.061807 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 10 00:19:03.070850 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 10 00:19:03.076861 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 10 00:19:03.091885 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 10 00:19:03.097762 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 10 00:19:03.102893 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 10 00:19:03.105587 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 10 00:19:03.112955 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 10 00:19:03.122913 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 10 00:19:03.125777 jq[1489]: false
Jul 10 00:19:03.130605 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 10 00:19:03.142784 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 10 00:19:03.145339 google_oslogin_nss_cache[1491]: oslogin_cache_refresh[1491]: Refreshing passwd entry cache
Jul 10 00:19:03.145848 oslogin_cache_refresh[1491]: Refreshing passwd entry cache
Jul 10 00:19:03.150835 google_oslogin_nss_cache[1491]: oslogin_cache_refresh[1491]: Failure getting users, quitting
Jul 10 00:19:03.150942 oslogin_cache_refresh[1491]: Failure getting users, quitting
Jul 10 00:19:03.151063 google_oslogin_nss_cache[1491]: oslogin_cache_refresh[1491]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 10 00:19:03.151099 oslogin_cache_refresh[1491]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 10 00:19:03.151198 google_oslogin_nss_cache[1491]: oslogin_cache_refresh[1491]: Refreshing group entry cache
Jul 10 00:19:03.151229 oslogin_cache_refresh[1491]: Refreshing group entry cache
Jul 10 00:19:03.152590 google_oslogin_nss_cache[1491]: oslogin_cache_refresh[1491]: Failure getting groups, quitting
Jul 10 00:19:03.152772 oslogin_cache_refresh[1491]: Failure getting groups, quitting
Jul 10 00:19:03.153417 google_oslogin_nss_cache[1491]: oslogin_cache_refresh[1491]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 10 00:19:03.152881 oslogin_cache_refresh[1491]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 10 00:19:03.153810 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 10 00:19:03.164875 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 10 00:19:03.167299 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 10 00:19:03.169783 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 10 00:19:03.178424 systemd[1]: Starting update-engine.service - Update Engine...
Jul 10 00:19:03.189777 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 10 00:19:03.197491 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 10 00:19:03.198922 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 10 00:19:03.199491 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 10 00:19:03.201310 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 10 00:19:03.206065 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 10 00:19:03.285272 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 10 00:19:03.284636 dbus-daemon[1487]: [system] SELinux support is enabled
Jul 10 00:19:03.293150 jq[1500]: true
Jul 10 00:19:03.303447 extend-filesystems[1490]: Found /dev/vda6
Jul 10 00:19:03.307117 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 10 00:19:03.321167 tar[1502]: linux-amd64/LICENSE
Jul 10 00:19:03.321167 tar[1502]: linux-amd64/helm
Jul 10 00:19:03.307208 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 10 00:19:03.308682 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 10 00:19:03.308732 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 10 00:19:03.310887 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 10 00:19:03.312490 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 10 00:19:03.361894 extend-filesystems[1490]: Found /dev/vda9
Jul 10 00:19:03.380337 update_engine[1497]: I20250710 00:19:03.380198 1497 main.cc:92] Flatcar Update Engine starting
Jul 10 00:19:03.380624 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 10 00:19:03.385862 extend-filesystems[1490]: Checking size of /dev/vda9
Jul 10 00:19:03.402984 systemd[1]: motdgen.service: Deactivated successfully.
Jul 10 00:19:03.404915 update_engine[1497]: I20250710 00:19:03.404778 1497 update_check_scheduler.cc:74] Next update check in 5m52s
Jul 10 00:19:03.410767 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 10 00:19:03.412136 systemd[1]: Started update-engine.service - Update Engine.
Jul 10 00:19:03.427259 (ntainerd)[1524]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 10 00:19:03.460058 coreos-metadata[1486]: Jul 10 00:19:03.459 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 10 00:19:03.463748 coreos-metadata[1486]: Jul 10 00:19:03.463 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
Jul 10 00:19:03.473880 jq[1520]: true
Jul 10 00:19:03.478054 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 10 00:19:03.496086 extend-filesystems[1490]: Resized partition /dev/vda9
Jul 10 00:19:03.502449 extend-filesystems[1537]: resize2fs 1.47.2 (1-Jan-2025)
Jul 10 00:19:03.507607 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jul 10 00:19:03.659231 bash[1551]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 00:19:03.662889 systemd-logind[1496]: New seat seat0.
Jul 10 00:19:03.665207 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 10 00:19:03.679471 systemd[1]: Starting sshkeys.service...
Jul 10 00:19:03.680125 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 10 00:19:03.758676 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 10 00:19:03.773187 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jul 10 00:19:03.774052 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 10 00:19:03.778201 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 10 00:19:03.820779 extend-filesystems[1537]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 10 00:19:03.820779 extend-filesystems[1537]: old_desc_blocks = 1, new_desc_blocks = 8
Jul 10 00:19:03.820779 extend-filesystems[1537]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jul 10 00:19:03.826347 extend-filesystems[1490]: Resized filesystem in /dev/vda9
Jul 10 00:19:03.826313 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 10 00:19:03.826809 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 10 00:19:03.915559 systemd-networkd[1455]: eth1: Configuring with /run/systemd/network/10-7a:70:4f:6c:ec:d0.network.
Jul 10 00:19:03.916607 systemd-networkd[1455]: eth1: Link UP
Jul 10 00:19:03.916937 systemd-networkd[1455]: eth1: Gained carrier
Jul 10 00:19:03.929592 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jul 10 00:19:03.955866 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Jul 10 00:19:03.964832 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jul 10 00:19:03.965470 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 10 00:19:04.013983 systemd-networkd[1455]: eth0: Configuring with /run/systemd/network/10-aa:21:dd:99:97:91.network.
Jul 10 00:19:04.025929 systemd-networkd[1455]: eth0: Link UP
Jul 10 00:19:04.025932 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jul 10 00:19:04.026786 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jul 10 00:19:04.038855 systemd-networkd[1455]: eth0: Gained carrier
Jul 10 00:19:04.047650 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jul 10 00:19:04.050540 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jul 10 00:19:04.059882 kernel: ISO 9660 Extensions: RRIP_1991A
Jul 10 00:19:04.068872 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jul 10 00:19:04.117986 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jul 10 00:19:04.141709 locksmithd[1532]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 10 00:19:04.146349 coreos-metadata[1558]: Jul 10 00:19:04.146 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 10 00:19:04.174528 coreos-metadata[1558]: Jul 10 00:19:04.173 INFO Fetch successful
Jul 10 00:19:04.190971 unknown[1558]: wrote ssh authorized keys file for user: core
Jul 10 00:19:04.204424 kernel: mousedev: PS/2 mouse device common for all mice
Jul 10 00:19:04.231344 update-ssh-keys[1584]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 00:19:04.237021 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 10 00:19:04.245819 systemd[1]: Finished sshkeys.service.
Jul 10 00:19:04.297067 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 00:19:04.304592 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jul 10 00:19:04.313866 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 10 00:19:04.322434 kernel: ACPI: button: Power Button [PWRF]
Jul 10 00:19:04.382974 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 10 00:19:04.394413 containerd[1524]: time="2025-07-10T00:19:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 10 00:19:04.394413 containerd[1524]: time="2025-07-10T00:19:04.393824819Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 10 00:19:04.412947 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 10 00:19:04.443962 containerd[1524]: time="2025-07-10T00:19:04.443804706Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.787µs"
Jul 10 00:19:04.443962 containerd[1524]: time="2025-07-10T00:19:04.443871742Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 10 00:19:04.443962 containerd[1524]: time="2025-07-10T00:19:04.443901568Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 10 00:19:04.444126 containerd[1524]: time="2025-07-10T00:19:04.444092746Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 10 00:19:04.444126 containerd[1524]: time="2025-07-10T00:19:04.444109611Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 10 00:19:04.444212 containerd[1524]: time="2025-07-10T00:19:04.444137765Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 00:19:04.444212 containerd[1524]: time="2025-07-10T00:19:04.444194449Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 00:19:04.444212 containerd[1524]: time="2025-07-10T00:19:04.444206157Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 00:19:04.453568 containerd[1524]: time="2025-07-10T00:19:04.453509430Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 00:19:04.453568 containerd[1524]: time="2025-07-10T00:19:04.453558819Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 00:19:04.453733 containerd[1524]: time="2025-07-10T00:19:04.453584181Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 00:19:04.453733 containerd[1524]: time="2025-07-10T00:19:04.453603826Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 10 00:19:04.453867 containerd[1524]: time="2025-07-10T00:19:04.453824787Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 10 00:19:04.454291 containerd[1524]: time="2025-07-10T00:19:04.454250650Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 00:19:04.454456 containerd[1524]: time="2025-07-10T00:19:04.454318061Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 00:19:04.454456 containerd[1524]: time="2025-07-10T00:19:04.454335852Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 10 00:19:04.456487 containerd[1524]: time="2025-07-10T00:19:04.456424472Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 10 00:19:04.456938 containerd[1524]: time="2025-07-10T00:19:04.456904801Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 10 00:19:04.457078 containerd[1524]: time="2025-07-10T00:19:04.457060367Z" level=info msg="metadata content store policy set" policy=shared
Jul 10 00:19:04.463895 coreos-metadata[1486]: Jul 10 00:19:04.463 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2
Jul 10 00:19:04.470281 containerd[1524]: time="2025-07-10T00:19:04.470195680Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 10 00:19:04.470512 containerd[1524]: time="2025-07-10T00:19:04.470332733Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 10 00:19:04.470512 containerd[1524]: time="2025-07-10T00:19:04.470354775Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 10 00:19:04.470512 containerd[1524]: time="2025-07-10T00:19:04.470447232Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 10 00:19:04.470512 containerd[1524]: time="2025-07-10T00:19:04.470466205Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 10 00:19:04.470512 containerd[1524]: time="2025-07-10T00:19:04.470481232Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 10 00:19:04.470512 containerd[1524]: time="2025-07-10T00:19:04.470496880Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 10 00:19:04.470512 containerd[1524]: time="2025-07-10T00:19:04.470509751Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 10 00:19:04.470760 containerd[1524]: time="2025-07-10T00:19:04.470521766Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 10 00:19:04.470760 containerd[1524]: time="2025-07-10T00:19:04.470537623Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 10 00:19:04.470760 containerd[1524]: time="2025-07-10T00:19:04.470547681Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 10 00:19:04.470760 containerd[1524]: time="2025-07-10T00:19:04.470561599Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 10 00:19:04.470760 containerd[1524]: time="2025-07-10T00:19:04.470748127Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 10 00:19:04.470919 containerd[1524]: time="2025-07-10T00:19:04.470781584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 10 00:19:04.470919 containerd[1524]: time="2025-07-10T00:19:04.470801719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 10 00:19:04.470919 containerd[1524]: time="2025-07-10T00:19:04.470813521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 10 00:19:04.470919 containerd[1524]: time="2025-07-10T00:19:04.470827703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 10 00:19:04.470919 containerd[1524]: time="2025-07-10T00:19:04.470842607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 10 00:19:04.470919 containerd[1524]: time="2025-07-10T00:19:04.470854319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 10 00:19:04.470919 containerd[1524]: time="2025-07-10T00:19:04.470874438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 10 00:19:04.470919 containerd[1524]: time="2025-07-10T00:19:04.470886744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 10 00:19:04.470919 containerd[1524]: time="2025-07-10T00:19:04.470898094Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 10 00:19:04.470919 containerd[1524]: time="2025-07-10T00:19:04.470910502Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 10 00:19:04.471301 containerd[1524]: time="2025-07-10T00:19:04.470998845Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 10 00:19:04.471301 containerd[1524]: time="2025-07-10T00:19:04.471015757Z" level=info msg="Start snapshots syncer"
Jul 10 00:19:04.471467 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 10 00:19:04.476417 containerd[1524]: time="2025-07-10T00:19:04.475632901Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:19:04.476417 containerd[1524]: time="2025-07-10T00:19:04.476063448Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 
00:19:04.476753 containerd[1524]: time="2025-07-10T00:19:04.476128580Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:19:04.476753 containerd[1524]: time="2025-07-10T00:19:04.476240513Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:19:04.479784 coreos-metadata[1486]: Jul 10 00:19:04.478 INFO Fetch successful Jul 10 00:19:04.479344 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:19:04.482483 containerd[1524]: time="2025-07-10T00:19:04.482373048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:19:04.482620 containerd[1524]: time="2025-07-10T00:19:04.482491289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:19:04.482620 containerd[1524]: time="2025-07-10T00:19:04.482506929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:19:04.482620 containerd[1524]: time="2025-07-10T00:19:04.482518246Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:19:04.482620 containerd[1524]: time="2025-07-10T00:19:04.482534032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:19:04.482620 containerd[1524]: time="2025-07-10T00:19:04.482545967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:19:04.482620 containerd[1524]: time="2025-07-10T00:19:04.482558581Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:19:04.482620 containerd[1524]: time="2025-07-10T00:19:04.482587733Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:19:04.482620 
containerd[1524]: time="2025-07-10T00:19:04.482599212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:19:04.482620 containerd[1524]: time="2025-07-10T00:19:04.482611398Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:19:04.485563 containerd[1524]: time="2025-07-10T00:19:04.484132181Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:19:04.485563 containerd[1524]: time="2025-07-10T00:19:04.484258742Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:19:04.485563 containerd[1524]: time="2025-07-10T00:19:04.484273228Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:19:04.485563 containerd[1524]: time="2025-07-10T00:19:04.484285647Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:19:04.485563 containerd[1524]: time="2025-07-10T00:19:04.484294331Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:19:04.485563 containerd[1524]: time="2025-07-10T00:19:04.484304667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:19:04.485563 containerd[1524]: time="2025-07-10T00:19:04.484316795Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:19:04.485563 containerd[1524]: time="2025-07-10T00:19:04.484335428Z" level=info msg="runtime interface created" Jul 10 00:19:04.485563 containerd[1524]: time="2025-07-10T00:19:04.484341543Z" level=info msg="created NRI interface" Jul 10 00:19:04.485563 
containerd[1524]: time="2025-07-10T00:19:04.484350568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:19:04.485563 containerd[1524]: time="2025-07-10T00:19:04.484375100Z" level=info msg="Connect containerd service" Jul 10 00:19:04.485563 containerd[1524]: time="2025-07-10T00:19:04.485259937Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:19:04.495023 containerd[1524]: time="2025-07-10T00:19:04.494941050Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:19:04.528917 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 10 00:19:04.537419 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 10 00:19:04.565954 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:19:04.567062 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:19:04.574928 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:19:04.620007 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 00:19:04.622791 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:19:04.674226 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:19:04.681701 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:19:04.688701 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 00:19:04.689545 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 10 00:19:04.791496 containerd[1524]: time="2025-07-10T00:19:04.790145009Z" level=info msg="Start subscribing containerd event"
Jul 10 00:19:04.791496 containerd[1524]: time="2025-07-10T00:19:04.790244588Z" level=info msg="Start recovering state"
Jul 10 00:19:04.793410 containerd[1524]: time="2025-07-10T00:19:04.793177793Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 10 00:19:04.793410 containerd[1524]: time="2025-07-10T00:19:04.793272816Z" level=info msg="Start event monitor"
Jul 10 00:19:04.793410 containerd[1524]: time="2025-07-10T00:19:04.793300887Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 10 00:19:04.793410 containerd[1524]: time="2025-07-10T00:19:04.793324267Z" level=info msg="Start cni network conf syncer for default"
Jul 10 00:19:04.793410 containerd[1524]: time="2025-07-10T00:19:04.793360024Z" level=info msg="Start streaming server"
Jul 10 00:19:04.793737 containerd[1524]: time="2025-07-10T00:19:04.793706120Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 10 00:19:04.793808 containerd[1524]: time="2025-07-10T00:19:04.793797521Z" level=info msg="runtime interface starting up..."
Jul 10 00:19:04.793850 containerd[1524]: time="2025-07-10T00:19:04.793841792Z" level=info msg="starting plugins..."
Jul 10 00:19:04.793925 containerd[1524]: time="2025-07-10T00:19:04.793915043Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 10 00:19:04.797517 systemd[1]: Started containerd.service - containerd container runtime.
Jul 10 00:19:04.801110 containerd[1524]: time="2025-07-10T00:19:04.797702420Z" level=info msg="containerd successfully booted in 0.418977s"
Jul 10 00:19:04.940809 systemd-networkd[1455]: eth1: Gained IPv6LL
Jul 10 00:19:04.949516 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 10 00:19:04.950690 systemd[1]: Reached target network-online.target - Network is Online.
Jul 10 00:19:04.952549 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jul 10 00:19:04.956992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:19:04.962533 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 10 00:19:05.033360 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 10 00:19:05.083419 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 10 00:19:05.089229 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 10 00:19:05.137033 kernel: Console: switching to colour dummy device 80x25
Jul 10 00:19:05.137141 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 10 00:19:05.137159 kernel: [drm] features: -context_init
Jul 10 00:19:05.178052 kernel: [drm] number of scanouts: 1
Jul 10 00:19:05.178139 kernel: [drm] number of cap sets: 0
Jul 10 00:19:05.181494 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jul 10 00:19:05.258366 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 10 00:19:05.259023 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 10 00:19:05.264842 systemd[1]: Started sshd@0-164.90.146.220:22-147.75.109.163:57164.service - OpenSSH per-connection server daemon (147.75.109.163:57164).
Jul 10 00:19:05.314430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:19:05.345159 systemd-logind[1496]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 10 00:19:05.450892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:19:05.451688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:19:05.456021 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:19:05.466089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:19:05.485606 sshd[1654]: Accepted publickey for core from 147.75.109.163 port 57164 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:19:05.491578 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:05.513448 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 10 00:19:05.516520 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 10 00:19:05.576173 systemd-logind[1496]: New session 1 of user core.
Jul 10 00:19:05.577546 systemd-networkd[1455]: eth0: Gained IPv6LL
Jul 10 00:19:05.578997 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jul 10 00:19:05.592582 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 10 00:19:05.603308 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 10 00:19:05.629545 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:19:05.656225 systemd-logind[1496]: New session c1 of user core.
Jul 10 00:19:05.682695 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:19:05.734880 kernel: EDAC MC: Ver: 3.0.0
Jul 10 00:19:05.870663 tar[1502]: linux-amd64/README.md
Jul 10 00:19:05.910495 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 10 00:19:05.933151 systemd[1668]: Queued start job for default target default.target.
Jul 10 00:19:05.940355 systemd[1668]: Created slice app.slice - User Application Slice.
Jul 10 00:19:05.940410 systemd[1668]: Reached target paths.target - Paths.
Jul 10 00:19:05.940470 systemd[1668]: Reached target timers.target - Timers.
Jul 10 00:19:05.943534 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 10 00:19:05.968939 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 10 00:19:05.969042 systemd[1668]: Reached target sockets.target - Sockets.
Jul 10 00:19:05.969278 systemd[1668]: Reached target basic.target - Basic System.
Jul 10 00:19:05.969472 systemd[1668]: Reached target default.target - Main User Target.
Jul 10 00:19:05.969533 systemd[1668]: Startup finished in 282ms.
Jul 10 00:19:05.969841 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 10 00:19:05.978990 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 10 00:19:06.064500 systemd[1]: Started sshd@1-164.90.146.220:22-147.75.109.163:47820.service - OpenSSH per-connection server daemon (147.75.109.163:47820).
Jul 10 00:19:06.165270 sshd[1686]: Accepted publickey for core from 147.75.109.163 port 47820 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:19:06.167051 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:06.175457 systemd-logind[1496]: New session 2 of user core.
Jul 10 00:19:06.183680 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 10 00:19:06.260053 sshd[1688]: Connection closed by 147.75.109.163 port 47820
Jul 10 00:19:06.260844 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:06.275764 systemd[1]: sshd@1-164.90.146.220:22-147.75.109.163:47820.service: Deactivated successfully.
Jul 10 00:19:06.279677 systemd[1]: session-2.scope: Deactivated successfully.
Jul 10 00:19:06.284503 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit.
Jul 10 00:19:06.287666 systemd[1]: Started sshd@2-164.90.146.220:22-147.75.109.163:47824.service - OpenSSH per-connection server daemon (147.75.109.163:47824).
Jul 10 00:19:06.291279 systemd-logind[1496]: Removed session 2.
Jul 10 00:19:06.358982 sshd[1694]: Accepted publickey for core from 147.75.109.163 port 47824 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:19:06.360409 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:06.375320 systemd-logind[1496]: New session 3 of user core.
Jul 10 00:19:06.380897 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 10 00:19:06.453043 sshd[1696]: Connection closed by 147.75.109.163 port 47824
Jul 10 00:19:06.455810 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:06.463198 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit.
Jul 10 00:19:06.464022 systemd[1]: sshd@2-164.90.146.220:22-147.75.109.163:47824.service: Deactivated successfully.
Jul 10 00:19:06.468657 systemd[1]: session-3.scope: Deactivated successfully.
Jul 10 00:19:06.472108 systemd-logind[1496]: Removed session 3.
Jul 10 00:19:06.751215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:19:06.752000 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 10 00:19:06.754041 systemd[1]: Startup finished in 4.216s (kernel) + 6.659s (initrd) + 9.445s (userspace) = 20.321s.
Jul 10 00:19:06.759532 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:19:07.603598 kubelet[1706]: E0710 00:19:07.603524 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:19:07.607507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:19:07.607677 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:19:07.608289 systemd[1]: kubelet.service: Consumed 1.462s CPU time, 266.5M memory peak.
Jul 10 00:19:16.473267 systemd[1]: Started sshd@3-164.90.146.220:22-147.75.109.163:52762.service - OpenSSH per-connection server daemon (147.75.109.163:52762).
Jul 10 00:19:16.558910 sshd[1718]: Accepted publickey for core from 147.75.109.163 port 52762 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:19:16.560916 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:16.568575 systemd-logind[1496]: New session 4 of user core.
Jul 10 00:19:16.578702 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 10 00:19:16.640220 sshd[1720]: Connection closed by 147.75.109.163 port 52762
Jul 10 00:19:16.641034 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:16.658167 systemd[1]: sshd@3-164.90.146.220:22-147.75.109.163:52762.service: Deactivated successfully.
Jul 10 00:19:16.660871 systemd[1]: session-4.scope: Deactivated successfully.
Jul 10 00:19:16.662500 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit.
Jul 10 00:19:16.665893 systemd[1]: Started sshd@4-164.90.146.220:22-147.75.109.163:52774.service - OpenSSH per-connection server daemon (147.75.109.163:52774).
Jul 10 00:19:16.668041 systemd-logind[1496]: Removed session 4.
Jul 10 00:19:16.728545 sshd[1726]: Accepted publickey for core from 147.75.109.163 port 52774 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:19:16.730482 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:16.737351 systemd-logind[1496]: New session 5 of user core.
Jul 10 00:19:16.744659 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 10 00:19:16.805504 sshd[1728]: Connection closed by 147.75.109.163 port 52774
Jul 10 00:19:16.805308 sshd-session[1726]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:16.816669 systemd[1]: sshd@4-164.90.146.220:22-147.75.109.163:52774.service: Deactivated successfully.
Jul 10 00:19:16.819472 systemd[1]: session-5.scope: Deactivated successfully.
Jul 10 00:19:16.820853 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit.
Jul 10 00:19:16.825250 systemd[1]: Started sshd@5-164.90.146.220:22-147.75.109.163:52784.service - OpenSSH per-connection server daemon (147.75.109.163:52784).
Jul 10 00:19:16.826404 systemd-logind[1496]: Removed session 5.
Jul 10 00:19:16.889033 sshd[1734]: Accepted publickey for core from 147.75.109.163 port 52784 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:19:16.891014 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:16.899459 systemd-logind[1496]: New session 6 of user core.
Jul 10 00:19:16.917819 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 10 00:19:16.982504 sshd[1736]: Connection closed by 147.75.109.163 port 52784
Jul 10 00:19:16.982496 sshd-session[1734]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:16.999055 systemd[1]: sshd@5-164.90.146.220:22-147.75.109.163:52784.service: Deactivated successfully.
Jul 10 00:19:17.001655 systemd[1]: session-6.scope: Deactivated successfully.
Jul 10 00:19:17.003690 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit.
Jul 10 00:19:17.007368 systemd[1]: Started sshd@6-164.90.146.220:22-147.75.109.163:52788.service - OpenSSH per-connection server daemon (147.75.109.163:52788).
Jul 10 00:19:17.009645 systemd-logind[1496]: Removed session 6.
Jul 10 00:19:17.067821 sshd[1742]: Accepted publickey for core from 147.75.109.163 port 52788 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:19:17.070214 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:17.079145 systemd-logind[1496]: New session 7 of user core.
Jul 10 00:19:17.085811 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 10 00:19:17.163358 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 10 00:19:17.164286 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:19:17.184450 sudo[1745]: pam_unix(sudo:session): session closed for user root
Jul 10 00:19:17.190259 sshd[1744]: Connection closed by 147.75.109.163 port 52788
Jul 10 00:19:17.188961 sshd-session[1742]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:17.209461 systemd[1]: sshd@6-164.90.146.220:22-147.75.109.163:52788.service: Deactivated successfully.
Jul 10 00:19:17.211966 systemd[1]: session-7.scope: Deactivated successfully.
Jul 10 00:19:17.213675 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit.
Jul 10 00:19:17.219092 systemd[1]: Started sshd@7-164.90.146.220:22-147.75.109.163:52802.service - OpenSSH per-connection server daemon (147.75.109.163:52802).
Jul 10 00:19:17.221634 systemd-logind[1496]: Removed session 7.
Jul 10 00:19:17.307337 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 52802 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:19:17.310281 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:17.318238 systemd-logind[1496]: New session 8 of user core.
Jul 10 00:19:17.330747 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 10 00:19:17.393740 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 10 00:19:17.394102 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:19:17.401593 sudo[1755]: pam_unix(sudo:session): session closed for user root
Jul 10 00:19:17.411351 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 10 00:19:17.412049 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:19:17.429930 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 00:19:17.501354 augenrules[1777]: No rules
Jul 10 00:19:17.502513 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 00:19:17.502857 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 00:19:17.507190 sudo[1754]: pam_unix(sudo:session): session closed for user root
Jul 10 00:19:17.511510 sshd[1753]: Connection closed by 147.75.109.163 port 52802
Jul 10 00:19:17.512716 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:17.531891 systemd[1]: sshd@7-164.90.146.220:22-147.75.109.163:52802.service: Deactivated successfully.
Jul 10 00:19:17.535039 systemd[1]: session-8.scope: Deactivated successfully.
Jul 10 00:19:17.538133 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit.
Jul 10 00:19:17.544960 systemd[1]: Started sshd@8-164.90.146.220:22-147.75.109.163:52804.service - OpenSSH per-connection server daemon (147.75.109.163:52804).
Jul 10 00:19:17.546008 systemd-logind[1496]: Removed session 8.
Jul 10 00:19:17.616220 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 52804 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:19:17.618309 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:17.619700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:19:17.622127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:19:17.630631 systemd-logind[1496]: New session 9 of user core.
Jul 10 00:19:17.636814 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 10 00:19:17.706495 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 10 00:19:17.706881 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:19:17.831296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:19:17.846961 (kubelet)[1806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:19:17.948504 kubelet[1806]: E0710 00:19:17.948412 1806 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:19:17.954971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:19:17.955209 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:19:17.955674 systemd[1]: kubelet.service: Consumed 237ms CPU time, 110.7M memory peak.
Jul 10 00:19:18.391523 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 10 00:19:18.414221 (dockerd)[1822]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 10 00:19:18.847672 dockerd[1822]: time="2025-07-10T00:19:18.847572027Z" level=info msg="Starting up"
Jul 10 00:19:18.852992 dockerd[1822]: time="2025-07-10T00:19:18.851263866Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 10 00:19:18.894728 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2178125791-merged.mount: Deactivated successfully.
Jul 10 00:19:18.934080 dockerd[1822]: time="2025-07-10T00:19:18.933365009Z" level=info msg="Loading containers: start."
Jul 10 00:19:18.947473 kernel: Initializing XFRM netlink socket
Jul 10 00:19:19.259468 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jul 10 00:19:19.945006 systemd-resolved[1403]: Clock change detected. Flushing caches.
Jul 10 00:19:19.945257 systemd-timesyncd[1425]: Contacted time server 137.110.222.27:123 (2.flatcar.pool.ntp.org).
Jul 10 00:19:19.945339 systemd-timesyncd[1425]: Initial clock synchronization to Thu 2025-07-10 00:19:19.944756 UTC.
Jul 10 00:19:20.000395 systemd-networkd[1455]: docker0: Link UP
Jul 10 00:19:20.008634 dockerd[1822]: time="2025-07-10T00:19:20.008543713Z" level=info msg="Loading containers: done."
Jul 10 00:19:20.032284 dockerd[1822]: time="2025-07-10T00:19:20.032020076Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 10 00:19:20.032284 dockerd[1822]: time="2025-07-10T00:19:20.032160118Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 10 00:19:20.032572 dockerd[1822]: time="2025-07-10T00:19:20.032396681Z" level=info msg="Initializing buildkit"
Jul 10 00:19:20.032641 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3894100985-merged.mount: Deactivated successfully.
Jul 10 00:19:20.063377 dockerd[1822]: time="2025-07-10T00:19:20.063261675Z" level=info msg="Completed buildkit initialization"
Jul 10 00:19:20.074390 dockerd[1822]: time="2025-07-10T00:19:20.074312469Z" level=info msg="Daemon has completed initialization"
Jul 10 00:19:20.074621 dockerd[1822]: time="2025-07-10T00:19:20.074396009Z" level=info msg="API listen on /run/docker.sock"
Jul 10 00:19:20.075103 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 10 00:19:20.938768 containerd[1524]: time="2025-07-10T00:19:20.938656301Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 10 00:19:21.558446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119716125.mount: Deactivated successfully.
Jul 10 00:19:22.074458 systemd[1]: Started sshd@9-164.90.146.220:22-144.91.80.232:56678.service - OpenSSH per-connection server daemon (144.91.80.232:56678).
Jul 10 00:19:22.734421 sshd[2087]: Invalid user from 144.91.80.232 port 56678
Jul 10 00:19:23.143900 containerd[1524]: time="2025-07-10T00:19:23.143789915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:23.146010 containerd[1524]: time="2025-07-10T00:19:23.145563918Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099"
Jul 10 00:19:23.146797 containerd[1524]: time="2025-07-10T00:19:23.146710925Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:23.150952 containerd[1524]: time="2025-07-10T00:19:23.150838809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:23.152666 containerd[1524]: time="2025-07-10T00:19:23.152262469Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.212751695s"
Jul 10 00:19:23.152666 containerd[1524]: time="2025-07-10T00:19:23.152325524Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 10 00:19:23.153270 containerd[1524]: time="2025-07-10T00:19:23.153223389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 10 00:19:24.745970 containerd[1524]: time="2025-07-10T00:19:24.744769768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:24.745970 containerd[1524]: time="2025-07-10T00:19:24.745110265Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946"
Jul 10 00:19:24.747046 containerd[1524]: time="2025-07-10T00:19:24.746999424Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:24.750812 containerd[1524]: time="2025-07-10T00:19:24.750746786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:24.752305 containerd[1524]: time="2025-07-10T00:19:24.752243700Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.59897167s"
Jul 10 00:19:24.752305 containerd[1524]: time="2025-07-10T00:19:24.752297817Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 10 00:19:24.753188 containerd[1524]: time="2025-07-10T00:19:24.753139742Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 10 00:19:26.161001 containerd[1524]: time="2025-07-10T00:19:26.160087720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:26.161899 containerd[1524]: time="2025-07-10T00:19:26.161830947Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055"
Jul 10 00:19:26.162986 containerd[1524]: time="2025-07-10T00:19:26.162323186Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:26.167339 containerd[1524]: time="2025-07-10T00:19:26.167217469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:26.169064 containerd[1524]: time="2025-07-10T00:19:26.168571749Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.414565385s"
Jul 10 00:19:26.169064 containerd[1524]: time="2025-07-10T00:19:26.168666347Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 10 00:19:26.169363 containerd[1524]: time="2025-07-10T00:19:26.169327966Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 10 00:19:26.182112 systemd-resolved[1403]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Jul 10 00:19:27.334124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076188465.mount: Deactivated successfully.
Jul 10 00:19:28.030208 containerd[1524]: time="2025-07-10T00:19:28.030076459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:28.031726 containerd[1524]: time="2025-07-10T00:19:28.031405768Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746"
Jul 10 00:19:28.032620 containerd[1524]: time="2025-07-10T00:19:28.032518812Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:28.034898 containerd[1524]: time="2025-07-10T00:19:28.034845179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:28.036129 containerd[1524]: time="2025-07-10T00:19:28.036078802Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.866710733s"
Jul 10 00:19:28.036369 containerd[1524]: time="2025-07-10T00:19:28.036328182Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 10 00:19:28.037583 containerd[1524]: time="2025-07-10T00:19:28.037329896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 10 00:19:28.550499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461728402.mount: Deactivated successfully.
Jul 10 00:19:28.816659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 00:19:28.819783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:19:29.082670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:19:29.096145 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:19:29.181836 kubelet[2157]: E0710 00:19:29.181768 2157 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:19:29.188091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:19:29.188279 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:19:29.189454 systemd[1]: kubelet.service: Consumed 252ms CPU time, 110.4M memory peak. Jul 10 00:19:29.276172 systemd-resolved[1403]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jul 10 00:19:29.762567 containerd[1524]: time="2025-07-10T00:19:29.762422633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:19:29.763844 containerd[1524]: time="2025-07-10T00:19:29.763553793Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 10 00:19:29.764707 containerd[1524]: time="2025-07-10T00:19:29.764642568Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:19:29.767722 containerd[1524]: time="2025-07-10T00:19:29.767658919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:19:29.769471 containerd[1524]: time="2025-07-10T00:19:29.769158149Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.731783755s" Jul 10 00:19:29.769471 containerd[1524]: time="2025-07-10T00:19:29.769210948Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 10 00:19:29.769964 containerd[1524]: time="2025-07-10T00:19:29.769869610Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:19:30.058645 sshd[2087]: Connection closed by invalid user 144.91.80.232 port 56678 [preauth] Jul 10 00:19:30.060046 systemd[1]: sshd@9-164.90.146.220:22-144.91.80.232:56678.service: 
Deactivated successfully. Jul 10 00:19:30.294484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028712586.mount: Deactivated successfully. Jul 10 00:19:30.302275 containerd[1524]: time="2025-07-10T00:19:30.301263331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:19:30.302275 containerd[1524]: time="2025-07-10T00:19:30.302208126Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 10 00:19:30.303347 containerd[1524]: time="2025-07-10T00:19:30.303289540Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:19:30.306426 containerd[1524]: time="2025-07-10T00:19:30.306364126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:19:30.307843 containerd[1524]: time="2025-07-10T00:19:30.307767413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 537.845676ms" Jul 10 00:19:30.308193 containerd[1524]: time="2025-07-10T00:19:30.308091820Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 10 00:19:30.309726 containerd[1524]: time="2025-07-10T00:19:30.308995911Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 10 00:19:30.796876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233427857.mount: Deactivated successfully. Jul 10 00:19:32.720001 containerd[1524]: time="2025-07-10T00:19:32.719019528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:19:32.720580 containerd[1524]: time="2025-07-10T00:19:32.720371835Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 10 00:19:32.721265 containerd[1524]: time="2025-07-10T00:19:32.721219646Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:19:32.724452 containerd[1524]: time="2025-07-10T00:19:32.724397104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:19:32.725953 containerd[1524]: time="2025-07-10T00:19:32.725891898Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.416849925s" Jul 10 00:19:32.726135 containerd[1524]: time="2025-07-10T00:19:32.726117560Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 10 00:19:36.457796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:19:36.458571 systemd[1]: kubelet.service: Consumed 252ms CPU time, 110.4M memory peak. 
Jul 10 00:19:36.462123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:19:36.510578 systemd[1]: Reload requested from client PID 2262 ('systemctl') (unit session-9.scope)... Jul 10 00:19:36.510603 systemd[1]: Reloading... Jul 10 00:19:36.680977 zram_generator::config[2308]: No configuration found. Jul 10 00:19:36.841613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:19:36.995361 systemd[1]: Reloading finished in 483 ms. Jul 10 00:19:37.072021 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 00:19:37.072145 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 00:19:37.072675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:19:37.073496 systemd[1]: kubelet.service: Consumed 139ms CPU time, 97.8M memory peak. Jul 10 00:19:37.075881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:19:37.268517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:19:37.282467 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:19:37.345184 kubelet[2359]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:19:37.345797 kubelet[2359]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 10 00:19:37.345900 kubelet[2359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:19:37.347767 kubelet[2359]: I0710 00:19:37.347671 2359 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:19:38.136751 kubelet[2359]: I0710 00:19:38.136631 2359 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 00:19:38.136751 kubelet[2359]: I0710 00:19:38.136687 2359 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:19:38.137840 kubelet[2359]: I0710 00:19:38.137759 2359 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 00:19:38.179322 kubelet[2359]: I0710 00:19:38.179112 2359 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:19:38.181235 kubelet[2359]: E0710 00:19:38.181027 2359 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://164.90.146.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 164.90.146.220:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 00:19:38.196978 kubelet[2359]: I0710 00:19:38.196856 2359 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:19:38.206445 kubelet[2359]: I0710 00:19:38.206375 2359 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:19:38.208183 kubelet[2359]: I0710 00:19:38.208082 2359 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:19:38.211956 kubelet[2359]: I0710 00:19:38.208156 2359 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-5827fce73f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:19:38.211956 kubelet[2359]: I0710 00:19:38.211921 2359 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 
00:19:38.211956 kubelet[2359]: I0710 00:19:38.211954 2359 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 00:19:38.213142 kubelet[2359]: I0710 00:19:38.213081 2359 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:19:38.217967 kubelet[2359]: I0710 00:19:38.217529 2359 kubelet.go:480] "Attempting to sync node with API server" Jul 10 00:19:38.217967 kubelet[2359]: I0710 00:19:38.217603 2359 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:19:38.217967 kubelet[2359]: I0710 00:19:38.217645 2359 kubelet.go:386] "Adding apiserver pod source" Jul 10 00:19:38.217967 kubelet[2359]: I0710 00:19:38.217666 2359 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:19:38.228201 kubelet[2359]: E0710 00:19:38.227692 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://164.90.146.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-5827fce73f&limit=500&resourceVersion=0\": dial tcp 164.90.146.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 00:19:38.230884 kubelet[2359]: E0710 00:19:38.230833 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://164.90.146.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.90.146.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 00:19:38.231137 kubelet[2359]: I0710 00:19:38.231059 2359 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:19:38.232170 kubelet[2359]: I0710 00:19:38.231891 2359 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 
00:19:38.234970 kubelet[2359]: W0710 00:19:38.232962 2359 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:19:38.238125 kubelet[2359]: I0710 00:19:38.238085 2359 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:19:38.238125 kubelet[2359]: I0710 00:19:38.238159 2359 server.go:1289] "Started kubelet" Jul 10 00:19:38.239274 kubelet[2359]: I0710 00:19:38.239189 2359 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:19:38.246767 kubelet[2359]: I0710 00:19:38.246704 2359 server.go:317] "Adding debug handlers to kubelet server" Jul 10 00:19:38.250448 kubelet[2359]: I0710 00:19:38.250307 2359 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:19:38.250904 kubelet[2359]: I0710 00:19:38.250863 2359 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:19:38.256993 kubelet[2359]: E0710 00:19:38.255281 2359 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.90.146.220:6443/api/v1/namespaces/default/events\": dial tcp 164.90.146.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-n-5827fce73f.1850bbd4ed6ff3ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-5827fce73f,UID:ci-4344.1.1-n-5827fce73f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-5827fce73f,},FirstTimestamp:2025-07-10 00:19:38.23812089 +0000 UTC m=+0.949503723,LastTimestamp:2025-07-10 00:19:38.23812089 +0000 UTC m=+0.949503723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-5827fce73f,}" Jul 10 00:19:38.260860 
kubelet[2359]: I0710 00:19:38.260625 2359 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:19:38.262445 kubelet[2359]: I0710 00:19:38.262401 2359 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:19:38.266968 kubelet[2359]: E0710 00:19:38.266810 2359 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5827fce73f\" not found" Jul 10 00:19:38.266968 kubelet[2359]: I0710 00:19:38.266887 2359 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:19:38.267536 kubelet[2359]: I0710 00:19:38.267512 2359 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:19:38.267729 kubelet[2359]: I0710 00:19:38.267716 2359 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:19:38.268476 kubelet[2359]: E0710 00:19:38.268438 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://164.90.146.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.90.146.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 00:19:38.274234 kubelet[2359]: E0710 00:19:38.274076 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.146.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-5827fce73f?timeout=10s\": dial tcp 164.90.146.220:6443: connect: connection refused" interval="200ms" Jul 10 00:19:38.274813 kubelet[2359]: I0710 00:19:38.274774 2359 factory.go:223] Registration of the systemd container factory successfully Jul 10 00:19:38.275128 kubelet[2359]: I0710 00:19:38.275077 2359 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: 
no such file or directory Jul 10 00:19:38.283997 kubelet[2359]: E0710 00:19:38.283963 2359 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:19:38.284517 kubelet[2359]: I0710 00:19:38.284490 2359 factory.go:223] Registration of the containerd container factory successfully Jul 10 00:19:38.303968 kubelet[2359]: I0710 00:19:38.303178 2359 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 00:19:38.304827 kubelet[2359]: I0710 00:19:38.304789 2359 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 00:19:38.304827 kubelet[2359]: I0710 00:19:38.304826 2359 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 00:19:38.304972 kubelet[2359]: I0710 00:19:38.304854 2359 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:19:38.304972 kubelet[2359]: I0710 00:19:38.304865 2359 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 00:19:38.305156 kubelet[2359]: E0710 00:19:38.305127 2359 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:19:38.315856 kubelet[2359]: E0710 00:19:38.315800 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://164.90.146.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.90.146.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 00:19:38.324741 kubelet[2359]: I0710 00:19:38.324698 2359 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:19:38.324741 kubelet[2359]: I0710 00:19:38.324742 2359 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:19:38.325007 
kubelet[2359]: I0710 00:19:38.324772 2359 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:19:38.326379 kubelet[2359]: I0710 00:19:38.326342 2359 policy_none.go:49] "None policy: Start" Jul 10 00:19:38.326379 kubelet[2359]: I0710 00:19:38.326383 2359 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:19:38.326535 kubelet[2359]: I0710 00:19:38.326401 2359 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:19:38.335033 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 00:19:38.352458 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:19:38.360129 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 00:19:38.367787 kubelet[2359]: E0710 00:19:38.367719 2359 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5827fce73f\" not found" Jul 10 00:19:38.370953 kubelet[2359]: E0710 00:19:38.370634 2359 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 00:19:38.370953 kubelet[2359]: I0710 00:19:38.370867 2359 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:19:38.370953 kubelet[2359]: I0710 00:19:38.370880 2359 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:19:38.371415 kubelet[2359]: I0710 00:19:38.371395 2359 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:19:38.374668 kubelet[2359]: E0710 00:19:38.374537 2359 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 00:19:38.374668 kubelet[2359]: E0710 00:19:38.374583 2359 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-n-5827fce73f\" not found" Jul 10 00:19:38.422744 systemd[1]: Created slice kubepods-burstable-pod2073ceac392c2727c8924d0020fecf27.slice - libcontainer container kubepods-burstable-pod2073ceac392c2727c8924d0020fecf27.slice. Jul 10 00:19:38.430970 kubelet[2359]: E0710 00:19:38.430572 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-5827fce73f\" not found" node="ci-4344.1.1-n-5827fce73f" Jul 10 00:19:38.435811 systemd[1]: Created slice kubepods-burstable-pod262032abb11ed7f03142bbe218009979.slice - libcontainer container kubepods-burstable-pod262032abb11ed7f03142bbe218009979.slice. Jul 10 00:19:38.445802 kubelet[2359]: E0710 00:19:38.445760 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-5827fce73f\" not found" node="ci-4344.1.1-n-5827fce73f" Jul 10 00:19:38.450792 systemd[1]: Created slice kubepods-burstable-pod079c3fffeacfc018d419efd55b515d50.slice - libcontainer container kubepods-burstable-pod079c3fffeacfc018d419efd55b515d50.slice. 
Jul 10 00:19:38.454605 kubelet[2359]: E0710 00:19:38.454563 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-5827fce73f\" not found" node="ci-4344.1.1-n-5827fce73f" Jul 10 00:19:38.469151 kubelet[2359]: I0710 00:19:38.469079 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/079c3fffeacfc018d419efd55b515d50-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-5827fce73f\" (UID: \"079c3fffeacfc018d419efd55b515d50\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f" Jul 10 00:19:38.469151 kubelet[2359]: I0710 00:19:38.469137 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2073ceac392c2727c8924d0020fecf27-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-5827fce73f\" (UID: \"2073ceac392c2727c8924d0020fecf27\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f" Jul 10 00:19:38.469151 kubelet[2359]: I0710 00:19:38.469166 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/262032abb11ed7f03142bbe218009979-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-5827fce73f\" (UID: \"262032abb11ed7f03142bbe218009979\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-5827fce73f" Jul 10 00:19:38.469512 kubelet[2359]: I0710 00:19:38.469189 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/079c3fffeacfc018d419efd55b515d50-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-5827fce73f\" (UID: \"079c3fffeacfc018d419efd55b515d50\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f" Jul 10 00:19:38.469512 
kubelet[2359]: I0710 00:19:38.469212 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2073ceac392c2727c8924d0020fecf27-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-5827fce73f\" (UID: \"2073ceac392c2727c8924d0020fecf27\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:38.469512 kubelet[2359]: I0710 00:19:38.469237 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2073ceac392c2727c8924d0020fecf27-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-5827fce73f\" (UID: \"2073ceac392c2727c8924d0020fecf27\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:38.469512 kubelet[2359]: I0710 00:19:38.469264 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2073ceac392c2727c8924d0020fecf27-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-5827fce73f\" (UID: \"2073ceac392c2727c8924d0020fecf27\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:38.469512 kubelet[2359]: I0710 00:19:38.469289 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2073ceac392c2727c8924d0020fecf27-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-5827fce73f\" (UID: \"2073ceac392c2727c8924d0020fecf27\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:38.469712 kubelet[2359]: I0710 00:19:38.469344 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/079c3fffeacfc018d419efd55b515d50-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-5827fce73f\" (UID: \"079c3fffeacfc018d419efd55b515d50\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:38.472872 kubelet[2359]: I0710 00:19:38.472825 2359 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:38.473626 kubelet[2359]: E0710 00:19:38.473583 2359 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.90.146.220:6443/api/v1/nodes\": dial tcp 164.90.146.220:6443: connect: connection refused" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:38.474797 kubelet[2359]: E0710 00:19:38.474746 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.146.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-5827fce73f?timeout=10s\": dial tcp 164.90.146.220:6443: connect: connection refused" interval="400ms"
Jul 10 00:19:38.675464 kubelet[2359]: I0710 00:19:38.674871 2359 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:38.675628 kubelet[2359]: E0710 00:19:38.675501 2359 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.90.146.220:6443/api/v1/nodes\": dial tcp 164.90.146.220:6443: connect: connection refused" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:38.732254 kubelet[2359]: E0710 00:19:38.732187 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:38.733633 containerd[1524]: time="2025-07-10T00:19:38.733216793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-5827fce73f,Uid:2073ceac392c2727c8924d0020fecf27,Namespace:kube-system,Attempt:0,}"
Jul 10 00:19:38.746669 kubelet[2359]: E0710 00:19:38.746616 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:38.755465 kubelet[2359]: E0710 00:19:38.755422 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:38.768476 containerd[1524]: time="2025-07-10T00:19:38.768349893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-5827fce73f,Uid:079c3fffeacfc018d419efd55b515d50,Namespace:kube-system,Attempt:0,}"
Jul 10 00:19:38.769571 containerd[1524]: time="2025-07-10T00:19:38.768833811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-5827fce73f,Uid:262032abb11ed7f03142bbe218009979,Namespace:kube-system,Attempt:0,}"
Jul 10 00:19:38.867538 systemd[1]: Started sshd@10-164.90.146.220:22-80.94.95.115:28530.service - OpenSSH per-connection server daemon (80.94.95.115:28530).
Jul 10 00:19:38.875980 kubelet[2359]: E0710 00:19:38.875382 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.146.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-5827fce73f?timeout=10s\": dial tcp 164.90.146.220:6443: connect: connection refused" interval="800ms"
Jul 10 00:19:38.890505 containerd[1524]: time="2025-07-10T00:19:38.889852677Z" level=info msg="connecting to shim 7d42f5e242e2e6e22283090d2a81567f4bf26a30753d9cc7c9a26b582e37013a" address="unix:///run/containerd/s/5c887b7c4656fbfbaf570603a25bc1ff487df30ae849917066c6173583a415dd" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:19:38.890741 containerd[1524]: time="2025-07-10T00:19:38.890187457Z" level=info msg="connecting to shim c58fedd5b0dcc9f17cb12d75a261b454bcdc977ce6f9a1e7e33ad2f5055ef0f7" address="unix:///run/containerd/s/4a0ec47f447c2fc1a48720b0eb3211e52dd505a8645a87c65085ca24cb675e34" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:19:38.895626 containerd[1524]: time="2025-07-10T00:19:38.895574225Z" level=info msg="connecting to shim 8fd59dc763a34aaef5455f8ba9b4807dcb4be2180adaed736dbb8d44d9579be5" address="unix:///run/containerd/s/365ec09c4cce9b49d121ea1799916f338ed378bcefdb21f41f8102378e0a77d9" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:19:39.022350 systemd[1]: Started cri-containerd-7d42f5e242e2e6e22283090d2a81567f4bf26a30753d9cc7c9a26b582e37013a.scope - libcontainer container 7d42f5e242e2e6e22283090d2a81567f4bf26a30753d9cc7c9a26b582e37013a.
Jul 10 00:19:39.038081 systemd[1]: Started cri-containerd-8fd59dc763a34aaef5455f8ba9b4807dcb4be2180adaed736dbb8d44d9579be5.scope - libcontainer container 8fd59dc763a34aaef5455f8ba9b4807dcb4be2180adaed736dbb8d44d9579be5.
Jul 10 00:19:39.043634 systemd[1]: Started cri-containerd-c58fedd5b0dcc9f17cb12d75a261b454bcdc977ce6f9a1e7e33ad2f5055ef0f7.scope - libcontainer container c58fedd5b0dcc9f17cb12d75a261b454bcdc977ce6f9a1e7e33ad2f5055ef0f7.
Jul 10 00:19:39.058288 kubelet[2359]: E0710 00:19:39.056870 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://164.90.146.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-5827fce73f&limit=500&resourceVersion=0\": dial tcp 164.90.146.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 10 00:19:39.082434 kubelet[2359]: I0710 00:19:39.081647 2359 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:39.087573 kubelet[2359]: E0710 00:19:39.087502 2359 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.90.146.220:6443/api/v1/nodes\": dial tcp 164.90.146.220:6443: connect: connection refused" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:39.143459 containerd[1524]: time="2025-07-10T00:19:39.142781001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-5827fce73f,Uid:079c3fffeacfc018d419efd55b515d50,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d42f5e242e2e6e22283090d2a81567f4bf26a30753d9cc7c9a26b582e37013a\""
Jul 10 00:19:39.146136 kubelet[2359]: E0710 00:19:39.146081 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:39.150980 kubelet[2359]: E0710 00:19:39.150424 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://164.90.146.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.90.146.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 10 00:19:39.151416 kubelet[2359]: E0710 00:19:39.151210 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://164.90.146.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.90.146.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 10 00:19:39.155098 containerd[1524]: time="2025-07-10T00:19:39.154688655Z" level=info msg="CreateContainer within sandbox \"7d42f5e242e2e6e22283090d2a81567f4bf26a30753d9cc7c9a26b582e37013a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 10 00:19:39.156779 containerd[1524]: time="2025-07-10T00:19:39.156728602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-5827fce73f,Uid:2073ceac392c2727c8924d0020fecf27,Namespace:kube-system,Attempt:0,} returns sandbox id \"c58fedd5b0dcc9f17cb12d75a261b454bcdc977ce6f9a1e7e33ad2f5055ef0f7\""
Jul 10 00:19:39.158703 kubelet[2359]: E0710 00:19:39.158663 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:39.165976 containerd[1524]: time="2025-07-10T00:19:39.165040686Z" level=info msg="CreateContainer within sandbox \"c58fedd5b0dcc9f17cb12d75a261b454bcdc977ce6f9a1e7e33ad2f5055ef0f7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 10 00:19:39.167378 containerd[1524]: time="2025-07-10T00:19:39.167320270Z" level=info msg="Container 1807e4ab929e97e176729bf6e7f7c599ea237732b89e20d29da9ba97edaef711: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:19:39.180075 containerd[1524]: time="2025-07-10T00:19:39.180033602Z" level=info msg="Container b350d242139e1e0c0f310c4d7d45a848d86281cb20fa43375ca05262c7959fce: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:19:39.185142 containerd[1524]: time="2025-07-10T00:19:39.185088815Z" level=info msg="CreateContainer within sandbox \"7d42f5e242e2e6e22283090d2a81567f4bf26a30753d9cc7c9a26b582e37013a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1807e4ab929e97e176729bf6e7f7c599ea237732b89e20d29da9ba97edaef711\""
Jul 10 00:19:39.186545 containerd[1524]: time="2025-07-10T00:19:39.186470099Z" level=info msg="StartContainer for \"1807e4ab929e97e176729bf6e7f7c599ea237732b89e20d29da9ba97edaef711\""
Jul 10 00:19:39.188690 containerd[1524]: time="2025-07-10T00:19:39.188639290Z" level=info msg="connecting to shim 1807e4ab929e97e176729bf6e7f7c599ea237732b89e20d29da9ba97edaef711" address="unix:///run/containerd/s/5c887b7c4656fbfbaf570603a25bc1ff487df30ae849917066c6173583a415dd" protocol=ttrpc version=3
Jul 10 00:19:39.195970 containerd[1524]: time="2025-07-10T00:19:39.195889911Z" level=info msg="CreateContainer within sandbox \"c58fedd5b0dcc9f17cb12d75a261b454bcdc977ce6f9a1e7e33ad2f5055ef0f7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b350d242139e1e0c0f310c4d7d45a848d86281cb20fa43375ca05262c7959fce\""
Jul 10 00:19:39.196991 containerd[1524]: time="2025-07-10T00:19:39.196904280Z" level=info msg="StartContainer for \"b350d242139e1e0c0f310c4d7d45a848d86281cb20fa43375ca05262c7959fce\""
Jul 10 00:19:39.202700 containerd[1524]: time="2025-07-10T00:19:39.202596526Z" level=info msg="connecting to shim b350d242139e1e0c0f310c4d7d45a848d86281cb20fa43375ca05262c7959fce" address="unix:///run/containerd/s/4a0ec47f447c2fc1a48720b0eb3211e52dd505a8645a87c65085ca24cb675e34" protocol=ttrpc version=3
Jul 10 00:19:39.207998 containerd[1524]: time="2025-07-10T00:19:39.207772737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-5827fce73f,Uid:262032abb11ed7f03142bbe218009979,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fd59dc763a34aaef5455f8ba9b4807dcb4be2180adaed736dbb8d44d9579be5\""
Jul 10 00:19:39.209784 kubelet[2359]: E0710 00:19:39.209741 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:39.217540 containerd[1524]: time="2025-07-10T00:19:39.217427092Z" level=info msg="CreateContainer within sandbox \"8fd59dc763a34aaef5455f8ba9b4807dcb4be2180adaed736dbb8d44d9579be5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 10 00:19:39.231066 containerd[1524]: time="2025-07-10T00:19:39.230892935Z" level=info msg="Container 17b6113bdf9261b1df2651356bb638c325228ab562986a2e4a48b664357f142e: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:19:39.240308 systemd[1]: Started cri-containerd-1807e4ab929e97e176729bf6e7f7c599ea237732b89e20d29da9ba97edaef711.scope - libcontainer container 1807e4ab929e97e176729bf6e7f7c599ea237732b89e20d29da9ba97edaef711.
Jul 10 00:19:39.249732 kubelet[2359]: E0710 00:19:39.249418 2359 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://164.90.146.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.90.146.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 10 00:19:39.257062 containerd[1524]: time="2025-07-10T00:19:39.256808767Z" level=info msg="CreateContainer within sandbox \"8fd59dc763a34aaef5455f8ba9b4807dcb4be2180adaed736dbb8d44d9579be5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"17b6113bdf9261b1df2651356bb638c325228ab562986a2e4a48b664357f142e\""
Jul 10 00:19:39.258055 containerd[1524]: time="2025-07-10T00:19:39.257924063Z" level=info msg="StartContainer for \"17b6113bdf9261b1df2651356bb638c325228ab562986a2e4a48b664357f142e\""
Jul 10 00:19:39.258476 systemd[1]: Started cri-containerd-b350d242139e1e0c0f310c4d7d45a848d86281cb20fa43375ca05262c7959fce.scope - libcontainer container b350d242139e1e0c0f310c4d7d45a848d86281cb20fa43375ca05262c7959fce.
Jul 10 00:19:39.262439 containerd[1524]: time="2025-07-10T00:19:39.262380380Z" level=info msg="connecting to shim 17b6113bdf9261b1df2651356bb638c325228ab562986a2e4a48b664357f142e" address="unix:///run/containerd/s/365ec09c4cce9b49d121ea1799916f338ed378bcefdb21f41f8102378e0a77d9" protocol=ttrpc version=3
Jul 10 00:19:39.303145 systemd[1]: Started cri-containerd-17b6113bdf9261b1df2651356bb638c325228ab562986a2e4a48b664357f142e.scope - libcontainer container 17b6113bdf9261b1df2651356bb638c325228ab562986a2e4a48b664357f142e.
Jul 10 00:19:39.420861 containerd[1524]: time="2025-07-10T00:19:39.420802819Z" level=info msg="StartContainer for \"1807e4ab929e97e176729bf6e7f7c599ea237732b89e20d29da9ba97edaef711\" returns successfully"
Jul 10 00:19:39.432727 containerd[1524]: time="2025-07-10T00:19:39.432687978Z" level=info msg="StartContainer for \"17b6113bdf9261b1df2651356bb638c325228ab562986a2e4a48b664357f142e\" returns successfully"
Jul 10 00:19:39.443224 containerd[1524]: time="2025-07-10T00:19:39.443172845Z" level=info msg="StartContainer for \"b350d242139e1e0c0f310c4d7d45a848d86281cb20fa43375ca05262c7959fce\" returns successfully"
Jul 10 00:19:39.676288 kubelet[2359]: E0710 00:19:39.676225 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.146.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-5827fce73f?timeout=10s\": dial tcp 164.90.146.220:6443: connect: connection refused" interval="1.6s"
Jul 10 00:19:39.890345 kubelet[2359]: I0710 00:19:39.890296 2359 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:40.353917 kubelet[2359]: E0710 00:19:40.353869 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-5827fce73f\" not found" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:40.354569 kubelet[2359]: E0710 00:19:40.354045 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:40.359479 kubelet[2359]: E0710 00:19:40.359261 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-5827fce73f\" not found" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:40.359479 kubelet[2359]: E0710 00:19:40.359415 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:40.364112 kubelet[2359]: E0710 00:19:40.364075 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-5827fce73f\" not found" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:40.364304 kubelet[2359]: E0710 00:19:40.364249 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:40.768675 sshd[2407]: Invalid user user from 80.94.95.115 port 28530
Jul 10 00:19:41.031090 sshd[2407]: Connection closed by invalid user user 80.94.95.115 port 28530 [preauth]
Jul 10 00:19:41.033963 systemd[1]: sshd@10-164.90.146.220:22-80.94.95.115:28530.service: Deactivated successfully.
Jul 10 00:19:41.369877 kubelet[2359]: E0710 00:19:41.368495 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-5827fce73f\" not found" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:41.369877 kubelet[2359]: E0710 00:19:41.368740 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:41.372307 kubelet[2359]: E0710 00:19:41.371534 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-5827fce73f\" not found" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:41.372307 kubelet[2359]: E0710 00:19:41.371695 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:41.372307 kubelet[2359]: E0710 00:19:41.372115 2359 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-n-5827fce73f\" not found" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:41.372602 kubelet[2359]: E0710 00:19:41.372582 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:41.927061 kubelet[2359]: E0710 00:19:41.927011 2359 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-n-5827fce73f\" not found" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.105052 kubelet[2359]: I0710 00:19:42.104984 2359 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.105052 kubelet[2359]: E0710 00:19:42.105044 2359 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.1.1-n-5827fce73f\": node \"ci-4344.1.1-n-5827fce73f\" not found"
Jul 10 00:19:42.169917 kubelet[2359]: I0710 00:19:42.169835 2359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.182986 kubelet[2359]: E0710 00:19:42.181377 2359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-n-5827fce73f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.182986 kubelet[2359]: I0710 00:19:42.181423 2359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.185970 kubelet[2359]: E0710 00:19:42.185870 2359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-5827fce73f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.185970 kubelet[2359]: I0710 00:19:42.185916 2359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.189797 kubelet[2359]: E0710 00:19:42.189729 2359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-n-5827fce73f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.234891 kubelet[2359]: I0710 00:19:42.234782 2359 apiserver.go:52] "Watching apiserver"
Jul 10 00:19:42.268010 kubelet[2359]: I0710 00:19:42.267915 2359 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 10 00:19:42.368224 kubelet[2359]: I0710 00:19:42.368175 2359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.368736 kubelet[2359]: I0710 00:19:42.368705 2359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.369413 kubelet[2359]: I0710 00:19:42.369205 2359 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.372293 kubelet[2359]: E0710 00:19:42.372237 2359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-n-5827fce73f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.372720 kubelet[2359]: E0710 00:19:42.372470 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:42.373418 kubelet[2359]: E0710 00:19:42.373326 2359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-n-5827fce73f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.373564 kubelet[2359]: E0710 00:19:42.373471 2359 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-n-5827fce73f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:42.373846 kubelet[2359]: E0710 00:19:42.373800 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:42.373980 kubelet[2359]: E0710 00:19:42.373894 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:44.762463 systemd[1]: Reload requested from client PID 2646 ('systemctl') (unit session-9.scope)...
Jul 10 00:19:44.762979 systemd[1]: Reloading...
Jul 10 00:19:44.889261 zram_generator::config[2689]: No configuration found.
Jul 10 00:19:45.038487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:19:45.224950 systemd[1]: Reloading finished in 461 ms.
Jul 10 00:19:45.257678 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:19:45.274512 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 00:19:45.274842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:19:45.274923 systemd[1]: kubelet.service: Consumed 1.485s CPU time, 126M memory peak.
Jul 10 00:19:45.280684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:19:45.496442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:19:45.513650 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 00:19:45.608788 kubelet[2739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:19:45.610001 kubelet[2739]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:19:45.610001 kubelet[2739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:19:45.610001 kubelet[2739]: I0710 00:19:45.609521 2739 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:19:45.619838 kubelet[2739]: I0710 00:19:45.619743 2739 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 10 00:19:45.619838 kubelet[2739]: I0710 00:19:45.619785 2739 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:19:45.620128 kubelet[2739]: I0710 00:19:45.620074 2739 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 10 00:19:45.624996 kubelet[2739]: I0710 00:19:45.622724 2739 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 10 00:19:45.635750 kubelet[2739]: I0710 00:19:45.635687 2739 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:19:45.648898 kubelet[2739]: I0710 00:19:45.648866 2739 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 10 00:19:45.654732 kubelet[2739]: I0710 00:19:45.654686 2739 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:19:45.655424 kubelet[2739]: I0710 00:19:45.655372 2739 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:19:45.655803 kubelet[2739]: I0710 00:19:45.655559 2739 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-5827fce73f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:19:45.656043 kubelet[2739]: I0710 00:19:45.656024 2739 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:19:45.656141 kubelet[2739]: I0710 00:19:45.656129 2739 container_manager_linux.go:303] "Creating device plugin manager"
Jul 10 00:19:45.656303 kubelet[2739]: I0710 00:19:45.656286 2739 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:19:45.656641 kubelet[2739]: I0710 00:19:45.656623 2739 kubelet.go:480] "Attempting to sync node with API server"
Jul 10 00:19:45.656742 kubelet[2739]: I0710 00:19:45.656730 2739 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:19:45.656838 kubelet[2739]: I0710 00:19:45.656828 2739 kubelet.go:386] "Adding apiserver pod source"
Jul 10 00:19:45.656918 kubelet[2739]: I0710 00:19:45.656907 2739 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:19:45.661335 kubelet[2739]: I0710 00:19:45.661294 2739 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 10 00:19:45.662421 kubelet[2739]: I0710 00:19:45.662380 2739 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 10 00:19:45.667502 kubelet[2739]: I0710 00:19:45.667399 2739 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 00:19:45.667862 kubelet[2739]: I0710 00:19:45.667737 2739 server.go:1289] "Started kubelet"
Jul 10 00:19:45.671798 kubelet[2739]: I0710 00:19:45.671761 2739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:19:45.674773 kubelet[2739]: I0710 00:19:45.674324 2739 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:19:45.689648 kubelet[2739]: I0710 00:19:45.689541 2739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:19:45.690541 kubelet[2739]: I0710 00:19:45.690506 2739 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:19:45.701915 kubelet[2739]: I0710 00:19:45.701869 2739 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:19:45.706853 kubelet[2739]: I0710 00:19:45.706818 2739 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 00:19:45.707069 kubelet[2739]: E0710 00:19:45.706965 2739 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5827fce73f\" not found"
Jul 10 00:19:45.708096 kubelet[2739]: I0710 00:19:45.708060 2739 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 00:19:45.708516 kubelet[2739]: I0710 00:19:45.708223 2739 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:19:45.710977 kubelet[2739]: E0710 00:19:45.709627 2739 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 00:19:45.711690 kubelet[2739]: I0710 00:19:45.711636 2739 factory.go:223] Registration of the systemd container factory successfully
Jul 10 00:19:45.711864 kubelet[2739]: I0710 00:19:45.711836 2739 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:19:45.712484 kubelet[2739]: I0710 00:19:45.712430 2739 server.go:317] "Adding debug handlers to kubelet server"
Jul 10 00:19:45.721019 kubelet[2739]: I0710 00:19:45.720975 2739 factory.go:223] Registration of the containerd container factory successfully
Jul 10 00:19:45.746289 kubelet[2739]: I0710 00:19:45.745970 2739 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:19:45.758407 kubelet[2739]: I0710 00:19:45.757201 2739 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:19:45.758407 kubelet[2739]: I0710 00:19:45.757441 2739 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 10 00:19:45.758407 kubelet[2739]: I0710 00:19:45.757490 2739 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 00:19:45.758407 kubelet[2739]: I0710 00:19:45.757502 2739 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 10 00:19:45.758407 kubelet[2739]: E0710 00:19:45.757633 2739 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 00:19:45.857905 kubelet[2739]: E0710 00:19:45.857805 2739 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 10 00:19:45.922201 kubelet[2739]: I0710 00:19:45.922160 2739 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 00:19:45.922201 kubelet[2739]: I0710 00:19:45.922187 2739 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 00:19:45.922201 kubelet[2739]: I0710 00:19:45.922223 2739 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:19:45.923545 kubelet[2739]: I0710 00:19:45.922439 2739 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 10 00:19:45.923545 kubelet[2739]: I0710 00:19:45.922452 2739 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 10 00:19:45.923545 kubelet[2739]: I0710 00:19:45.922475 2739 policy_none.go:49] "None policy: Start"
Jul 10 00:19:45.923545 kubelet[2739]: I0710 00:19:45.922489 2739 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 10 00:19:45.923545 kubelet[2739]: I0710 00:19:45.922502 2739 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 00:19:45.923545 kubelet[2739]: I0710 00:19:45.922622 2739 state_mem.go:75] "Updated machine memory state"
Jul 10 00:19:45.945194 kubelet[2739]: E0710 00:19:45.944921 2739 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 10 00:19:45.945371 kubelet[2739]: I0710 00:19:45.945269 2739 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 00:19:45.945371 kubelet[2739]: I0710 00:19:45.945285 2739 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 00:19:45.948467 kubelet[2739]: I0710 00:19:45.945975 2739 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 00:19:45.954018 kubelet[2739]: E0710 00:19:45.953974 2739 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 10 00:19:46.060721 kubelet[2739]: I0710 00:19:46.060565 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.063378 kubelet[2739]: I0710 00:19:46.063331 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.066376 kubelet[2739]: I0710 00:19:46.065235 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.068112 kubelet[2739]: I0710 00:19:46.067350 2739 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.078744 kubelet[2739]: I0710 00:19:46.078692 2739 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 10 00:19:46.085751 kubelet[2739]: I0710 00:19:46.084526 2739 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 10 00:19:46.087136 kubelet[2739]: I0710 00:19:46.086626 2739 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jul 10 00:19:46.094866 kubelet[2739]: I0710 00:19:46.094809 2739 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.096807 kubelet[2739]: I0710 00:19:46.095111 2739 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.110800 kubelet[2739]: I0710 00:19:46.110211 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2073ceac392c2727c8924d0020fecf27-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-5827fce73f\" (UID: \"2073ceac392c2727c8924d0020fecf27\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.110800 kubelet[2739]: I0710 00:19:46.110305 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2073ceac392c2727c8924d0020fecf27-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-5827fce73f\" (UID: \"2073ceac392c2727c8924d0020fecf27\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.110800 kubelet[2739]: I0710 00:19:46.110353 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2073ceac392c2727c8924d0020fecf27-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-5827fce73f\" (UID: \"2073ceac392c2727c8924d0020fecf27\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.110800 kubelet[2739]: I0710 00:19:46.110375 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/079c3fffeacfc018d419efd55b515d50-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-5827fce73f\" (UID: \"079c3fffeacfc018d419efd55b515d50\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.110800 kubelet[2739]: I0710 00:19:46.110395 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2073ceac392c2727c8924d0020fecf27-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-5827fce73f\" (UID: \"2073ceac392c2727c8924d0020fecf27\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.111204 kubelet[2739]: I0710 00:19:46.110442 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/262032abb11ed7f03142bbe218009979-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-5827fce73f\" (UID: \"262032abb11ed7f03142bbe218009979\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.111204 kubelet[2739]: I0710 00:19:46.110458 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/079c3fffeacfc018d419efd55b515d50-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-5827fce73f\" (UID: \"079c3fffeacfc018d419efd55b515d50\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.111204 kubelet[2739]: I0710 00:19:46.110474 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/079c3fffeacfc018d419efd55b515d50-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-5827fce73f\" (UID: \"079c3fffeacfc018d419efd55b515d50\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f"
Jul 10 00:19:46.111204 kubelet[2739]: I0710
00:19:46.110519 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2073ceac392c2727c8924d0020fecf27-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-5827fce73f\" (UID: \"2073ceac392c2727c8924d0020fecf27\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f" Jul 10 00:19:46.379824 kubelet[2739]: E0710 00:19:46.379745 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:19:46.385532 kubelet[2739]: E0710 00:19:46.385393 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:19:46.388433 kubelet[2739]: E0710 00:19:46.388314 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:19:46.658285 kubelet[2739]: I0710 00:19:46.658082 2739 apiserver.go:52] "Watching apiserver" Jul 10 00:19:46.708899 kubelet[2739]: I0710 00:19:46.708849 2739 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:19:46.829405 kubelet[2739]: I0710 00:19:46.829339 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f" Jul 10 00:19:46.830974 kubelet[2739]: E0710 00:19:46.830895 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:19:46.833244 kubelet[2739]: E0710 00:19:46.833171 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:19:46.856336 kubelet[2739]: I0710 00:19:46.856297 2739 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 10 00:19:46.856969 kubelet[2739]: E0710 00:19:46.856645 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-n-5827fce73f\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f" Jul 10 00:19:46.856969 kubelet[2739]: E0710 00:19:46.856911 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:19:46.893966 kubelet[2739]: I0710 00:19:46.892733 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-n-5827fce73f" podStartSLOduration=0.892705669 podStartE2EDuration="892.705669ms" podCreationTimestamp="2025-07-10 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:19:46.877055646 +0000 UTC m=+1.350543761" watchObservedRunningTime="2025-07-10 00:19:46.892705669 +0000 UTC m=+1.366193772" Jul 10 00:19:46.908904 kubelet[2739]: I0710 00:19:46.908575 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5827fce73f" podStartSLOduration=0.908548657 podStartE2EDuration="908.548657ms" podCreationTimestamp="2025-07-10 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:19:46.893560985 +0000 UTC m=+1.367049080" watchObservedRunningTime="2025-07-10 00:19:46.908548657 +0000 UTC m=+1.382036762" Jul 10 00:19:47.833429 kubelet[2739]: E0710 
00:19:47.833242 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:19:47.833429 kubelet[2739]: E0710 00:19:47.833341 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:19:48.835426 kubelet[2739]: E0710 00:19:48.835282 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:19:48.982967 kubelet[2739]: E0710 00:19:48.982874 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:19:49.098131 update_engine[1497]: I20250710 00:19:49.097182 1497 update_attempter.cc:509] Updating boot flags... Jul 10 00:19:49.837974 kubelet[2739]: E0710 00:19:49.837815 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:19:50.577807 kubelet[2739]: I0710 00:19:50.577614 2739 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:19:50.578491 containerd[1524]: time="2025-07-10T00:19:50.578418143Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 10 00:19:50.580011 kubelet[2739]: I0710 00:19:50.579551 2739 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 10 00:19:51.336468 kubelet[2739]: E0710 00:19:51.336053 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:51.368614 kubelet[2739]: I0710 00:19:51.368385 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-n-5827fce73f" podStartSLOduration=5.368362574 podStartE2EDuration="5.368362574s" podCreationTimestamp="2025-07-10 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:19:46.91054165 +0000 UTC m=+1.384029753" watchObservedRunningTime="2025-07-10 00:19:51.368362574 +0000 UTC m=+5.841850676"
Jul 10 00:19:51.622098 systemd[1]: Created slice kubepods-besteffort-podb3079f91_6042_405f_9931_6e3b6bc27f6d.slice - libcontainer container kubepods-besteffort-podb3079f91_6042_405f_9931_6e3b6bc27f6d.slice.
Jul 10 00:19:51.755416 kubelet[2739]: I0710 00:19:51.755204 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b3079f91-6042-405f-9931-6e3b6bc27f6d-kube-proxy\") pod \"kube-proxy-f4slp\" (UID: \"b3079f91-6042-405f-9931-6e3b6bc27f6d\") " pod="kube-system/kube-proxy-f4slp"
Jul 10 00:19:51.756177 kubelet[2739]: I0710 00:19:51.756069 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3079f91-6042-405f-9931-6e3b6bc27f6d-xtables-lock\") pod \"kube-proxy-f4slp\" (UID: \"b3079f91-6042-405f-9931-6e3b6bc27f6d\") " pod="kube-system/kube-proxy-f4slp"
Jul 10 00:19:51.756177 kubelet[2739]: I0710 00:19:51.756122 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3079f91-6042-405f-9931-6e3b6bc27f6d-lib-modules\") pod \"kube-proxy-f4slp\" (UID: \"b3079f91-6042-405f-9931-6e3b6bc27f6d\") " pod="kube-system/kube-proxy-f4slp"
Jul 10 00:19:51.756177 kubelet[2739]: I0710 00:19:51.756142 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwmxr\" (UniqueName: \"kubernetes.io/projected/b3079f91-6042-405f-9931-6e3b6bc27f6d-kube-api-access-wwmxr\") pod \"kube-proxy-f4slp\" (UID: \"b3079f91-6042-405f-9931-6e3b6bc27f6d\") " pod="kube-system/kube-proxy-f4slp"
Jul 10 00:19:51.798383 systemd[1]: Created slice kubepods-besteffort-podf5bf369a_a766_4d68_88d6_697678910c22.slice - libcontainer container kubepods-besteffort-podf5bf369a_a766_4d68_88d6_697678910c22.slice.
Jul 10 00:19:51.841836 kubelet[2739]: E0710 00:19:51.841607 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:51.856899 kubelet[2739]: I0710 00:19:51.856839 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxcw\" (UniqueName: \"kubernetes.io/projected/f5bf369a-a766-4d68-88d6-697678910c22-kube-api-access-llxcw\") pod \"tigera-operator-747864d56d-9jnpj\" (UID: \"f5bf369a-a766-4d68-88d6-697678910c22\") " pod="tigera-operator/tigera-operator-747864d56d-9jnpj"
Jul 10 00:19:51.857208 kubelet[2739]: I0710 00:19:51.856914 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5bf369a-a766-4d68-88d6-697678910c22-var-lib-calico\") pod \"tigera-operator-747864d56d-9jnpj\" (UID: \"f5bf369a-a766-4d68-88d6-697678910c22\") " pod="tigera-operator/tigera-operator-747864d56d-9jnpj"
Jul 10 00:19:51.937418 kubelet[2739]: E0710 00:19:51.937223 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:51.939018 containerd[1524]: time="2025-07-10T00:19:51.938897774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f4slp,Uid:b3079f91-6042-405f-9931-6e3b6bc27f6d,Namespace:kube-system,Attempt:0,}"
Jul 10 00:19:51.973312 containerd[1524]: time="2025-07-10T00:19:51.973190432Z" level=info msg="connecting to shim 5cbfbcf215ab965234d7191177712e463aa96296e071b087a00b78a5616db85c" address="unix:///run/containerd/s/968ea8921b5ddf1ac63b347e1a77db0ba3ff41d301953a6a66c91cfb6566632c" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:19:52.017281 systemd[1]: Started cri-containerd-5cbfbcf215ab965234d7191177712e463aa96296e071b087a00b78a5616db85c.scope - libcontainer container 5cbfbcf215ab965234d7191177712e463aa96296e071b087a00b78a5616db85c.
Jul 10 00:19:52.057050 containerd[1524]: time="2025-07-10T00:19:52.056881599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f4slp,Uid:b3079f91-6042-405f-9931-6e3b6bc27f6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cbfbcf215ab965234d7191177712e463aa96296e071b087a00b78a5616db85c\""
Jul 10 00:19:52.058296 kubelet[2739]: E0710 00:19:52.058268 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:52.064879 containerd[1524]: time="2025-07-10T00:19:52.064186643Z" level=info msg="CreateContainer within sandbox \"5cbfbcf215ab965234d7191177712e463aa96296e071b087a00b78a5616db85c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 10 00:19:52.087591 containerd[1524]: time="2025-07-10T00:19:52.087532402Z" level=info msg="Container 9b4d3a8367c259452048759cf4ee14ed14d843124886a743c170fa050d7d6282: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:19:52.096370 containerd[1524]: time="2025-07-10T00:19:52.096305876Z" level=info msg="CreateContainer within sandbox \"5cbfbcf215ab965234d7191177712e463aa96296e071b087a00b78a5616db85c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9b4d3a8367c259452048759cf4ee14ed14d843124886a743c170fa050d7d6282\""
Jul 10 00:19:52.098224 containerd[1524]: time="2025-07-10T00:19:52.098166683Z" level=info msg="StartContainer for \"9b4d3a8367c259452048759cf4ee14ed14d843124886a743c170fa050d7d6282\""
Jul 10 00:19:52.100463 containerd[1524]: time="2025-07-10T00:19:52.100359923Z" level=info msg="connecting to shim 9b4d3a8367c259452048759cf4ee14ed14d843124886a743c170fa050d7d6282" address="unix:///run/containerd/s/968ea8921b5ddf1ac63b347e1a77db0ba3ff41d301953a6a66c91cfb6566632c" protocol=ttrpc version=3
Jul 10 00:19:52.104153 containerd[1524]: time="2025-07-10T00:19:52.104098993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-9jnpj,Uid:f5bf369a-a766-4d68-88d6-697678910c22,Namespace:tigera-operator,Attempt:0,}"
Jul 10 00:19:52.134409 systemd[1]: Started cri-containerd-9b4d3a8367c259452048759cf4ee14ed14d843124886a743c170fa050d7d6282.scope - libcontainer container 9b4d3a8367c259452048759cf4ee14ed14d843124886a743c170fa050d7d6282.
Jul 10 00:19:52.140840 containerd[1524]: time="2025-07-10T00:19:52.140765779Z" level=info msg="connecting to shim 9d046cdf346bd5b3046d0bdbb38f6e7dcf7681b1d9f24020e998fb2193e0e741" address="unix:///run/containerd/s/86f0930dda50197d6d3ecb513ade0e8385fb09c1169ee67f48d2f409e630b238" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:19:52.185432 systemd[1]: Started cri-containerd-9d046cdf346bd5b3046d0bdbb38f6e7dcf7681b1d9f24020e998fb2193e0e741.scope - libcontainer container 9d046cdf346bd5b3046d0bdbb38f6e7dcf7681b1d9f24020e998fb2193e0e741.
Jul 10 00:19:52.217775 containerd[1524]: time="2025-07-10T00:19:52.217619273Z" level=info msg="StartContainer for \"9b4d3a8367c259452048759cf4ee14ed14d843124886a743c170fa050d7d6282\" returns successfully"
Jul 10 00:19:52.285865 containerd[1524]: time="2025-07-10T00:19:52.285796673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-9jnpj,Uid:f5bf369a-a766-4d68-88d6-697678910c22,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9d046cdf346bd5b3046d0bdbb38f6e7dcf7681b1d9f24020e998fb2193e0e741\""
Jul 10 00:19:52.291894 containerd[1524]: time="2025-07-10T00:19:52.291637307Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 10 00:19:52.296088 systemd-resolved[1403]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Jul 10 00:19:52.851369 kubelet[2739]: E0710 00:19:52.851295 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:52.852429 kubelet[2739]: E0710 00:19:52.851369 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:52.866155 kubelet[2739]: I0710 00:19:52.865980 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f4slp" podStartSLOduration=1.865391024 podStartE2EDuration="1.865391024s" podCreationTimestamp="2025-07-10 00:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:19:52.864507401 +0000 UTC m=+7.337995523" watchObservedRunningTime="2025-07-10 00:19:52.865391024 +0000 UTC m=+7.338879128"
Jul 10 00:19:52.883214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount35553891.mount: Deactivated successfully.
Jul 10 00:19:53.689072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027968527.mount: Deactivated successfully.
Jul 10 00:19:55.645784 containerd[1524]: time="2025-07-10T00:19:55.645703777Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:55.647461 containerd[1524]: time="2025-07-10T00:19:55.647118841Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 10 00:19:55.648220 containerd[1524]: time="2025-07-10T00:19:55.648174706Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:55.651343 containerd[1524]: time="2025-07-10T00:19:55.651274647Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:19:55.652660 containerd[1524]: time="2025-07-10T00:19:55.652604915Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.360709107s"
Jul 10 00:19:55.652870 containerd[1524]: time="2025-07-10T00:19:55.652843728Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 10 00:19:55.659910 containerd[1524]: time="2025-07-10T00:19:55.659691717Z" level=info msg="CreateContainer within sandbox \"9d046cdf346bd5b3046d0bdbb38f6e7dcf7681b1d9f24020e998fb2193e0e741\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 10 00:19:55.670104 containerd[1524]: time="2025-07-10T00:19:55.670043228Z" level=info msg="Container 09ee69e77fc803e7bab2e88574d55969daac8a1c55b3c78f75556c06f64569c7: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:19:55.684755 containerd[1524]: time="2025-07-10T00:19:55.684679839Z" level=info msg="CreateContainer within sandbox \"9d046cdf346bd5b3046d0bdbb38f6e7dcf7681b1d9f24020e998fb2193e0e741\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"09ee69e77fc803e7bab2e88574d55969daac8a1c55b3c78f75556c06f64569c7\""
Jul 10 00:19:55.688031 containerd[1524]: time="2025-07-10T00:19:55.686995978Z" level=info msg="StartContainer for \"09ee69e77fc803e7bab2e88574d55969daac8a1c55b3c78f75556c06f64569c7\""
Jul 10 00:19:55.689415 containerd[1524]: time="2025-07-10T00:19:55.688359470Z" level=info msg="connecting to shim 09ee69e77fc803e7bab2e88574d55969daac8a1c55b3c78f75556c06f64569c7" address="unix:///run/containerd/s/86f0930dda50197d6d3ecb513ade0e8385fb09c1169ee67f48d2f409e630b238" protocol=ttrpc version=3
Jul 10 00:19:55.721448 systemd[1]: Started cri-containerd-09ee69e77fc803e7bab2e88574d55969daac8a1c55b3c78f75556c06f64569c7.scope - libcontainer container 09ee69e77fc803e7bab2e88574d55969daac8a1c55b3c78f75556c06f64569c7.
Jul 10 00:19:55.774047 containerd[1524]: time="2025-07-10T00:19:55.773862547Z" level=info msg="StartContainer for \"09ee69e77fc803e7bab2e88574d55969daac8a1c55b3c78f75556c06f64569c7\" returns successfully"
Jul 10 00:19:55.880024 kubelet[2739]: I0710 00:19:55.879874 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-9jnpj" podStartSLOduration=1.5157339749999998 podStartE2EDuration="4.879783494s" podCreationTimestamp="2025-07-10 00:19:51 +0000 UTC" firstStartedPulling="2025-07-10 00:19:52.290116862 +0000 UTC m=+6.763604957" lastFinishedPulling="2025-07-10 00:19:55.654166391 +0000 UTC m=+10.127654476" observedRunningTime="2025-07-10 00:19:55.878644742 +0000 UTC m=+10.352132867" watchObservedRunningTime="2025-07-10 00:19:55.879783494 +0000 UTC m=+10.353271598"
Jul 10 00:19:58.242977 kubelet[2739]: E0710 00:19:58.242874 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:58.878859 kubelet[2739]: E0710 00:19:58.877583 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:19:58.996460 kubelet[2739]: E0710 00:19:58.996054 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:20:02.288273 sudo[1792]: pam_unix(sudo:session): session closed for user root
Jul 10 00:20:02.293457 sshd[1791]: Connection closed by 147.75.109.163 port 52804
Jul 10 00:20:02.295646 sshd-session[1786]: pam_unix(sshd:session): session closed for user core
Jul 10 00:20:02.305558 systemd[1]: sshd@8-164.90.146.220:22-147.75.109.163:52804.service: Deactivated successfully.
Jul 10 00:20:02.305921 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit.
Jul 10 00:20:02.312737 systemd[1]: session-9.scope: Deactivated successfully.
Jul 10 00:20:02.314277 systemd[1]: session-9.scope: Consumed 6.906s CPU time, 167M memory peak.
Jul 10 00:20:02.324547 systemd-logind[1496]: Removed session 9.
Jul 10 00:20:08.625528 systemd[1]: Created slice kubepods-besteffort-pod7f4783ca_9206_431f_82b1_1e66c294bcf8.slice - libcontainer container kubepods-besteffort-pod7f4783ca_9206_431f_82b1_1e66c294bcf8.slice.
Jul 10 00:20:08.684380 kubelet[2739]: I0710 00:20:08.684268 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7f4783ca-9206-431f-82b1-1e66c294bcf8-typha-certs\") pod \"calico-typha-8667f8c99b-wksb7\" (UID: \"7f4783ca-9206-431f-82b1-1e66c294bcf8\") " pod="calico-system/calico-typha-8667f8c99b-wksb7"
Jul 10 00:20:08.684380 kubelet[2739]: I0710 00:20:08.684354 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v62p7\" (UniqueName: \"kubernetes.io/projected/7f4783ca-9206-431f-82b1-1e66c294bcf8-kube-api-access-v62p7\") pod \"calico-typha-8667f8c99b-wksb7\" (UID: \"7f4783ca-9206-431f-82b1-1e66c294bcf8\") " pod="calico-system/calico-typha-8667f8c99b-wksb7"
Jul 10 00:20:08.684380 kubelet[2739]: I0710 00:20:08.684398 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f4783ca-9206-431f-82b1-1e66c294bcf8-tigera-ca-bundle\") pod \"calico-typha-8667f8c99b-wksb7\" (UID: \"7f4783ca-9206-431f-82b1-1e66c294bcf8\") " pod="calico-system/calico-typha-8667f8c99b-wksb7"
Jul 10 00:20:08.939902 kubelet[2739]: E0710 00:20:08.939766 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:20:08.941968 containerd[1524]: time="2025-07-10T00:20:08.941386715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8667f8c99b-wksb7,Uid:7f4783ca-9206-431f-82b1-1e66c294bcf8,Namespace:calico-system,Attempt:0,}"
Jul 10 00:20:08.956854 systemd[1]: Created slice kubepods-besteffort-pod9303e838_8fcc_4476_a545_f967ce11cca0.slice - libcontainer container kubepods-besteffort-pod9303e838_8fcc_4476_a545_f967ce11cca0.slice.
Jul 10 00:20:08.986933 kubelet[2739]: I0710 00:20:08.986874 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9303e838-8fcc-4476-a545-f967ce11cca0-cni-net-dir\") pod \"calico-node-pg8g7\" (UID: \"9303e838-8fcc-4476-a545-f967ce11cca0\") " pod="calico-system/calico-node-pg8g7"
Jul 10 00:20:08.987629 kubelet[2739]: I0710 00:20:08.986959 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9303e838-8fcc-4476-a545-f967ce11cca0-node-certs\") pod \"calico-node-pg8g7\" (UID: \"9303e838-8fcc-4476-a545-f967ce11cca0\") " pod="calico-system/calico-node-pg8g7"
Jul 10 00:20:08.987629 kubelet[2739]: I0710 00:20:08.986992 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9303e838-8fcc-4476-a545-f967ce11cca0-xtables-lock\") pod \"calico-node-pg8g7\" (UID: \"9303e838-8fcc-4476-a545-f967ce11cca0\") " pod="calico-system/calico-node-pg8g7"
Jul 10 00:20:08.987629 kubelet[2739]: I0710 00:20:08.987021 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9303e838-8fcc-4476-a545-f967ce11cca0-flexvol-driver-host\") pod \"calico-node-pg8g7\" (UID: \"9303e838-8fcc-4476-a545-f967ce11cca0\") " pod="calico-system/calico-node-pg8g7"
Jul 10 00:20:08.987629 kubelet[2739]: I0710 00:20:08.987044 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntn62\" (UniqueName: \"kubernetes.io/projected/9303e838-8fcc-4476-a545-f967ce11cca0-kube-api-access-ntn62\") pod \"calico-node-pg8g7\" (UID: \"9303e838-8fcc-4476-a545-f967ce11cca0\") " pod="calico-system/calico-node-pg8g7"
Jul 10 00:20:08.987629 kubelet[2739]: I0710 00:20:08.987071 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9303e838-8fcc-4476-a545-f967ce11cca0-tigera-ca-bundle\") pod \"calico-node-pg8g7\" (UID: \"9303e838-8fcc-4476-a545-f967ce11cca0\") " pod="calico-system/calico-node-pg8g7"
Jul 10 00:20:08.989974 kubelet[2739]: I0710 00:20:08.987100 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9303e838-8fcc-4476-a545-f967ce11cca0-cni-log-dir\") pod \"calico-node-pg8g7\" (UID: \"9303e838-8fcc-4476-a545-f967ce11cca0\") " pod="calico-system/calico-node-pg8g7"
Jul 10 00:20:08.989974 kubelet[2739]: I0710 00:20:08.987123 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9303e838-8fcc-4476-a545-f967ce11cca0-policysync\") pod \"calico-node-pg8g7\" (UID: \"9303e838-8fcc-4476-a545-f967ce11cca0\") " pod="calico-system/calico-node-pg8g7"
Jul 10 00:20:08.989974 kubelet[2739]: I0710 00:20:08.987151 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9303e838-8fcc-4476-a545-f967ce11cca0-var-run-calico\") pod \"calico-node-pg8g7\" (UID: \"9303e838-8fcc-4476-a545-f967ce11cca0\") " pod="calico-system/calico-node-pg8g7"
Jul 10 00:20:08.989974 kubelet[2739]: I0710 00:20:08.987177 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9303e838-8fcc-4476-a545-f967ce11cca0-cni-bin-dir\") pod \"calico-node-pg8g7\" (UID: \"9303e838-8fcc-4476-a545-f967ce11cca0\") " pod="calico-system/calico-node-pg8g7"
Jul 10 00:20:08.989974 kubelet[2739]: I0710 00:20:08.987202 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9303e838-8fcc-4476-a545-f967ce11cca0-lib-modules\") pod \"calico-node-pg8g7\" (UID: \"9303e838-8fcc-4476-a545-f967ce11cca0\") " pod="calico-system/calico-node-pg8g7"
Jul 10 00:20:08.990253 kubelet[2739]: I0710 00:20:08.987225 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9303e838-8fcc-4476-a545-f967ce11cca0-var-lib-calico\") pod \"calico-node-pg8g7\" (UID: \"9303e838-8fcc-4476-a545-f967ce11cca0\") " pod="calico-system/calico-node-pg8g7"
Jul 10 00:20:09.000224 containerd[1524]: time="2025-07-10T00:20:09.000147769Z" level=info msg="connecting to shim 5ae015316e864e63082bfb00ef906accdfd09932910944072beb29fd5e1c76a4" address="unix:///run/containerd/s/193de308b88515f265fad38388eb62d28eb6023584483d0d038029dc7032d6b9" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:20:09.070286 systemd[1]: Started cri-containerd-5ae015316e864e63082bfb00ef906accdfd09932910944072beb29fd5e1c76a4.scope - libcontainer container 5ae015316e864e63082bfb00ef906accdfd09932910944072beb29fd5e1c76a4.
Jul 10 00:20:09.095115 kubelet[2739]: E0710 00:20:09.095066 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.095115 kubelet[2739]: W0710 00:20:09.095103 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.095341 kubelet[2739]: E0710 00:20:09.095137 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.115085 kubelet[2739]: E0710 00:20:09.113511 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.115085 kubelet[2739]: W0710 00:20:09.113545 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.115085 kubelet[2739]: E0710 00:20:09.113581 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.129735 kubelet[2739]: E0710 00:20:09.129564 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.129735 kubelet[2739]: W0710 00:20:09.129596 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.129735 kubelet[2739]: E0710 00:20:09.129630 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.211369 kubelet[2739]: E0710 00:20:09.211211 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h245p" podUID="bfd653e1-5546-4bf6-9c11-78c2c2efc214" Jul 10 00:20:09.268907 containerd[1524]: time="2025-07-10T00:20:09.268563157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pg8g7,Uid:9303e838-8fcc-4476-a545-f967ce11cca0,Namespace:calico-system,Attempt:0,}" Jul 10 00:20:09.288724 kubelet[2739]: E0710 00:20:09.288677 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.290663 kubelet[2739]: W0710 00:20:09.290121 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.290663 kubelet[2739]: E0710 00:20:09.290176 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.291517 kubelet[2739]: E0710 00:20:09.290890 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.291517 kubelet[2739]: W0710 00:20:09.290963 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.291517 kubelet[2739]: E0710 00:20:09.290994 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.292785 kubelet[2739]: E0710 00:20:09.292097 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.292785 kubelet[2739]: W0710 00:20:09.292121 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.292785 kubelet[2739]: E0710 00:20:09.292160 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.293242 kubelet[2739]: E0710 00:20:09.292850 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.293242 kubelet[2739]: W0710 00:20:09.292867 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.293242 kubelet[2739]: E0710 00:20:09.292905 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.297355 kubelet[2739]: E0710 00:20:09.296362 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.297355 kubelet[2739]: W0710 00:20:09.296396 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.297355 kubelet[2739]: E0710 00:20:09.296543 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.297612 kubelet[2739]: E0710 00:20:09.297419 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.297612 kubelet[2739]: W0710 00:20:09.297439 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.297612 kubelet[2739]: E0710 00:20:09.297567 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.300274 kubelet[2739]: E0710 00:20:09.298325 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.300274 kubelet[2739]: W0710 00:20:09.298394 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.300274 kubelet[2739]: E0710 00:20:09.298498 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.300274 kubelet[2739]: E0710 00:20:09.299194 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.300274 kubelet[2739]: W0710 00:20:09.299247 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.300274 kubelet[2739]: E0710 00:20:09.299278 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.300851 kubelet[2739]: E0710 00:20:09.300726 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.300851 kubelet[2739]: W0710 00:20:09.300787 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.300851 kubelet[2739]: E0710 00:20:09.300813 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.303969 kubelet[2739]: E0710 00:20:09.301334 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.303969 kubelet[2739]: W0710 00:20:09.301355 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.303969 kubelet[2739]: E0710 00:20:09.301375 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.303969 kubelet[2739]: E0710 00:20:09.301836 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.303969 kubelet[2739]: W0710 00:20:09.301852 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.303969 kubelet[2739]: E0710 00:20:09.301870 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.303969 kubelet[2739]: E0710 00:20:09.302178 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.303969 kubelet[2739]: W0710 00:20:09.302231 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.303969 kubelet[2739]: E0710 00:20:09.302247 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.303969 kubelet[2739]: E0710 00:20:09.302564 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.304601 kubelet[2739]: W0710 00:20:09.302576 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.304601 kubelet[2739]: E0710 00:20:09.302591 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.304601 kubelet[2739]: E0710 00:20:09.302864 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.304601 kubelet[2739]: W0710 00:20:09.302877 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.304601 kubelet[2739]: E0710 00:20:09.302915 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.304601 kubelet[2739]: E0710 00:20:09.303212 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.304601 kubelet[2739]: W0710 00:20:09.303226 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.304601 kubelet[2739]: E0710 00:20:09.303241 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.304601 kubelet[2739]: E0710 00:20:09.303483 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.304601 kubelet[2739]: W0710 00:20:09.303497 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.305113 kubelet[2739]: E0710 00:20:09.303511 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.306349 kubelet[2739]: E0710 00:20:09.305414 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.306349 kubelet[2739]: W0710 00:20:09.305442 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.306349 kubelet[2739]: E0710 00:20:09.305463 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.306349 kubelet[2739]: E0710 00:20:09.305723 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.306349 kubelet[2739]: W0710 00:20:09.305736 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.306349 kubelet[2739]: E0710 00:20:09.305750 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.306349 kubelet[2739]: E0710 00:20:09.306041 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.306349 kubelet[2739]: W0710 00:20:09.306060 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.306349 kubelet[2739]: E0710 00:20:09.306074 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.306349 kubelet[2739]: E0710 00:20:09.306382 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.310266 kubelet[2739]: W0710 00:20:09.306399 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.310266 kubelet[2739]: E0710 00:20:09.306415 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.310266 kubelet[2739]: E0710 00:20:09.307167 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.310266 kubelet[2739]: W0710 00:20:09.307181 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.310266 kubelet[2739]: E0710 00:20:09.307214 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.310266 kubelet[2739]: I0710 00:20:09.307264 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkl5k\" (UniqueName: \"kubernetes.io/projected/bfd653e1-5546-4bf6-9c11-78c2c2efc214-kube-api-access-hkl5k\") pod \"csi-node-driver-h245p\" (UID: \"bfd653e1-5546-4bf6-9c11-78c2c2efc214\") " pod="calico-system/csi-node-driver-h245p" Jul 10 00:20:09.310266 kubelet[2739]: E0710 00:20:09.307708 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.310266 kubelet[2739]: W0710 00:20:09.307725 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.310266 kubelet[2739]: E0710 00:20:09.307770 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.310609 containerd[1524]: time="2025-07-10T00:20:09.301712478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8667f8c99b-wksb7,Uid:7f4783ca-9206-431f-82b1-1e66c294bcf8,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ae015316e864e63082bfb00ef906accdfd09932910944072beb29fd5e1c76a4\"" Jul 10 00:20:09.310730 kubelet[2739]: I0710 00:20:09.307829 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bfd653e1-5546-4bf6-9c11-78c2c2efc214-varrun\") pod \"csi-node-driver-h245p\" (UID: \"bfd653e1-5546-4bf6-9c11-78c2c2efc214\") " pod="calico-system/csi-node-driver-h245p" Jul 10 00:20:09.310730 kubelet[2739]: E0710 00:20:09.308272 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.310730 kubelet[2739]: W0710 00:20:09.308286 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.310730 kubelet[2739]: E0710 00:20:09.308355 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.310730 kubelet[2739]: I0710 00:20:09.308384 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfd653e1-5546-4bf6-9c11-78c2c2efc214-kubelet-dir\") pod \"csi-node-driver-h245p\" (UID: \"bfd653e1-5546-4bf6-9c11-78c2c2efc214\") " pod="calico-system/csi-node-driver-h245p" Jul 10 00:20:09.310730 kubelet[2739]: E0710 00:20:09.308758 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.310730 kubelet[2739]: W0710 00:20:09.308866 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.310730 kubelet[2739]: E0710 00:20:09.308893 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.311079 kubelet[2739]: I0710 00:20:09.309028 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bfd653e1-5546-4bf6-9c11-78c2c2efc214-registration-dir\") pod \"csi-node-driver-h245p\" (UID: \"bfd653e1-5546-4bf6-9c11-78c2c2efc214\") " pod="calico-system/csi-node-driver-h245p" Jul 10 00:20:09.311079 kubelet[2739]: E0710 00:20:09.309970 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.311079 kubelet[2739]: W0710 00:20:09.310007 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.311079 kubelet[2739]: E0710 00:20:09.310060 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.311079 kubelet[2739]: I0710 00:20:09.310100 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bfd653e1-5546-4bf6-9c11-78c2c2efc214-socket-dir\") pod \"csi-node-driver-h245p\" (UID: \"bfd653e1-5546-4bf6-9c11-78c2c2efc214\") " pod="calico-system/csi-node-driver-h245p" Jul 10 00:20:09.312723 kubelet[2739]: E0710 00:20:09.312530 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.312723 kubelet[2739]: W0710 00:20:09.312565 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.312723 kubelet[2739]: E0710 00:20:09.312592 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.313824 kubelet[2739]: E0710 00:20:09.313802 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.314871 kubelet[2739]: W0710 00:20:09.313992 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.315282 kubelet[2739]: E0710 00:20:09.314992 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.315282 kubelet[2739]: E0710 00:20:09.315026 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:09.316046 kubelet[2739]: E0710 00:20:09.316021 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.316813 kubelet[2739]: W0710 00:20:09.316170 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.316813 kubelet[2739]: E0710 00:20:09.316212 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.317092 kubelet[2739]: E0710 00:20:09.317073 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.317971 kubelet[2739]: W0710 00:20:09.317189 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.318306 kubelet[2739]: E0710 00:20:09.318125 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.319501 kubelet[2739]: E0710 00:20:09.319467 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.320919 containerd[1524]: time="2025-07-10T00:20:09.320868083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 00:20:09.321115 kubelet[2739]: W0710 00:20:09.321077 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.321264 kubelet[2739]: E0710 00:20:09.321242 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.325202 kubelet[2739]: E0710 00:20:09.325157 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.326371 kubelet[2739]: W0710 00:20:09.325399 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.326371 kubelet[2739]: E0710 00:20:09.325448 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.327962 kubelet[2739]: E0710 00:20:09.327250 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.327962 kubelet[2739]: W0710 00:20:09.327281 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.327962 kubelet[2739]: E0710 00:20:09.327374 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.331292 kubelet[2739]: E0710 00:20:09.330054 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.331292 kubelet[2739]: W0710 00:20:09.330081 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.331292 kubelet[2739]: E0710 00:20:09.330112 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.332479 kubelet[2739]: E0710 00:20:09.331744 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.332479 kubelet[2739]: W0710 00:20:09.331772 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.332479 kubelet[2739]: E0710 00:20:09.331814 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.335485 kubelet[2739]: E0710 00:20:09.335233 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.335485 kubelet[2739]: W0710 00:20:09.335282 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.335485 kubelet[2739]: E0710 00:20:09.335312 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.354961 containerd[1524]: time="2025-07-10T00:20:09.353321056Z" level=info msg="connecting to shim 0dabfc64b617864aab6c7d6087634a02feffbf3dfd9a5f2eeb280bc425629346" address="unix:///run/containerd/s/382d1492a84cd50115579ff8c63e4c0c8b3ad6285e7875b0d747ce17f9fd3109" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:20:09.414612 kubelet[2739]: E0710 00:20:09.412268 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.414612 kubelet[2739]: W0710 00:20:09.412304 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.414612 kubelet[2739]: E0710 00:20:09.412336 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.414612 kubelet[2739]: E0710 00:20:09.413091 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.414612 kubelet[2739]: W0710 00:20:09.413112 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.414612 kubelet[2739]: E0710 00:20:09.413192 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.414612 kubelet[2739]: E0710 00:20:09.414604 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.415403 kubelet[2739]: W0710 00:20:09.414629 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.415403 kubelet[2739]: E0710 00:20:09.414657 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.415488 kubelet[2739]: E0710 00:20:09.415407 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.415488 kubelet[2739]: W0710 00:20:09.415424 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.415488 kubelet[2739]: E0710 00:20:09.415449 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.418301 kubelet[2739]: E0710 00:20:09.418080 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.418301 kubelet[2739]: W0710 00:20:09.418110 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.418301 kubelet[2739]: E0710 00:20:09.418140 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.419033 kubelet[2739]: E0710 00:20:09.419006 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.419033 kubelet[2739]: W0710 00:20:09.419032 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.425398 kubelet[2739]: E0710 00:20:09.419059 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.425398 kubelet[2739]: E0710 00:20:09.419549 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.425398 kubelet[2739]: W0710 00:20:09.419565 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.425398 kubelet[2739]: E0710 00:20:09.420965 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.425398 kubelet[2739]: E0710 00:20:09.421317 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.425398 kubelet[2739]: W0710 00:20:09.421332 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.425398 kubelet[2739]: E0710 00:20:09.421351 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.425398 kubelet[2739]: E0710 00:20:09.421528 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.425398 kubelet[2739]: W0710 00:20:09.421537 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.425398 kubelet[2739]: E0710 00:20:09.421548 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.425829 kubelet[2739]: E0710 00:20:09.421732 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.425829 kubelet[2739]: W0710 00:20:09.421743 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.425829 kubelet[2739]: E0710 00:20:09.421754 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.425829 kubelet[2739]: E0710 00:20:09.421915 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.425829 kubelet[2739]: W0710 00:20:09.421923 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.425829 kubelet[2739]: E0710 00:20:09.421949 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.425829 kubelet[2739]: E0710 00:20:09.422204 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.425829 kubelet[2739]: W0710 00:20:09.422214 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.425829 kubelet[2739]: E0710 00:20:09.422226 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.425829 kubelet[2739]: E0710 00:20:09.422427 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.427547 kubelet[2739]: W0710 00:20:09.422435 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.427547 kubelet[2739]: E0710 00:20:09.422446 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.427547 kubelet[2739]: E0710 00:20:09.422662 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.427547 kubelet[2739]: W0710 00:20:09.422678 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.427547 kubelet[2739]: E0710 00:20:09.422699 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.427547 kubelet[2739]: E0710 00:20:09.423023 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.427547 kubelet[2739]: W0710 00:20:09.423038 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.427547 kubelet[2739]: E0710 00:20:09.423055 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.427547 kubelet[2739]: E0710 00:20:09.423294 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.427547 kubelet[2739]: W0710 00:20:09.423306 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.428438 kubelet[2739]: E0710 00:20:09.423319 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.428438 kubelet[2739]: E0710 00:20:09.423506 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.428438 kubelet[2739]: W0710 00:20:09.423515 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.428438 kubelet[2739]: E0710 00:20:09.423526 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.428438 kubelet[2739]: E0710 00:20:09.425780 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.428438 kubelet[2739]: W0710 00:20:09.425801 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.428438 kubelet[2739]: E0710 00:20:09.425826 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.428438 kubelet[2739]: E0710 00:20:09.426268 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.428438 kubelet[2739]: W0710 00:20:09.426284 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.428438 kubelet[2739]: E0710 00:20:09.426300 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.428796 kubelet[2739]: E0710 00:20:09.426554 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.428796 kubelet[2739]: W0710 00:20:09.426566 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.428796 kubelet[2739]: E0710 00:20:09.426580 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.428796 kubelet[2739]: E0710 00:20:09.426810 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.428796 kubelet[2739]: W0710 00:20:09.426823 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.428796 kubelet[2739]: E0710 00:20:09.426837 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.428796 kubelet[2739]: E0710 00:20:09.427150 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.428796 kubelet[2739]: W0710 00:20:09.427163 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.428796 kubelet[2739]: E0710 00:20:09.427179 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.428796 kubelet[2739]: E0710 00:20:09.427459 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.430296 kubelet[2739]: W0710 00:20:09.427472 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.430296 kubelet[2739]: E0710 00:20:09.427487 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.430296 kubelet[2739]: E0710 00:20:09.429958 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.430296 kubelet[2739]: W0710 00:20:09.429995 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.430296 kubelet[2739]: E0710 00:20:09.430023 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.430527 kubelet[2739]: E0710 00:20:09.430510 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.430527 kubelet[2739]: W0710 00:20:09.430524 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.430608 kubelet[2739]: E0710 00:20:09.430543 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:09.454367 systemd[1]: Started cri-containerd-0dabfc64b617864aab6c7d6087634a02feffbf3dfd9a5f2eeb280bc425629346.scope - libcontainer container 0dabfc64b617864aab6c7d6087634a02feffbf3dfd9a5f2eeb280bc425629346. Jul 10 00:20:09.480298 kubelet[2739]: E0710 00:20:09.479028 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:09.480298 kubelet[2739]: W0710 00:20:09.479079 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:09.480298 kubelet[2739]: E0710 00:20:09.479120 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:09.568602 containerd[1524]: time="2025-07-10T00:20:09.568548919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pg8g7,Uid:9303e838-8fcc-4476-a545-f967ce11cca0,Namespace:calico-system,Attempt:0,} returns sandbox id \"0dabfc64b617864aab6c7d6087634a02feffbf3dfd9a5f2eeb280bc425629346\"" Jul 10 00:20:10.761041 kubelet[2739]: E0710 00:20:10.759447 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h245p" podUID="bfd653e1-5546-4bf6-9c11-78c2c2efc214" Jul 10 00:20:11.053981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493667546.mount: Deactivated successfully. Jul 10 00:20:12.261482 containerd[1524]: time="2025-07-10T00:20:12.261415862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 10 00:20:12.293648 containerd[1524]: time="2025-07-10T00:20:12.293540223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:12.296121 containerd[1524]: time="2025-07-10T00:20:12.296047982Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:12.297543 containerd[1524]: time="2025-07-10T00:20:12.297493394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.976573163s" Jul 10 00:20:12.297807 
containerd[1524]: time="2025-07-10T00:20:12.297619542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 10 00:20:12.302157 containerd[1524]: time="2025-07-10T00:20:12.301943570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 00:20:12.305549 containerd[1524]: time="2025-07-10T00:20:12.305476932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:12.331976 containerd[1524]: time="2025-07-10T00:20:12.331369947Z" level=info msg="CreateContainer within sandbox \"5ae015316e864e63082bfb00ef906accdfd09932910944072beb29fd5e1c76a4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 00:20:12.344459 containerd[1524]: time="2025-07-10T00:20:12.344389350Z" level=info msg="Container da843e31f9fc76accbdc1ac6161306087f6e4b3f0478903274f46fbd95106bfa: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:20:12.361973 containerd[1524]: time="2025-07-10T00:20:12.361890462Z" level=info msg="CreateContainer within sandbox \"5ae015316e864e63082bfb00ef906accdfd09932910944072beb29fd5e1c76a4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"da843e31f9fc76accbdc1ac6161306087f6e4b3f0478903274f46fbd95106bfa\"" Jul 10 00:20:12.363652 containerd[1524]: time="2025-07-10T00:20:12.363500983Z" level=info msg="StartContainer for \"da843e31f9fc76accbdc1ac6161306087f6e4b3f0478903274f46fbd95106bfa\"" Jul 10 00:20:12.368799 containerd[1524]: time="2025-07-10T00:20:12.368729950Z" level=info msg="connecting to shim da843e31f9fc76accbdc1ac6161306087f6e4b3f0478903274f46fbd95106bfa" address="unix:///run/containerd/s/193de308b88515f265fad38388eb62d28eb6023584483d0d038029dc7032d6b9" protocol=ttrpc version=3 Jul 10 00:20:12.407293 
systemd[1]: Started cri-containerd-da843e31f9fc76accbdc1ac6161306087f6e4b3f0478903274f46fbd95106bfa.scope - libcontainer container da843e31f9fc76accbdc1ac6161306087f6e4b3f0478903274f46fbd95106bfa. Jul 10 00:20:12.492340 containerd[1524]: time="2025-07-10T00:20:12.492265717Z" level=info msg="StartContainer for \"da843e31f9fc76accbdc1ac6161306087f6e4b3f0478903274f46fbd95106bfa\" returns successfully" Jul 10 00:20:12.758909 kubelet[2739]: E0710 00:20:12.758825 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h245p" podUID="bfd653e1-5546-4bf6-9c11-78c2c2efc214" Jul 10 00:20:12.954213 kubelet[2739]: E0710 00:20:12.953467 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:13.032766 kubelet[2739]: E0710 00:20:13.032175 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.032766 kubelet[2739]: W0710 00:20:13.032208 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.032766 kubelet[2739]: E0710 00:20:13.032237 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:13.032766 kubelet[2739]: E0710 00:20:13.032773 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.033529 kubelet[2739]: W0710 00:20:13.032783 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.033529 kubelet[2739]: E0710 00:20:13.032795 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:13.033529 kubelet[2739]: E0710 00:20:13.033231 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.033529 kubelet[2739]: W0710 00:20:13.033246 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.033529 kubelet[2739]: E0710 00:20:13.033264 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:13.033841 kubelet[2739]: E0710 00:20:13.033529 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.033841 kubelet[2739]: W0710 00:20:13.033547 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.033841 kubelet[2739]: E0710 00:20:13.033558 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:13.033841 kubelet[2739]: E0710 00:20:13.033796 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.033841 kubelet[2739]: W0710 00:20:13.033805 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.033841 kubelet[2739]: E0710 00:20:13.033816 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:13.034166 kubelet[2739]: E0710 00:20:13.034141 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.034213 kubelet[2739]: W0710 00:20:13.034172 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.034213 kubelet[2739]: E0710 00:20:13.034188 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:13.034454 kubelet[2739]: E0710 00:20:13.034436 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.034454 kubelet[2739]: W0710 00:20:13.034449 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.034684 kubelet[2739]: E0710 00:20:13.034459 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:13.034684 kubelet[2739]: E0710 00:20:13.034664 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.034684 kubelet[2739]: W0710 00:20:13.034674 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.034763 kubelet[2739]: E0710 00:20:13.034684 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:13.034972 kubelet[2739]: E0710 00:20:13.034958 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.035026 kubelet[2739]: W0710 00:20:13.034972 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.035026 kubelet[2739]: E0710 00:20:13.034983 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:13.035282 kubelet[2739]: E0710 00:20:13.035266 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.035282 kubelet[2739]: W0710 00:20:13.035281 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.035359 kubelet[2739]: E0710 00:20:13.035294 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:13.035529 kubelet[2739]: E0710 00:20:13.035515 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.035582 kubelet[2739]: W0710 00:20:13.035567 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.035614 kubelet[2739]: E0710 00:20:13.035586 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:13.035852 kubelet[2739]: E0710 00:20:13.035837 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.035852 kubelet[2739]: W0710 00:20:13.035851 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.035941 kubelet[2739]: E0710 00:20:13.035863 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:13.036473 kubelet[2739]: E0710 00:20:13.036215 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.036473 kubelet[2739]: W0710 00:20:13.036238 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.036473 kubelet[2739]: E0710 00:20:13.036254 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:20:13.036791 kubelet[2739]: E0710 00:20:13.036552 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.036791 kubelet[2739]: W0710 00:20:13.036753 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.036791 kubelet[2739]: E0710 00:20:13.036777 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:20:13.037686 kubelet[2739]: E0710 00:20:13.037220 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:20:13.037686 kubelet[2739]: W0710 00:20:13.037233 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:20:13.037686 kubelet[2739]: E0710 00:20:13.037248 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 10 00:20:13.058424 kubelet[2739]: E0710 00:20:13.058377 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.058424 kubelet[2739]: W0710 00:20:13.058411 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.058424 kubelet[2739]: E0710 00:20:13.058439 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.058774 kubelet[2739]: E0710 00:20:13.058747 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.058774 kubelet[2739]: W0710 00:20:13.058767 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.059025 kubelet[2739]: E0710 00:20:13.058784 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.059291 kubelet[2739]: E0710 00:20:13.059266 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.059341 kubelet[2739]: W0710 00:20:13.059293 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.059341 kubelet[2739]: E0710 00:20:13.059316 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.059786 kubelet[2739]: E0710 00:20:13.059692 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.059786 kubelet[2739]: W0710 00:20:13.059711 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.059786 kubelet[2739]: E0710 00:20:13.059726 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.060261 kubelet[2739]: E0710 00:20:13.060170 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.060261 kubelet[2739]: W0710 00:20:13.060186 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.060261 kubelet[2739]: E0710 00:20:13.060201 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.060858 kubelet[2739]: E0710 00:20:13.060749 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.060858 kubelet[2739]: W0710 00:20:13.060766 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.060858 kubelet[2739]: E0710 00:20:13.060781 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.061539 kubelet[2739]: E0710 00:20:13.061430 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.061539 kubelet[2739]: W0710 00:20:13.061442 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.061539 kubelet[2739]: E0710 00:20:13.061455 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.062109 kubelet[2739]: E0710 00:20:13.061969 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.062109 kubelet[2739]: W0710 00:20:13.061986 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.062109 kubelet[2739]: E0710 00:20:13.062003 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.062670 kubelet[2739]: E0710 00:20:13.062650 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.062888 kubelet[2739]: W0710 00:20:13.062788 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.062888 kubelet[2739]: E0710 00:20:13.062811 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.063297 kubelet[2739]: E0710 00:20:13.063278 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.063435 kubelet[2739]: W0710 00:20:13.063397 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.063435 kubelet[2739]: E0710 00:20:13.063419 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.063876 kubelet[2739]: E0710 00:20:13.063835 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.063876 kubelet[2739]: W0710 00:20:13.063849 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.063876 kubelet[2739]: E0710 00:20:13.063861 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.064399 kubelet[2739]: E0710 00:20:13.064366 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.064399 kubelet[2739]: W0710 00:20:13.064387 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.064399 kubelet[2739]: E0710 00:20:13.064401 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.064659 kubelet[2739]: E0710 00:20:13.064645 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.064659 kubelet[2739]: W0710 00:20:13.064657 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.064735 kubelet[2739]: E0710 00:20:13.064668 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.065297 kubelet[2739]: E0710 00:20:13.065164 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.065297 kubelet[2739]: W0710 00:20:13.065180 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.065297 kubelet[2739]: E0710 00:20:13.065196 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.066080 kubelet[2739]: E0710 00:20:13.066049 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.066080 kubelet[2739]: W0710 00:20:13.066072 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.066197 kubelet[2739]: E0710 00:20:13.066090 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.066407 kubelet[2739]: E0710 00:20:13.066384 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.066407 kubelet[2739]: W0710 00:20:13.066402 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.066509 kubelet[2739]: E0710 00:20:13.066417 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.067355 kubelet[2739]: E0710 00:20:13.067154 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.067355 kubelet[2739]: W0710 00:20:13.067179 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.067355 kubelet[2739]: E0710 00:20:13.067198 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.067645 kubelet[2739]: E0710 00:20:13.067627 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:20:13.067805 kubelet[2739]: W0710 00:20:13.067739 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:20:13.067805 kubelet[2739]: E0710 00:20:13.067765 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:20:13.769319 containerd[1524]: time="2025-07-10T00:20:13.769232805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:20:13.770967 containerd[1524]: time="2025-07-10T00:20:13.770744265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 10 00:20:13.773240 containerd[1524]: time="2025-07-10T00:20:13.772905747Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:20:13.775473 containerd[1524]: time="2025-07-10T00:20:13.775417272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:20:13.777244 containerd[1524]: time="2025-07-10T00:20:13.776924964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.4729468s"
Jul 10 00:20:13.777244 containerd[1524]: time="2025-07-10T00:20:13.777165138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 10 00:20:13.784187 containerd[1524]: time="2025-07-10T00:20:13.783966148Z" level=info msg="CreateContainer within sandbox \"0dabfc64b617864aab6c7d6087634a02feffbf3dfd9a5f2eeb280bc425629346\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 10 00:20:13.803863 containerd[1524]: time="2025-07-10T00:20:13.802207352Z" level=info msg="Container b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:20:13.830050 containerd[1524]: time="2025-07-10T00:20:13.812908398Z" level=info msg="CreateContainer within sandbox \"0dabfc64b617864aab6c7d6087634a02feffbf3dfd9a5f2eeb280bc425629346\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1\""
Jul 10 00:20:13.831269 containerd[1524]: time="2025-07-10T00:20:13.831215954Z" level=info msg="StartContainer for \"b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1\""
Jul 10 00:20:13.833698 containerd[1524]: time="2025-07-10T00:20:13.833631610Z" level=info msg="connecting to shim b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1" address="unix:///run/containerd/s/382d1492a84cd50115579ff8c63e4c0c8b3ad6285e7875b0d747ce17f9fd3109" protocol=ttrpc version=3
Jul 10 00:20:13.881387 systemd[1]: Started cri-containerd-b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1.scope - libcontainer container b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1.
Jul 10 00:20:13.959144 kubelet[2739]: I0710 00:20:13.959113 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 10 00:20:13.960957 kubelet[2739]: E0710 00:20:13.960746 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:20:13.996214 containerd[1524]: time="2025-07-10T00:20:13.995693822Z" level=info msg="StartContainer for \"b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1\" returns successfully"
Jul 10 00:20:14.023568 systemd[1]: cri-containerd-b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1.scope: Deactivated successfully.
Jul 10 00:20:14.088370 containerd[1524]: time="2025-07-10T00:20:14.088049650Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1\" id:\"b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1\" pid:3433 exited_at:{seconds:1752106814 nanos:27635504}"
Jul 10 00:20:14.088370 containerd[1524]: time="2025-07-10T00:20:14.088066598Z" level=info msg="received exit event container_id:\"b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1\" id:\"b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1\" pid:3433 exited_at:{seconds:1752106814 nanos:27635504}"
Jul 10 00:20:14.135488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3e0a62ec878e80d959b2b96d92c2d1f4325b2a0c272fd189ee031b4d2992fa1-rootfs.mount: Deactivated successfully.
Jul 10 00:20:14.758692 kubelet[2739]: E0710 00:20:14.758610 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h245p" podUID="bfd653e1-5546-4bf6-9c11-78c2c2efc214"
Jul 10 00:20:14.969628 containerd[1524]: time="2025-07-10T00:20:14.969252547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 10 00:20:15.000997 kubelet[2739]: I0710 00:20:15.000869 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8667f8c99b-wksb7" podStartSLOduration=4.020431385 podStartE2EDuration="7.000820493s" podCreationTimestamp="2025-07-10 00:20:08 +0000 UTC" firstStartedPulling="2025-07-10 00:20:09.319684189 +0000 UTC m=+23.793172277" lastFinishedPulling="2025-07-10 00:20:12.300073302 +0000 UTC m=+26.773561385" observedRunningTime="2025-07-10 00:20:12.980476243 +0000 UTC m=+27.453964345" watchObservedRunningTime="2025-07-10 00:20:15.000820493 +0000 UTC m=+29.474308622"
Jul 10 00:20:16.759012 kubelet[2739]: E0710 00:20:16.758955 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h245p" podUID="bfd653e1-5546-4bf6-9c11-78c2c2efc214"
Jul 10 00:20:18.758326 kubelet[2739]: E0710 00:20:18.758248 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h245p" podUID="bfd653e1-5546-4bf6-9c11-78c2c2efc214"
Jul 10 00:20:19.516959 containerd[1524]: time="2025-07-10T00:20:19.516560250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:20:19.518459 containerd[1524]: time="2025-07-10T00:20:19.517977142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 10 00:20:19.521835 containerd[1524]: time="2025-07-10T00:20:19.521751208Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:20:19.527923 containerd[1524]: time="2025-07-10T00:20:19.527797656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:20:19.528857 containerd[1524]: time="2025-07-10T00:20:19.528748864Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.558022609s"
Jul 10 00:20:19.529099 containerd[1524]: time="2025-07-10T00:20:19.528966379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 10 00:20:19.540005 containerd[1524]: time="2025-07-10T00:20:19.539879422Z" level=info msg="CreateContainer within sandbox \"0dabfc64b617864aab6c7d6087634a02feffbf3dfd9a5f2eeb280bc425629346\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 10 00:20:19.561501 containerd[1524]: time="2025-07-10T00:20:19.561362220Z" level=info msg="Container 97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:20:19.572652 containerd[1524]: time="2025-07-10T00:20:19.572555357Z" level=info msg="CreateContainer within sandbox \"0dabfc64b617864aab6c7d6087634a02feffbf3dfd9a5f2eeb280bc425629346\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe\""
Jul 10 00:20:19.574044 containerd[1524]: time="2025-07-10T00:20:19.574002760Z" level=info msg="StartContainer for \"97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe\""
Jul 10 00:20:19.577078 containerd[1524]: time="2025-07-10T00:20:19.576792732Z" level=info msg="connecting to shim 97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe" address="unix:///run/containerd/s/382d1492a84cd50115579ff8c63e4c0c8b3ad6285e7875b0d747ce17f9fd3109" protocol=ttrpc version=3
Jul 10 00:20:19.617419 systemd[1]: Started cri-containerd-97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe.scope - libcontainer container 97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe.
Jul 10 00:20:19.705535 containerd[1524]: time="2025-07-10T00:20:19.705365553Z" level=info msg="StartContainer for \"97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe\" returns successfully"
Jul 10 00:20:20.503565 systemd[1]: cri-containerd-97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe.scope: Deactivated successfully.
Jul 10 00:20:20.504561 systemd[1]: cri-containerd-97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe.scope: Consumed 769ms CPU time, 165.3M memory peak, 12.9M read from disk, 171.2M written to disk.
Jul 10 00:20:20.534000 containerd[1524]: time="2025-07-10T00:20:20.532985990Z" level=info msg="received exit event container_id:\"97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe\" id:\"97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe\" pid:3492 exited_at:{seconds:1752106820 nanos:506200366}"
Jul 10 00:20:20.536062 containerd[1524]: time="2025-07-10T00:20:20.535998142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe\" id:\"97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe\" pid:3492 exited_at:{seconds:1752106820 nanos:506200366}"
Jul 10 00:20:20.590126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97b36a185b651cbe9c19746a025f11aae0cd9dff90fa46d670d30eb311f86bfe-rootfs.mount: Deactivated successfully.
Jul 10 00:20:20.594720 kubelet[2739]: I0710 00:20:20.594652 2739 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 10 00:20:20.696672 systemd[1]: Created slice kubepods-burstable-pod6a837f3f_c164_4e44_99ea_ceab22be88a6.slice - libcontainer container kubepods-burstable-pod6a837f3f_c164_4e44_99ea_ceab22be88a6.slice.
Jul 10 00:20:20.750925 kubelet[2739]: I0710 00:20:20.750870 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9r98\" (UniqueName: \"kubernetes.io/projected/6a837f3f-c164-4e44-99ea-ceab22be88a6-kube-api-access-j9r98\") pod \"coredns-674b8bbfcf-k8gwz\" (UID: \"6a837f3f-c164-4e44-99ea-ceab22be88a6\") " pod="kube-system/coredns-674b8bbfcf-k8gwz"
Jul 10 00:20:20.750925 kubelet[2739]: I0710 00:20:20.751170 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvvqq\" (UniqueName: \"kubernetes.io/projected/d81c931c-4dd6-41a0-b1bb-333fcc77f26a-kube-api-access-pvvqq\") pod \"calico-kube-controllers-9db756b89-q5p8k\" (UID: \"d81c931c-4dd6-41a0-b1bb-333fcc77f26a\") " pod="calico-system/calico-kube-controllers-9db756b89-q5p8k"
Jul 10 00:20:20.750925 kubelet[2739]: I0710 00:20:20.751239 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d81c931c-4dd6-41a0-b1bb-333fcc77f26a-tigera-ca-bundle\") pod \"calico-kube-controllers-9db756b89-q5p8k\" (UID: \"d81c931c-4dd6-41a0-b1bb-333fcc77f26a\") " pod="calico-system/calico-kube-controllers-9db756b89-q5p8k"
Jul 10 00:20:20.751596 kubelet[2739]: I0710 00:20:20.751273 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a837f3f-c164-4e44-99ea-ceab22be88a6-config-volume\") pod \"coredns-674b8bbfcf-k8gwz\" (UID: \"6a837f3f-c164-4e44-99ea-ceab22be88a6\") " pod="kube-system/coredns-674b8bbfcf-k8gwz"
Jul 10 00:20:20.752268 systemd[1]: Created slice kubepods-besteffort-pod7b32e0ba_807d_4e45_b247_49987273641d.slice - libcontainer container kubepods-besteffort-pod7b32e0ba_807d_4e45_b247_49987273641d.slice.
Jul 10 00:20:20.769891 systemd[1]: Created slice kubepods-burstable-pod64618632_ac6c_43c3_8ba4_5661b7f8a1d6.slice - libcontainer container kubepods-burstable-pod64618632_ac6c_43c3_8ba4_5661b7f8a1d6.slice.
Jul 10 00:20:20.795706 systemd[1]: Created slice kubepods-besteffort-pod67fce285_476a_4570_a058_cfa725f341fd.slice - libcontainer container kubepods-besteffort-pod67fce285_476a_4570_a058_cfa725f341fd.slice.
Jul 10 00:20:20.817260 systemd[1]: Created slice kubepods-besteffort-podfdfb1967_1ea6_4004_9de6_57714a08b7b9.slice - libcontainer container kubepods-besteffort-podfdfb1967_1ea6_4004_9de6_57714a08b7b9.slice.
Jul 10 00:20:20.829990 systemd[1]: Created slice kubepods-besteffort-podd81c931c_4dd6_41a0_b1bb_333fcc77f26a.slice - libcontainer container kubepods-besteffort-podd81c931c_4dd6_41a0_b1bb_333fcc77f26a.slice.
Jul 10 00:20:20.845915 systemd[1]: Created slice kubepods-besteffort-pod83ac65be_5d2a_492a_a383_0309423e2826.slice - libcontainer container kubepods-besteffort-pod83ac65be_5d2a_492a_a383_0309423e2826.slice.
Jul 10 00:20:20.851615 kubelet[2739]: I0710 00:20:20.851559 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74477b64-adde-429b-818b-b2d22cff585f-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-rd4jt\" (UID: \"74477b64-adde-429b-818b-b2d22cff585f\") " pod="calico-system/goldmane-768f4c5c69-rd4jt"
Jul 10 00:20:20.851615 kubelet[2739]: I0710 00:20:20.851604 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7m5r\" (UniqueName: \"kubernetes.io/projected/74477b64-adde-429b-818b-b2d22cff585f-kube-api-access-m7m5r\") pod \"goldmane-768f4c5c69-rd4jt\" (UID: \"74477b64-adde-429b-818b-b2d22cff585f\") " pod="calico-system/goldmane-768f4c5c69-rd4jt"
Jul 10 00:20:20.851615 kubelet[2739]: I0710 00:20:20.851624 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/67fce285-476a-4570-a058-cfa725f341fd-whisker-backend-key-pair\") pod \"whisker-7bf8bcb448-md65b\" (UID: \"67fce285-476a-4570-a058-cfa725f341fd\") " pod="calico-system/whisker-7bf8bcb448-md65b"
Jul 10 00:20:20.851872 kubelet[2739]: I0710 00:20:20.851644 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwx2\" (UniqueName: \"kubernetes.io/projected/67fce285-476a-4570-a058-cfa725f341fd-kube-api-access-sjwx2\") pod \"whisker-7bf8bcb448-md65b\" (UID: \"67fce285-476a-4570-a058-cfa725f341fd\") " pod="calico-system/whisker-7bf8bcb448-md65b"
Jul 10 00:20:20.851872 kubelet[2739]: I0710 00:20:20.851672 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64618632-ac6c-43c3-8ba4-5661b7f8a1d6-config-volume\") pod \"coredns-674b8bbfcf-6dsvs\" (UID: \"64618632-ac6c-43c3-8ba4-5661b7f8a1d6\") " pod="kube-system/coredns-674b8bbfcf-6dsvs"
Jul 10 00:20:20.851872 kubelet[2739]: I0710 00:20:20.851687 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67fce285-476a-4570-a058-cfa725f341fd-whisker-ca-bundle\") pod \"whisker-7bf8bcb448-md65b\" (UID: \"67fce285-476a-4570-a058-cfa725f341fd\") " pod="calico-system/whisker-7bf8bcb448-md65b"
Jul 10 00:20:20.851872 kubelet[2739]: I0710 00:20:20.851706 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fdfb1967-1ea6-4004-9de6-57714a08b7b9-calico-apiserver-certs\") pod \"calico-apiserver-54c749cb9c-7bmzr\" (UID: \"fdfb1967-1ea6-4004-9de6-57714a08b7b9\") " pod="calico-apiserver/calico-apiserver-54c749cb9c-7bmzr"
Jul 10 00:20:20.851872 kubelet[2739]: I0710 00:20:20.851723 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5prp4\" (UniqueName: \"kubernetes.io/projected/fdfb1967-1ea6-4004-9de6-57714a08b7b9-kube-api-access-5prp4\") pod \"calico-apiserver-54c749cb9c-7bmzr\" (UID: \"fdfb1967-1ea6-4004-9de6-57714a08b7b9\") " pod="calico-apiserver/calico-apiserver-54c749cb9c-7bmzr"
Jul 10 00:20:20.852136 kubelet[2739]: I0710 00:20:20.851742 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/83ac65be-5d2a-492a-a383-0309423e2826-calico-apiserver-certs\") pod \"calico-apiserver-c6cfbdf59-42j5g\" (UID: \"83ac65be-5d2a-492a-a383-0309423e2826\") " pod="calico-apiserver/calico-apiserver-c6cfbdf59-42j5g"
Jul 10 00:20:20.852136 kubelet[2739]: I0710 00:20:20.851759 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdlb7\" (UniqueName: \"kubernetes.io/projected/83ac65be-5d2a-492a-a383-0309423e2826-kube-api-access-tdlb7\") pod \"calico-apiserver-c6cfbdf59-42j5g\" (UID: \"83ac65be-5d2a-492a-a383-0309423e2826\") " pod="calico-apiserver/calico-apiserver-c6cfbdf59-42j5g"
Jul 10 00:20:20.852136 kubelet[2739]: I0710 00:20:20.851777 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b32e0ba-807d-4e45-b247-49987273641d-calico-apiserver-certs\") pod \"calico-apiserver-54c749cb9c-w2k2c\" (UID: \"7b32e0ba-807d-4e45-b247-49987273641d\") " pod="calico-apiserver/calico-apiserver-54c749cb9c-w2k2c"
Jul 10 00:20:20.852136 kubelet[2739]: I0710 00:20:20.851794 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m94p\" (UniqueName: \"kubernetes.io/projected/7b32e0ba-807d-4e45-b247-49987273641d-kube-api-access-9m94p\") pod \"calico-apiserver-54c749cb9c-w2k2c\" (UID: \"7b32e0ba-807d-4e45-b247-49987273641d\") " pod="calico-apiserver/calico-apiserver-54c749cb9c-w2k2c"
Jul 10 00:20:20.852136 kubelet[2739]: I0710 00:20:20.851812 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwwkg\" (UniqueName: \"kubernetes.io/projected/64618632-ac6c-43c3-8ba4-5661b7f8a1d6-kube-api-access-jwwkg\") pod \"coredns-674b8bbfcf-6dsvs\" (UID: \"64618632-ac6c-43c3-8ba4-5661b7f8a1d6\") " pod="kube-system/coredns-674b8bbfcf-6dsvs"
Jul 10 00:20:20.852363 kubelet[2739]: I0710 00:20:20.851827 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74477b64-adde-429b-818b-b2d22cff585f-config\") pod \"goldmane-768f4c5c69-rd4jt\" (UID: \"74477b64-adde-429b-818b-b2d22cff585f\") " pod="calico-system/goldmane-768f4c5c69-rd4jt"
Jul 10 00:20:20.852363 kubelet[2739]: I0710 00:20:20.851879 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/74477b64-adde-429b-818b-b2d22cff585f-goldmane-key-pair\") pod \"goldmane-768f4c5c69-rd4jt\" (UID: \"74477b64-adde-429b-818b-b2d22cff585f\") " pod="calico-system/goldmane-768f4c5c69-rd4jt"
Jul 10 00:20:20.862234 systemd[1]: Created slice kubepods-besteffort-pod74477b64_adde_429b_818b_b2d22cff585f.slice - libcontainer container kubepods-besteffort-pod74477b64_adde_429b_818b_b2d22cff585f.slice.
Jul 10 00:20:20.878921 systemd[1]: Created slice kubepods-besteffort-podbfd653e1_5546_4bf6_9c11_78c2c2efc214.slice - libcontainer container kubepods-besteffort-podbfd653e1_5546_4bf6_9c11_78c2c2efc214.slice.
Jul 10 00:20:20.893409 containerd[1524]: time="2025-07-10T00:20:20.892557695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h245p,Uid:bfd653e1-5546-4bf6-9c11-78c2c2efc214,Namespace:calico-system,Attempt:0,}"
Jul 10 00:20:21.008943 kubelet[2739]: E0710 00:20:21.008871 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:20:21.029733 containerd[1524]: time="2025-07-10T00:20:21.028448900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k8gwz,Uid:6a837f3f-c164-4e44-99ea-ceab22be88a6,Namespace:kube-system,Attempt:0,}"
Jul 10 00:20:21.113356 containerd[1524]: time="2025-07-10T00:20:21.112913239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bf8bcb448-md65b,Uid:67fce285-476a-4570-a058-cfa725f341fd,Namespace:calico-system,Attempt:0,}"
Jul 10 00:20:21.130420 containerd[1524]: time="2025-07-10T00:20:21.130356658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c749cb9c-7bmzr,Uid:fdfb1967-1ea6-4004-9de6-57714a08b7b9,Namespace:calico-apiserver,Attempt:0,}"
Jul 10 00:20:21.141667 containerd[1524]: time="2025-07-10T00:20:21.141192859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9db756b89-q5p8k,Uid:d81c931c-4dd6-41a0-b1bb-333fcc77f26a,Namespace:calico-system,Attempt:0,}"
Jul 10 00:20:21.157363 containerd[1524]: time="2025-07-10T00:20:21.154865542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 10 00:20:21.160012 containerd[1524]: time="2025-07-10T00:20:21.158764693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6cfbdf59-42j5g,Uid:83ac65be-5d2a-492a-a383-0309423e2826,Namespace:calico-apiserver,Attempt:0,}"
Jul 10 00:20:21.174815 containerd[1524]: time="2025-07-10T00:20:21.174747458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-rd4jt,Uid:74477b64-adde-429b-818b-b2d22cff585f,Namespace:calico-system,Attempt:0,}"
Jul 10 00:20:21.383973 kubelet[2739]: E0710 00:20:21.383316 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:20:21.390004 containerd[1524]: time="2025-07-10T00:20:21.388418988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c749cb9c-w2k2c,Uid:7b32e0ba-807d-4e45-b247-49987273641d,Namespace:calico-apiserver,Attempt:0,}"
Jul 10 00:20:21.390004 containerd[1524]: time="2025-07-10T00:20:21.389424879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6dsvs,Uid:64618632-ac6c-43c3-8ba4-5661b7f8a1d6,Namespace:kube-system,Attempt:0,}"
Jul 10 00:20:21.754136 containerd[1524]: time="2025-07-10T00:20:21.753552504Z" level=error msg="Failed to destroy network for sandbox \"116ac5e05afbfb9b0ed03a5ff6b20d2da63456f0f99695172b50c7637e2514a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 10 00:20:21.758511 systemd[1]: run-netns-cni\x2d019fef95\x2d8565\x2d6bdb\x2d36fa\x2def870c0a493c.mount: Deactivated successfully.
Jul 10 00:20:21.767310 containerd[1524]: time="2025-07-10T00:20:21.767261038Z" level=error msg="Failed to destroy network for sandbox \"c9803d01d7f90e597797236ce6b822a005bf34251a928c587167b357f6ef1062\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 10 00:20:21.772648 systemd[1]: run-netns-cni\x2d1b96c9a8\x2ddcf0\x2dab1b\x2d2a94\x2dbc831888181c.mount: Deactivated successfully.
Jul 10 00:20:21.776172 containerd[1524]: time="2025-07-10T00:20:21.771550020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c749cb9c-7bmzr,Uid:fdfb1967-1ea6-4004-9de6-57714a08b7b9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"116ac5e05afbfb9b0ed03a5ff6b20d2da63456f0f99695172b50c7637e2514a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 10 00:20:21.776436 kubelet[2739]: E0710 00:20:21.776354 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"116ac5e05afbfb9b0ed03a5ff6b20d2da63456f0f99695172b50c7637e2514a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 10 00:20:21.777286 kubelet[2739]: E0710 00:20:21.776475 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"116ac5e05afbfb9b0ed03a5ff6b20d2da63456f0f99695172b50c7637e2514a9\": plugin type=\"calico\" failed (add): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54c749cb9c-7bmzr" Jul 10 00:20:21.777286 kubelet[2739]: E0710 00:20:21.776504 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"116ac5e05afbfb9b0ed03a5ff6b20d2da63456f0f99695172b50c7637e2514a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54c749cb9c-7bmzr" Jul 10 00:20:21.777286 kubelet[2739]: E0710 00:20:21.776587 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54c749cb9c-7bmzr_calico-apiserver(fdfb1967-1ea6-4004-9de6-57714a08b7b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54c749cb9c-7bmzr_calico-apiserver(fdfb1967-1ea6-4004-9de6-57714a08b7b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"116ac5e05afbfb9b0ed03a5ff6b20d2da63456f0f99695172b50c7637e2514a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54c749cb9c-7bmzr" podUID="fdfb1967-1ea6-4004-9de6-57714a08b7b9" Jul 10 00:20:21.786004 containerd[1524]: time="2025-07-10T00:20:21.785904847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k8gwz,Uid:6a837f3f-c164-4e44-99ea-ceab22be88a6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9803d01d7f90e597797236ce6b822a005bf34251a928c587167b357f6ef1062\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.786568 kubelet[2739]: E0710 00:20:21.786233 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9803d01d7f90e597797236ce6b822a005bf34251a928c587167b357f6ef1062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.786568 kubelet[2739]: E0710 00:20:21.786299 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9803d01d7f90e597797236ce6b822a005bf34251a928c587167b357f6ef1062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k8gwz" Jul 10 00:20:21.786568 kubelet[2739]: E0710 00:20:21.786323 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9803d01d7f90e597797236ce6b822a005bf34251a928c587167b357f6ef1062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k8gwz" Jul 10 00:20:21.786708 kubelet[2739]: E0710 00:20:21.786380 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-k8gwz_kube-system(6a837f3f-c164-4e44-99ea-ceab22be88a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-k8gwz_kube-system(6a837f3f-c164-4e44-99ea-ceab22be88a6)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"c9803d01d7f90e597797236ce6b822a005bf34251a928c587167b357f6ef1062\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-k8gwz" podUID="6a837f3f-c164-4e44-99ea-ceab22be88a6" Jul 10 00:20:21.835324 containerd[1524]: time="2025-07-10T00:20:21.834759850Z" level=error msg="Failed to destroy network for sandbox \"71064ff2128830e5a4120e88f675477cd702590e062307fcdc0fb01236de652f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.841588 systemd[1]: run-netns-cni\x2d91e1c737\x2d31bf\x2d13ce\x2d17a9\x2de29b509eec06.mount: Deactivated successfully. Jul 10 00:20:21.843770 containerd[1524]: time="2025-07-10T00:20:21.843692867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h245p,Uid:bfd653e1-5546-4bf6-9c11-78c2c2efc214,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71064ff2128830e5a4120e88f675477cd702590e062307fcdc0fb01236de652f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.846990 kubelet[2739]: E0710 00:20:21.844531 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71064ff2128830e5a4120e88f675477cd702590e062307fcdc0fb01236de652f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.846990 kubelet[2739]: E0710 00:20:21.844606 2739 kuberuntime_sandbox.go:70] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71064ff2128830e5a4120e88f675477cd702590e062307fcdc0fb01236de652f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h245p" Jul 10 00:20:21.846990 kubelet[2739]: E0710 00:20:21.844760 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71064ff2128830e5a4120e88f675477cd702590e062307fcdc0fb01236de652f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h245p" Jul 10 00:20:21.847191 kubelet[2739]: E0710 00:20:21.844875 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h245p_calico-system(bfd653e1-5546-4bf6-9c11-78c2c2efc214)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h245p_calico-system(bfd653e1-5546-4bf6-9c11-78c2c2efc214)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71064ff2128830e5a4120e88f675477cd702590e062307fcdc0fb01236de652f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h245p" podUID="bfd653e1-5546-4bf6-9c11-78c2c2efc214" Jul 10 00:20:21.888203 containerd[1524]: time="2025-07-10T00:20:21.887109613Z" level=error msg="Failed to destroy network for sandbox \"4f212c641741a83e0651dca5738ed405f3fad418dc5e852a56296d0f517fe6b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.893336 systemd[1]: run-netns-cni\x2ddda245ef\x2dfda1\x2da629\x2d0fcd\x2dd8f1d1d3c5be.mount: Deactivated successfully. Jul 10 00:20:21.900454 containerd[1524]: time="2025-07-10T00:20:21.898980271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bf8bcb448-md65b,Uid:67fce285-476a-4570-a058-cfa725f341fd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f212c641741a83e0651dca5738ed405f3fad418dc5e852a56296d0f517fe6b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.900703 kubelet[2739]: E0710 00:20:21.899321 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f212c641741a83e0651dca5738ed405f3fad418dc5e852a56296d0f517fe6b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.900703 kubelet[2739]: E0710 00:20:21.899398 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f212c641741a83e0651dca5738ed405f3fad418dc5e852a56296d0f517fe6b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bf8bcb448-md65b" Jul 10 00:20:21.900703 kubelet[2739]: E0710 00:20:21.899435 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f212c641741a83e0651dca5738ed405f3fad418dc5e852a56296d0f517fe6b9\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bf8bcb448-md65b" Jul 10 00:20:21.901208 kubelet[2739]: E0710 00:20:21.899513 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bf8bcb448-md65b_calico-system(67fce285-476a-4570-a058-cfa725f341fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bf8bcb448-md65b_calico-system(67fce285-476a-4570-a058-cfa725f341fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f212c641741a83e0651dca5738ed405f3fad418dc5e852a56296d0f517fe6b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bf8bcb448-md65b" podUID="67fce285-476a-4570-a058-cfa725f341fd" Jul 10 00:20:21.916464 containerd[1524]: time="2025-07-10T00:20:21.916131239Z" level=error msg="Failed to destroy network for sandbox \"97d4af10f43c1a8fa6f3168b9d5cd9ca71318aab605702e6b7806d7fc721a42d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.919364 containerd[1524]: time="2025-07-10T00:20:21.919085405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9db756b89-q5p8k,Uid:d81c931c-4dd6-41a0-b1bb-333fcc77f26a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d4af10f43c1a8fa6f3168b9d5cd9ca71318aab605702e6b7806d7fc721a42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 10 00:20:21.920774 kubelet[2739]: E0710 00:20:21.920159 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d4af10f43c1a8fa6f3168b9d5cd9ca71318aab605702e6b7806d7fc721a42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.920774 kubelet[2739]: E0710 00:20:21.920229 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d4af10f43c1a8fa6f3168b9d5cd9ca71318aab605702e6b7806d7fc721a42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9db756b89-q5p8k" Jul 10 00:20:21.920774 kubelet[2739]: E0710 00:20:21.920254 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97d4af10f43c1a8fa6f3168b9d5cd9ca71318aab605702e6b7806d7fc721a42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9db756b89-q5p8k" Jul 10 00:20:21.921859 kubelet[2739]: E0710 00:20:21.920317 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9db756b89-q5p8k_calico-system(d81c931c-4dd6-41a0-b1bb-333fcc77f26a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9db756b89-q5p8k_calico-system(d81c931c-4dd6-41a0-b1bb-333fcc77f26a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"97d4af10f43c1a8fa6f3168b9d5cd9ca71318aab605702e6b7806d7fc721a42d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9db756b89-q5p8k" podUID="d81c931c-4dd6-41a0-b1bb-333fcc77f26a" Jul 10 00:20:21.944744 containerd[1524]: time="2025-07-10T00:20:21.944585709Z" level=error msg="Failed to destroy network for sandbox \"caed7cdd5303561755cd2811ceb620f26107e9b8307b004c228530f247287b12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.947165 containerd[1524]: time="2025-07-10T00:20:21.947100738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-rd4jt,Uid:74477b64-adde-429b-818b-b2d22cff585f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"caed7cdd5303561755cd2811ceb620f26107e9b8307b004c228530f247287b12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.948324 kubelet[2739]: E0710 00:20:21.948192 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caed7cdd5303561755cd2811ceb620f26107e9b8307b004c228530f247287b12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.948324 kubelet[2739]: E0710 00:20:21.948281 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"caed7cdd5303561755cd2811ceb620f26107e9b8307b004c228530f247287b12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-rd4jt" Jul 10 00:20:21.948324 kubelet[2739]: E0710 00:20:21.948309 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caed7cdd5303561755cd2811ceb620f26107e9b8307b004c228530f247287b12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-rd4jt" Jul 10 00:20:21.949769 kubelet[2739]: E0710 00:20:21.948380 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-rd4jt_calico-system(74477b64-adde-429b-818b-b2d22cff585f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-rd4jt_calico-system(74477b64-adde-429b-818b-b2d22cff585f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"caed7cdd5303561755cd2811ceb620f26107e9b8307b004c228530f247287b12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-rd4jt" podUID="74477b64-adde-429b-818b-b2d22cff585f" Jul 10 00:20:21.951165 containerd[1524]: time="2025-07-10T00:20:21.951002657Z" level=error msg="Failed to destroy network for sandbox \"28a5744a7353dfce8164ad09b9096a2c4c58a6beee46751900a5666a3bdbb5f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 
00:20:21.953525 containerd[1524]: time="2025-07-10T00:20:21.952948522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6dsvs,Uid:64618632-ac6c-43c3-8ba4-5661b7f8a1d6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a5744a7353dfce8164ad09b9096a2c4c58a6beee46751900a5666a3bdbb5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.954041 kubelet[2739]: E0710 00:20:21.953553 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a5744a7353dfce8164ad09b9096a2c4c58a6beee46751900a5666a3bdbb5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.954041 kubelet[2739]: E0710 00:20:21.953653 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a5744a7353dfce8164ad09b9096a2c4c58a6beee46751900a5666a3bdbb5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6dsvs" Jul 10 00:20:21.954041 kubelet[2739]: E0710 00:20:21.953683 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a5744a7353dfce8164ad09b9096a2c4c58a6beee46751900a5666a3bdbb5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6dsvs" Jul 
10 00:20:21.954190 kubelet[2739]: E0710 00:20:21.953764 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6dsvs_kube-system(64618632-ac6c-43c3-8ba4-5661b7f8a1d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6dsvs_kube-system(64618632-ac6c-43c3-8ba4-5661b7f8a1d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28a5744a7353dfce8164ad09b9096a2c4c58a6beee46751900a5666a3bdbb5f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6dsvs" podUID="64618632-ac6c-43c3-8ba4-5661b7f8a1d6" Jul 10 00:20:21.962486 containerd[1524]: time="2025-07-10T00:20:21.962420278Z" level=error msg="Failed to destroy network for sandbox \"28040b2d0cfa9be26ef623bd34f6945eaf91d3f119eb1332d2e3a421043e9d97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.964754 containerd[1524]: time="2025-07-10T00:20:21.964693625Z" level=error msg="Failed to destroy network for sandbox \"f6e432cdbf28d88f9cf37d561dae28733e1ad783484cbba132d5d867c75d1362\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.965620 containerd[1524]: time="2025-07-10T00:20:21.964698554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6cfbdf59-42j5g,Uid:83ac65be-5d2a-492a-a383-0309423e2826,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28040b2d0cfa9be26ef623bd34f6945eaf91d3f119eb1332d2e3a421043e9d97\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.965912 kubelet[2739]: E0710 00:20:21.965335 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28040b2d0cfa9be26ef623bd34f6945eaf91d3f119eb1332d2e3a421043e9d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.965912 kubelet[2739]: E0710 00:20:21.965401 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28040b2d0cfa9be26ef623bd34f6945eaf91d3f119eb1332d2e3a421043e9d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6cfbdf59-42j5g" Jul 10 00:20:21.965912 kubelet[2739]: E0710 00:20:21.965423 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28040b2d0cfa9be26ef623bd34f6945eaf91d3f119eb1332d2e3a421043e9d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6cfbdf59-42j5g" Jul 10 00:20:21.966433 kubelet[2739]: E0710 00:20:21.965506 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6cfbdf59-42j5g_calico-apiserver(83ac65be-5d2a-492a-a383-0309423e2826)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-c6cfbdf59-42j5g_calico-apiserver(83ac65be-5d2a-492a-a383-0309423e2826)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28040b2d0cfa9be26ef623bd34f6945eaf91d3f119eb1332d2e3a421043e9d97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6cfbdf59-42j5g" podUID="83ac65be-5d2a-492a-a383-0309423e2826" Jul 10 00:20:21.968093 containerd[1524]: time="2025-07-10T00:20:21.967987678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c749cb9c-w2k2c,Uid:7b32e0ba-807d-4e45-b247-49987273641d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e432cdbf28d88f9cf37d561dae28733e1ad783484cbba132d5d867c75d1362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.968419 kubelet[2739]: E0710 00:20:21.968377 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e432cdbf28d88f9cf37d561dae28733e1ad783484cbba132d5d867c75d1362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:20:21.968480 kubelet[2739]: E0710 00:20:21.968452 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e432cdbf28d88f9cf37d561dae28733e1ad783484cbba132d5d867c75d1362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-54c749cb9c-w2k2c" Jul 10 00:20:21.968530 kubelet[2739]: E0710 00:20:21.968480 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e432cdbf28d88f9cf37d561dae28733e1ad783484cbba132d5d867c75d1362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54c749cb9c-w2k2c" Jul 10 00:20:21.968636 kubelet[2739]: E0710 00:20:21.968590 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54c749cb9c-w2k2c_calico-apiserver(7b32e0ba-807d-4e45-b247-49987273641d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54c749cb9c-w2k2c_calico-apiserver(7b32e0ba-807d-4e45-b247-49987273641d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6e432cdbf28d88f9cf37d561dae28733e1ad783484cbba132d5d867c75d1362\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54c749cb9c-w2k2c" podUID="7b32e0ba-807d-4e45-b247-49987273641d" Jul 10 00:20:22.593096 systemd[1]: run-netns-cni\x2dcfd67485\x2d3a5f\x2d1dcb\x2d7e92\x2d2df5b8e191ab.mount: Deactivated successfully. Jul 10 00:20:22.593267 systemd[1]: run-netns-cni\x2dad9dc50e\x2d9151\x2d235a\x2d51da\x2d34a35ff5c831.mount: Deactivated successfully. Jul 10 00:20:22.593364 systemd[1]: run-netns-cni\x2df55c8099\x2de051\x2dfc02\x2d743e\x2df6c717d44905.mount: Deactivated successfully. Jul 10 00:20:22.593444 systemd[1]: run-netns-cni\x2dda4f024b\x2dbb40\x2dc0f6\x2dbf70\x2d724227868f88.mount: Deactivated successfully. 
Jul 10 00:20:22.593529 systemd[1]: run-netns-cni\x2d78ec1bf2\x2df523\x2d3749\x2dff41\x2d518307906b38.mount: Deactivated successfully. Jul 10 00:20:28.347046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78311932.mount: Deactivated successfully. Jul 10 00:20:28.384201 containerd[1524]: time="2025-07-10T00:20:28.384117106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:28.385293 containerd[1524]: time="2025-07-10T00:20:28.385083688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 10 00:20:28.386734 containerd[1524]: time="2025-07-10T00:20:28.386686703Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:28.389916 containerd[1524]: time="2025-07-10T00:20:28.389835805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:28.391197 containerd[1524]: time="2025-07-10T00:20:28.390779841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.235470821s" Jul 10 00:20:28.391197 containerd[1524]: time="2025-07-10T00:20:28.390852119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 10 00:20:28.435322 containerd[1524]: time="2025-07-10T00:20:28.435180906Z" level=info 
msg="CreateContainer within sandbox \"0dabfc64b617864aab6c7d6087634a02feffbf3dfd9a5f2eeb280bc425629346\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 00:20:28.451965 containerd[1524]: time="2025-07-10T00:20:28.450155073Z" level=info msg="Container 0380277decbcc23452914867d1e6782fc2bfc6cefa8ba92e8808a3b7e6658b2e: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:20:28.469319 containerd[1524]: time="2025-07-10T00:20:28.469116694Z" level=info msg="CreateContainer within sandbox \"0dabfc64b617864aab6c7d6087634a02feffbf3dfd9a5f2eeb280bc425629346\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0380277decbcc23452914867d1e6782fc2bfc6cefa8ba92e8808a3b7e6658b2e\"" Jul 10 00:20:28.471115 containerd[1524]: time="2025-07-10T00:20:28.470539914Z" level=info msg="StartContainer for \"0380277decbcc23452914867d1e6782fc2bfc6cefa8ba92e8808a3b7e6658b2e\"" Jul 10 00:20:28.478034 containerd[1524]: time="2025-07-10T00:20:28.477902827Z" level=info msg="connecting to shim 0380277decbcc23452914867d1e6782fc2bfc6cefa8ba92e8808a3b7e6658b2e" address="unix:///run/containerd/s/382d1492a84cd50115579ff8c63e4c0c8b3ad6285e7875b0d747ce17f9fd3109" protocol=ttrpc version=3 Jul 10 00:20:28.615591 systemd[1]: Started cri-containerd-0380277decbcc23452914867d1e6782fc2bfc6cefa8ba92e8808a3b7e6658b2e.scope - libcontainer container 0380277decbcc23452914867d1e6782fc2bfc6cefa8ba92e8808a3b7e6658b2e. Jul 10 00:20:28.701668 containerd[1524]: time="2025-07-10T00:20:28.700291476Z" level=info msg="StartContainer for \"0380277decbcc23452914867d1e6782fc2bfc6cefa8ba92e8808a3b7e6658b2e\" returns successfully" Jul 10 00:20:28.945524 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 00:20:28.945736 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 10 00:20:29.330791 kubelet[2739]: I0710 00:20:29.329761 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67fce285-476a-4570-a058-cfa725f341fd-whisker-ca-bundle\") pod \"67fce285-476a-4570-a058-cfa725f341fd\" (UID: \"67fce285-476a-4570-a058-cfa725f341fd\") " Jul 10 00:20:29.333304 kubelet[2739]: I0710 00:20:29.333242 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjwx2\" (UniqueName: \"kubernetes.io/projected/67fce285-476a-4570-a058-cfa725f341fd-kube-api-access-sjwx2\") pod \"67fce285-476a-4570-a058-cfa725f341fd\" (UID: \"67fce285-476a-4570-a058-cfa725f341fd\") " Jul 10 00:20:29.333786 kubelet[2739]: I0710 00:20:29.333750 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/67fce285-476a-4570-a058-cfa725f341fd-whisker-backend-key-pair\") pod \"67fce285-476a-4570-a058-cfa725f341fd\" (UID: \"67fce285-476a-4570-a058-cfa725f341fd\") " Jul 10 00:20:29.335387 kubelet[2739]: I0710 00:20:29.335322 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67fce285-476a-4570-a058-cfa725f341fd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "67fce285-476a-4570-a058-cfa725f341fd" (UID: "67fce285-476a-4570-a058-cfa725f341fd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:20:29.356484 kubelet[2739]: I0710 00:20:29.356161 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fce285-476a-4570-a058-cfa725f341fd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "67fce285-476a-4570-a058-cfa725f341fd" (UID: "67fce285-476a-4570-a058-cfa725f341fd"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:20:29.361409 systemd[1]: var-lib-kubelet-pods-67fce285\x2d476a\x2d4570\x2da058\x2dcfa725f341fd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 00:20:29.363006 kubelet[2739]: I0710 00:20:29.361581 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67fce285-476a-4570-a058-cfa725f341fd-kube-api-access-sjwx2" (OuterVolumeSpecName: "kube-api-access-sjwx2") pod "67fce285-476a-4570-a058-cfa725f341fd" (UID: "67fce285-476a-4570-a058-cfa725f341fd"). InnerVolumeSpecName "kube-api-access-sjwx2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:20:29.363326 systemd[1]: var-lib-kubelet-pods-67fce285\x2d476a\x2d4570\x2da058\x2dcfa725f341fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjwx2.mount: Deactivated successfully. Jul 10 00:20:29.435654 kubelet[2739]: I0710 00:20:29.435537 2739 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67fce285-476a-4570-a058-cfa725f341fd-whisker-ca-bundle\") on node \"ci-4344.1.1-n-5827fce73f\" DevicePath \"\"" Jul 10 00:20:29.435956 kubelet[2739]: I0710 00:20:29.435618 2739 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sjwx2\" (UniqueName: \"kubernetes.io/projected/67fce285-476a-4570-a058-cfa725f341fd-kube-api-access-sjwx2\") on node \"ci-4344.1.1-n-5827fce73f\" DevicePath \"\"" Jul 10 00:20:29.435956 kubelet[2739]: I0710 00:20:29.435736 2739 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/67fce285-476a-4570-a058-cfa725f341fd-whisker-backend-key-pair\") on node \"ci-4344.1.1-n-5827fce73f\" DevicePath \"\"" Jul 10 00:20:29.768060 systemd[1]: Removed slice kubepods-besteffort-pod67fce285_476a_4570_a058_cfa725f341fd.slice - libcontainer container 
kubepods-besteffort-pod67fce285_476a_4570_a058_cfa725f341fd.slice. Jul 10 00:20:30.255581 kubelet[2739]: I0710 00:20:30.245423 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pg8g7" podStartSLOduration=3.42513067 podStartE2EDuration="22.245385133s" podCreationTimestamp="2025-07-10 00:20:08 +0000 UTC" firstStartedPulling="2025-07-10 00:20:09.572217539 +0000 UTC m=+24.045705621" lastFinishedPulling="2025-07-10 00:20:28.392471999 +0000 UTC m=+42.865960084" observedRunningTime="2025-07-10 00:20:29.276887734 +0000 UTC m=+43.750375858" watchObservedRunningTime="2025-07-10 00:20:30.245385133 +0000 UTC m=+44.718873242" Jul 10 00:20:30.361712 systemd[1]: Created slice kubepods-besteffort-pod17df2909_0155_47ae_9eaf_d6215c963dea.slice - libcontainer container kubepods-besteffort-pod17df2909_0155_47ae_9eaf_d6215c963dea.slice. Jul 10 00:20:30.446863 kubelet[2739]: I0710 00:20:30.446301 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/17df2909-0155-47ae-9eaf-d6215c963dea-whisker-backend-key-pair\") pod \"whisker-79b5dcf584-m8qq9\" (UID: \"17df2909-0155-47ae-9eaf-d6215c963dea\") " pod="calico-system/whisker-79b5dcf584-m8qq9" Jul 10 00:20:30.446863 kubelet[2739]: I0710 00:20:30.446452 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gr44\" (UniqueName: \"kubernetes.io/projected/17df2909-0155-47ae-9eaf-d6215c963dea-kube-api-access-5gr44\") pod \"whisker-79b5dcf584-m8qq9\" (UID: \"17df2909-0155-47ae-9eaf-d6215c963dea\") " pod="calico-system/whisker-79b5dcf584-m8qq9" Jul 10 00:20:30.446863 kubelet[2739]: I0710 00:20:30.446514 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17df2909-0155-47ae-9eaf-d6215c963dea-whisker-ca-bundle\") pod 
\"whisker-79b5dcf584-m8qq9\" (UID: \"17df2909-0155-47ae-9eaf-d6215c963dea\") " pod="calico-system/whisker-79b5dcf584-m8qq9" Jul 10 00:20:30.544578 containerd[1524]: time="2025-07-10T00:20:30.544423809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0380277decbcc23452914867d1e6782fc2bfc6cefa8ba92e8808a3b7e6658b2e\" id:\"f5f904f62a1a8d6052f8a323b13dbef1484158fe2b83abf47c71caa878158a65\" pid:3859 exit_status:1 exited_at:{seconds:1752106830 nanos:529630194}" Jul 10 00:20:30.672090 containerd[1524]: time="2025-07-10T00:20:30.672005083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b5dcf584-m8qq9,Uid:17df2909-0155-47ae-9eaf-d6215c963dea,Namespace:calico-system,Attempt:0,}" Jul 10 00:20:31.224043 systemd-networkd[1455]: cali3047fef01e6: Link UP Jul 10 00:20:31.228590 systemd-networkd[1455]: cali3047fef01e6: Gained carrier Jul 10 00:20:31.300817 containerd[1524]: 2025-07-10 00:20:30.794 [INFO][3922] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:20:31.300817 containerd[1524]: 2025-07-10 00:20:30.847 [INFO][3922] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0 whisker-79b5dcf584- calico-system 17df2909-0155-47ae-9eaf-d6215c963dea 930 0 2025-07-10 00:20:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79b5dcf584 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.1-n-5827fce73f whisker-79b5dcf584-m8qq9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3047fef01e6 [] [] }} ContainerID="3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" Namespace="calico-system" Pod="whisker-79b5dcf584-m8qq9" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-" Jul 10 00:20:31.300817 containerd[1524]: 2025-07-10 
00:20:30.848 [INFO][3922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" Namespace="calico-system" Pod="whisker-79b5dcf584-m8qq9" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0" Jul 10 00:20:31.300817 containerd[1524]: 2025-07-10 00:20:31.069 [INFO][3968] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" HandleID="k8s-pod-network.3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" Workload="ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0" Jul 10 00:20:31.301722 containerd[1524]: 2025-07-10 00:20:31.073 [INFO][3968] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" HandleID="k8s-pod-network.3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" Workload="ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a8470), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-n-5827fce73f", "pod":"whisker-79b5dcf584-m8qq9", "timestamp":"2025-07-10 00:20:31.069265403 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5827fce73f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:20:31.301722 containerd[1524]: 2025-07-10 00:20:31.073 [INFO][3968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:20:31.301722 containerd[1524]: 2025-07-10 00:20:31.073 [INFO][3968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:20:31.301722 containerd[1524]: 2025-07-10 00:20:31.074 [INFO][3968] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5827fce73f' Jul 10 00:20:31.301722 containerd[1524]: 2025-07-10 00:20:31.098 [INFO][3968] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:31.301722 containerd[1524]: 2025-07-10 00:20:31.126 [INFO][3968] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:31.301722 containerd[1524]: 2025-07-10 00:20:31.137 [INFO][3968] ipam/ipam.go 511: Trying affinity for 192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:31.301722 containerd[1524]: 2025-07-10 00:20:31.144 [INFO][3968] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:31.301722 containerd[1524]: 2025-07-10 00:20:31.149 [INFO][3968] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:31.302538 containerd[1524]: 2025-07-10 00:20:31.149 [INFO][3968] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:31.302538 containerd[1524]: 2025-07-10 00:20:31.154 [INFO][3968] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd Jul 10 00:20:31.302538 containerd[1524]: 2025-07-10 00:20:31.161 [INFO][3968] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:31.302538 containerd[1524]: 2025-07-10 00:20:31.174 [INFO][3968] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.8.129/26] block=192.168.8.128/26 handle="k8s-pod-network.3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:31.302538 containerd[1524]: 2025-07-10 00:20:31.174 [INFO][3968] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.129/26] handle="k8s-pod-network.3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:31.302538 containerd[1524]: 2025-07-10 00:20:31.174 [INFO][3968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:20:31.302538 containerd[1524]: 2025-07-10 00:20:31.174 [INFO][3968] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.129/26] IPv6=[] ContainerID="3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" HandleID="k8s-pod-network.3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" Workload="ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0" Jul 10 00:20:31.302835 containerd[1524]: 2025-07-10 00:20:31.180 [INFO][3922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" Namespace="calico-system" Pod="whisker-79b5dcf584-m8qq9" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0", GenerateName:"whisker-79b5dcf584-", Namespace:"calico-system", SelfLink:"", UID:"17df2909-0155-47ae-9eaf-d6215c963dea", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79b5dcf584", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"", Pod:"whisker-79b5dcf584-m8qq9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.8.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3047fef01e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:31.302835 containerd[1524]: 2025-07-10 00:20:31.181 [INFO][3922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.129/32] ContainerID="3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" Namespace="calico-system" Pod="whisker-79b5dcf584-m8qq9" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0" Jul 10 00:20:31.302986 containerd[1524]: 2025-07-10 00:20:31.181 [INFO][3922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3047fef01e6 ContainerID="3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" Namespace="calico-system" Pod="whisker-79b5dcf584-m8qq9" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0" Jul 10 00:20:31.302986 containerd[1524]: 2025-07-10 00:20:31.240 [INFO][3922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" Namespace="calico-system" Pod="whisker-79b5dcf584-m8qq9" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0" Jul 10 00:20:31.303071 containerd[1524]: 2025-07-10 00:20:31.248 [INFO][3922] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" Namespace="calico-system" Pod="whisker-79b5dcf584-m8qq9" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0", GenerateName:"whisker-79b5dcf584-", Namespace:"calico-system", SelfLink:"", UID:"17df2909-0155-47ae-9eaf-d6215c963dea", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79b5dcf584", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd", Pod:"whisker-79b5dcf584-m8qq9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.8.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3047fef01e6", MAC:"da:ab:96:dd:ad:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:31.303137 containerd[1524]: 2025-07-10 00:20:31.286 [INFO][3922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" 
Namespace="calico-system" Pod="whisker-79b5dcf584-m8qq9" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-whisker--79b5dcf584--m8qq9-eth0" Jul 10 00:20:31.444497 containerd[1524]: time="2025-07-10T00:20:31.444414565Z" level=info msg="connecting to shim 3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd" address="unix:///run/containerd/s/61f9f198ebd62c99fff7bd8bea048d482066bcf1200a9cbf59b0d2725b9a44b7" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:20:31.528348 systemd[1]: Started cri-containerd-3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd.scope - libcontainer container 3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd. Jul 10 00:20:31.568826 containerd[1524]: time="2025-07-10T00:20:31.568562230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0380277decbcc23452914867d1e6782fc2bfc6cefa8ba92e8808a3b7e6658b2e\" id:\"33c7c47feae11605c8118683e05291bf78ffe7765cedba99adbc12c3bec85b16\" pid:3996 exit_status:1 exited_at:{seconds:1752106831 nanos:568037326}" Jul 10 00:20:31.650628 containerd[1524]: time="2025-07-10T00:20:31.650552869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b5dcf584-m8qq9,Uid:17df2909-0155-47ae-9eaf-d6215c963dea,Namespace:calico-system,Attempt:0,} returns sandbox id \"3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd\"" Jul 10 00:20:31.653730 containerd[1524]: time="2025-07-10T00:20:31.653676244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 00:20:31.762489 kubelet[2739]: I0710 00:20:31.762406 2739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67fce285-476a-4570-a058-cfa725f341fd" path="/var/lib/kubelet/pods/67fce285-476a-4570-a058-cfa725f341fd/volumes" Jul 10 00:20:32.444181 systemd-networkd[1455]: cali3047fef01e6: Gained IPv6LL Jul 10 00:20:32.759048 containerd[1524]: time="2025-07-10T00:20:32.758822160Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-rd4jt,Uid:74477b64-adde-429b-818b-b2d22cff585f,Namespace:calico-system,Attempt:0,}" Jul 10 00:20:32.855983 kubelet[2739]: I0710 00:20:32.855783 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:20:32.858555 kubelet[2739]: E0710 00:20:32.858508 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:33.111153 systemd-networkd[1455]: cali7bb90f54a75: Link UP Jul 10 00:20:33.112095 systemd-networkd[1455]: cali7bb90f54a75: Gained carrier Jul 10 00:20:33.130701 containerd[1524]: time="2025-07-10T00:20:33.130505944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 10 00:20:33.135623 containerd[1524]: time="2025-07-10T00:20:33.133710303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:33.146882 containerd[1524]: time="2025-07-10T00:20:33.146814988Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:33.147596 containerd[1524]: 2025-07-10 00:20:32.805 [INFO][4088] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:20:33.147596 containerd[1524]: 2025-07-10 00:20:32.833 [INFO][4088] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0 goldmane-768f4c5c69- calico-system 74477b64-adde-429b-818b-b2d22cff585f 856 0 2025-07-10 00:20:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.1-n-5827fce73f goldmane-768f4c5c69-rd4jt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7bb90f54a75 [] [] }} ContainerID="833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" Namespace="calico-system" Pod="goldmane-768f4c5c69-rd4jt" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-" Jul 10 00:20:33.147596 containerd[1524]: 2025-07-10 00:20:32.833 [INFO][4088] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" Namespace="calico-system" Pod="goldmane-768f4c5c69-rd4jt" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0" Jul 10 00:20:33.147596 containerd[1524]: 2025-07-10 00:20:32.969 [INFO][4100] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" HandleID="k8s-pod-network.833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" Workload="ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0" Jul 10 00:20:33.147850 containerd[1524]: 2025-07-10 00:20:32.970 [INFO][4100] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" HandleID="k8s-pod-network.833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" Workload="ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036a780), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-n-5827fce73f", "pod":"goldmane-768f4c5c69-rd4jt", "timestamp":"2025-07-10 00:20:32.969195201 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5827fce73f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:20:33.147850 containerd[1524]: 2025-07-10 00:20:32.970 [INFO][4100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:20:33.147850 containerd[1524]: 2025-07-10 00:20:32.970 [INFO][4100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:20:33.147850 containerd[1524]: 2025-07-10 00:20:32.970 [INFO][4100] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5827fce73f' Jul 10 00:20:33.147850 containerd[1524]: 2025-07-10 00:20:32.996 [INFO][4100] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:33.147850 containerd[1524]: 2025-07-10 00:20:33.022 [INFO][4100] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:33.147850 containerd[1524]: 2025-07-10 00:20:33.049 [INFO][4100] ipam/ipam.go 511: Trying affinity for 192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:33.147850 containerd[1524]: 2025-07-10 00:20:33.056 [INFO][4100] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:33.147850 containerd[1524]: 2025-07-10 00:20:33.063 [INFO][4100] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:33.148405 containerd[1524]: 2025-07-10 00:20:33.064 [INFO][4100] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:33.148405 containerd[1524]: 2025-07-10 00:20:33.071 [INFO][4100] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4 Jul 10 00:20:33.148405 
containerd[1524]: 2025-07-10 00:20:33.083 [INFO][4100] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:33.148405 containerd[1524]: 2025-07-10 00:20:33.098 [INFO][4100] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.130/26] block=192.168.8.128/26 handle="k8s-pod-network.833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:33.148405 containerd[1524]: 2025-07-10 00:20:33.098 [INFO][4100] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.130/26] handle="k8s-pod-network.833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:33.148405 containerd[1524]: 2025-07-10 00:20:33.098 [INFO][4100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:20:33.148405 containerd[1524]: 2025-07-10 00:20:33.098 [INFO][4100] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.130/26] IPv6=[] ContainerID="833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" HandleID="k8s-pod-network.833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" Workload="ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0" Jul 10 00:20:33.148603 containerd[1524]: 2025-07-10 00:20:33.106 [INFO][4088] cni-plugin/k8s.go 418: Populated endpoint ContainerID="833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" Namespace="calico-system" Pod="goldmane-768f4c5c69-rd4jt" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", 
UID:"74477b64-adde-429b-818b-b2d22cff585f", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"", Pod:"goldmane-768f4c5c69-rd4jt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.8.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7bb90f54a75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:33.148603 containerd[1524]: 2025-07-10 00:20:33.106 [INFO][4088] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.130/32] ContainerID="833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" Namespace="calico-system" Pod="goldmane-768f4c5c69-rd4jt" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0" Jul 10 00:20:33.148732 containerd[1524]: 2025-07-10 00:20:33.106 [INFO][4088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bb90f54a75 ContainerID="833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" Namespace="calico-system" Pod="goldmane-768f4c5c69-rd4jt" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0" Jul 10 00:20:33.148732 containerd[1524]: 2025-07-10 00:20:33.114 [INFO][4088] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" Namespace="calico-system" Pod="goldmane-768f4c5c69-rd4jt" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0" Jul 10 00:20:33.148779 containerd[1524]: 2025-07-10 00:20:33.117 [INFO][4088] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" Namespace="calico-system" Pod="goldmane-768f4c5c69-rd4jt" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"74477b64-adde-429b-818b-b2d22cff585f", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4", Pod:"goldmane-768f4c5c69-rd4jt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.8.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7bb90f54a75", 
MAC:"aa:28:ae:19:c3:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:33.148844 containerd[1524]: 2025-07-10 00:20:33.137 [INFO][4088] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" Namespace="calico-system" Pod="goldmane-768f4c5c69-rd4jt" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-goldmane--768f4c5c69--rd4jt-eth0" Jul 10 00:20:33.152451 containerd[1524]: time="2025-07-10T00:20:33.152126362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:33.154969 containerd[1524]: time="2025-07-10T00:20:33.153652938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.499921385s" Jul 10 00:20:33.155230 containerd[1524]: time="2025-07-10T00:20:33.155078808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 10 00:20:33.164403 containerd[1524]: time="2025-07-10T00:20:33.164353404Z" level=info msg="CreateContainer within sandbox \"3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 00:20:33.183988 containerd[1524]: time="2025-07-10T00:20:33.183210665Z" level=info msg="Container 98c990e61cc3bb32f857b18609a919b110ddcae4b5b68ad2b5a2a142fddc91cd: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:20:33.211231 containerd[1524]: 
time="2025-07-10T00:20:33.211098708Z" level=info msg="CreateContainer within sandbox \"3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"98c990e61cc3bb32f857b18609a919b110ddcae4b5b68ad2b5a2a142fddc91cd\"" Jul 10 00:20:33.212550 containerd[1524]: time="2025-07-10T00:20:33.212464805Z" level=info msg="StartContainer for \"98c990e61cc3bb32f857b18609a919b110ddcae4b5b68ad2b5a2a142fddc91cd\"" Jul 10 00:20:33.234957 containerd[1524]: time="2025-07-10T00:20:33.234464554Z" level=info msg="connecting to shim 833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4" address="unix:///run/containerd/s/05d2bba4ff817ab519d55afb0df24443b6886ca62d5e8afd6dcf0dfb1c67786c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:20:33.235172 kubelet[2739]: E0710 00:20:33.234891 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:33.252178 containerd[1524]: time="2025-07-10T00:20:33.252115553Z" level=info msg="connecting to shim 98c990e61cc3bb32f857b18609a919b110ddcae4b5b68ad2b5a2a142fddc91cd" address="unix:///run/containerd/s/61f9f198ebd62c99fff7bd8bea048d482066bcf1200a9cbf59b0d2725b9a44b7" protocol=ttrpc version=3 Jul 10 00:20:33.323196 systemd[1]: Started cri-containerd-833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4.scope - libcontainer container 833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4. Jul 10 00:20:33.350333 systemd[1]: Started cri-containerd-98c990e61cc3bb32f857b18609a919b110ddcae4b5b68ad2b5a2a142fddc91cd.scope - libcontainer container 98c990e61cc3bb32f857b18609a919b110ddcae4b5b68ad2b5a2a142fddc91cd. 
Jul 10 00:20:33.501373 containerd[1524]: time="2025-07-10T00:20:33.501300858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-rd4jt,Uid:74477b64-adde-429b-818b-b2d22cff585f,Namespace:calico-system,Attempt:0,} returns sandbox id \"833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4\"" Jul 10 00:20:33.508492 containerd[1524]: time="2025-07-10T00:20:33.508403584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 00:20:33.553323 containerd[1524]: time="2025-07-10T00:20:33.552270158Z" level=info msg="StartContainer for \"98c990e61cc3bb32f857b18609a919b110ddcae4b5b68ad2b5a2a142fddc91cd\" returns successfully" Jul 10 00:20:33.765886 containerd[1524]: time="2025-07-10T00:20:33.764954766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c749cb9c-w2k2c,Uid:7b32e0ba-807d-4e45-b247-49987273641d,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:20:33.765886 containerd[1524]: time="2025-07-10T00:20:33.765100022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6cfbdf59-42j5g,Uid:83ac65be-5d2a-492a-a383-0309423e2826,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:20:34.141746 systemd-networkd[1455]: calib40a379cd12: Link UP Jul 10 00:20:34.145520 systemd-networkd[1455]: calib40a379cd12: Gained carrier Jul 10 00:20:34.198145 containerd[1524]: 2025-07-10 00:20:33.883 [INFO][4227] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:20:34.198145 containerd[1524]: 2025-07-10 00:20:33.908 [INFO][4227] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0 calico-apiserver-c6cfbdf59- calico-apiserver 83ac65be-5d2a-492a-a383-0309423e2826 858 0 2025-07-10 00:20:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c6cfbdf59 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-n-5827fce73f calico-apiserver-c6cfbdf59-42j5g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib40a379cd12 [] [] }} ContainerID="9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" Namespace="calico-apiserver" Pod="calico-apiserver-c6cfbdf59-42j5g" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-" Jul 10 00:20:34.198145 containerd[1524]: 2025-07-10 00:20:33.908 [INFO][4227] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" Namespace="calico-apiserver" Pod="calico-apiserver-c6cfbdf59-42j5g" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0" Jul 10 00:20:34.198145 containerd[1524]: 2025-07-10 00:20:33.985 [INFO][4260] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" HandleID="k8s-pod-network.9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0" Jul 10 00:20:34.199631 containerd[1524]: 2025-07-10 00:20:33.986 [INFO][4260] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" HandleID="k8s-pod-network.9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5880), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-n-5827fce73f", "pod":"calico-apiserver-c6cfbdf59-42j5g", "timestamp":"2025-07-10 00:20:33.985134452 +0000 UTC"}, 
Hostname:"ci-4344.1.1-n-5827fce73f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:20:34.199631 containerd[1524]: 2025-07-10 00:20:33.986 [INFO][4260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:20:34.199631 containerd[1524]: 2025-07-10 00:20:33.986 [INFO][4260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:20:34.199631 containerd[1524]: 2025-07-10 00:20:33.986 [INFO][4260] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5827fce73f' Jul 10 00:20:34.199631 containerd[1524]: 2025-07-10 00:20:34.005 [INFO][4260] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.199631 containerd[1524]: 2025-07-10 00:20:34.018 [INFO][4260] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.199631 containerd[1524]: 2025-07-10 00:20:34.031 [INFO][4260] ipam/ipam.go 511: Trying affinity for 192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.199631 containerd[1524]: 2025-07-10 00:20:34.039 [INFO][4260] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.199631 containerd[1524]: 2025-07-10 00:20:34.072 [INFO][4260] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.199986 containerd[1524]: 2025-07-10 00:20:34.072 [INFO][4260] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.199986 containerd[1524]: 2025-07-10 00:20:34.075 
[INFO][4260] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae Jul 10 00:20:34.199986 containerd[1524]: 2025-07-10 00:20:34.096 [INFO][4260] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.199986 containerd[1524]: 2025-07-10 00:20:34.113 [INFO][4260] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.131/26] block=192.168.8.128/26 handle="k8s-pod-network.9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.199986 containerd[1524]: 2025-07-10 00:20:34.114 [INFO][4260] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.131/26] handle="k8s-pod-network.9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.199986 containerd[1524]: 2025-07-10 00:20:34.116 [INFO][4260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:20:34.199986 containerd[1524]: 2025-07-10 00:20:34.116 [INFO][4260] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.131/26] IPv6=[] ContainerID="9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" HandleID="k8s-pod-network.9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0" Jul 10 00:20:34.200226 containerd[1524]: 2025-07-10 00:20:34.128 [INFO][4227] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" Namespace="calico-apiserver" Pod="calico-apiserver-c6cfbdf59-42j5g" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0", GenerateName:"calico-apiserver-c6cfbdf59-", Namespace:"calico-apiserver", SelfLink:"", UID:"83ac65be-5d2a-492a-a383-0309423e2826", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6cfbdf59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"", Pod:"calico-apiserver-c6cfbdf59-42j5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.8.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib40a379cd12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:34.200339 containerd[1524]: 2025-07-10 00:20:34.128 [INFO][4227] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.131/32] ContainerID="9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" Namespace="calico-apiserver" Pod="calico-apiserver-c6cfbdf59-42j5g" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0" Jul 10 00:20:34.200339 containerd[1524]: 2025-07-10 00:20:34.128 [INFO][4227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib40a379cd12 ContainerID="9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" Namespace="calico-apiserver" Pod="calico-apiserver-c6cfbdf59-42j5g" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0" Jul 10 00:20:34.200339 containerd[1524]: 2025-07-10 00:20:34.145 [INFO][4227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" Namespace="calico-apiserver" Pod="calico-apiserver-c6cfbdf59-42j5g" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0" Jul 10 00:20:34.200458 containerd[1524]: 2025-07-10 00:20:34.147 [INFO][4227] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" Namespace="calico-apiserver" Pod="calico-apiserver-c6cfbdf59-42j5g" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0", GenerateName:"calico-apiserver-c6cfbdf59-", Namespace:"calico-apiserver", SelfLink:"", UID:"83ac65be-5d2a-492a-a383-0309423e2826", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6cfbdf59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae", Pod:"calico-apiserver-c6cfbdf59-42j5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib40a379cd12", MAC:"3e:33:fc:75:9f:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:34.200576 containerd[1524]: 2025-07-10 00:20:34.184 [INFO][4227] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" Namespace="calico-apiserver" Pod="calico-apiserver-c6cfbdf59-42j5g" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--c6cfbdf59--42j5g-eth0" Jul 10 00:20:34.276179 containerd[1524]: time="2025-07-10T00:20:34.276075389Z" level=info 
msg="connecting to shim 9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae" address="unix:///run/containerd/s/e724bd42c920d332e8de26c3f0dfc1f64d73bb939d19dd20102cf098eaf86c82" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:20:34.291055 systemd-networkd[1455]: cali8b9df0de19c: Link UP Jul 10 00:20:34.293157 systemd-networkd[1455]: cali8b9df0de19c: Gained carrier Jul 10 00:20:34.338452 containerd[1524]: 2025-07-10 00:20:33.869 [INFO][4230] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:20:34.338452 containerd[1524]: 2025-07-10 00:20:33.906 [INFO][4230] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0 calico-apiserver-54c749cb9c- calico-apiserver 7b32e0ba-807d-4e45-b247-49987273641d 863 0 2025-07-10 00:20:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54c749cb9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-n-5827fce73f calico-apiserver-54c749cb9c-w2k2c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8b9df0de19c [] [] }} ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-w2k2c" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-" Jul 10 00:20:34.338452 containerd[1524]: 2025-07-10 00:20:33.906 [INFO][4230] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-w2k2c" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0" Jul 10 00:20:34.338452 containerd[1524]: 
2025-07-10 00:20:34.012 [INFO][4270] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" HandleID="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0" Jul 10 00:20:34.338785 containerd[1524]: 2025-07-10 00:20:34.013 [INFO][4270] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" HandleID="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000327870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-n-5827fce73f", "pod":"calico-apiserver-54c749cb9c-w2k2c", "timestamp":"2025-07-10 00:20:34.012833205 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5827fce73f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:20:34.338785 containerd[1524]: 2025-07-10 00:20:34.013 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:20:34.338785 containerd[1524]: 2025-07-10 00:20:34.116 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:20:34.338785 containerd[1524]: 2025-07-10 00:20:34.116 [INFO][4270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5827fce73f' Jul 10 00:20:34.338785 containerd[1524]: 2025-07-10 00:20:34.144 [INFO][4270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.338785 containerd[1524]: 2025-07-10 00:20:34.172 [INFO][4270] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.338785 containerd[1524]: 2025-07-10 00:20:34.194 [INFO][4270] ipam/ipam.go 511: Trying affinity for 192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.338785 containerd[1524]: 2025-07-10 00:20:34.216 [INFO][4270] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.338785 containerd[1524]: 2025-07-10 00:20:34.220 [INFO][4270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.339054 containerd[1524]: 2025-07-10 00:20:34.220 [INFO][4270] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.339054 containerd[1524]: 2025-07-10 00:20:34.224 [INFO][4270] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd Jul 10 00:20:34.339054 containerd[1524]: 2025-07-10 00:20:34.230 [INFO][4270] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.339054 containerd[1524]: 2025-07-10 00:20:34.267 [INFO][4270] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.8.132/26] block=192.168.8.128/26 handle="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.339054 containerd[1524]: 2025-07-10 00:20:34.268 [INFO][4270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.132/26] handle="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:34.339054 containerd[1524]: 2025-07-10 00:20:34.268 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:20:34.339054 containerd[1524]: 2025-07-10 00:20:34.269 [INFO][4270] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.132/26] IPv6=[] ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" HandleID="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0" Jul 10 00:20:34.339224 containerd[1524]: 2025-07-10 00:20:34.280 [INFO][4230] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-w2k2c" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0", GenerateName:"calico-apiserver-54c749cb9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b32e0ba-807d-4e45-b247-49987273641d", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"54c749cb9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"", Pod:"calico-apiserver-54c749cb9c-w2k2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b9df0de19c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:34.339284 containerd[1524]: 2025-07-10 00:20:34.280 [INFO][4230] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.132/32] ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-w2k2c" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0" Jul 10 00:20:34.339284 containerd[1524]: 2025-07-10 00:20:34.281 [INFO][4230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b9df0de19c ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-w2k2c" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0" Jul 10 00:20:34.339284 containerd[1524]: 2025-07-10 00:20:34.299 [INFO][4230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-w2k2c" 
WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0" Jul 10 00:20:34.339361 containerd[1524]: 2025-07-10 00:20:34.305 [INFO][4230] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-w2k2c" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0", GenerateName:"calico-apiserver-54c749cb9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b32e0ba-807d-4e45-b247-49987273641d", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54c749cb9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd", Pod:"calico-apiserver-54c749cb9c-w2k2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b9df0de19c", MAC:"76:66:94:c9:aa:6a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:34.339419 containerd[1524]: 2025-07-10 00:20:34.331 [INFO][4230] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-w2k2c" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0" Jul 10 00:20:34.380266 systemd[1]: Started cri-containerd-9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae.scope - libcontainer container 9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae. Jul 10 00:20:34.384037 containerd[1524]: time="2025-07-10T00:20:34.383970447Z" level=info msg="connecting to shim f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" address="unix:///run/containerd/s/68e3d21efbe5f1278745e4461a517dd5cf75083ba49af85bd1b71866fd8747dd" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:20:34.427284 systemd[1]: Started cri-containerd-f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd.scope - libcontainer container f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd. 
Jul 10 00:20:34.516801 containerd[1524]: time="2025-07-10T00:20:34.516724538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6cfbdf59-42j5g,Uid:83ac65be-5d2a-492a-a383-0309423e2826,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae\"" Jul 10 00:20:34.595873 containerd[1524]: time="2025-07-10T00:20:34.595807240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c749cb9c-w2k2c,Uid:7b32e0ba-807d-4e45-b247-49987273641d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\"" Jul 10 00:20:34.760951 kubelet[2739]: E0710 00:20:34.759672 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:34.774966 containerd[1524]: time="2025-07-10T00:20:34.774381505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k8gwz,Uid:6a837f3f-c164-4e44-99ea-ceab22be88a6,Namespace:kube-system,Attempt:0,}" Jul 10 00:20:35.004463 systemd-networkd[1455]: cali7bb90f54a75: Gained IPv6LL Jul 10 00:20:35.228232 systemd-networkd[1455]: calib020555cca3: Link UP Jul 10 00:20:35.233899 systemd-networkd[1455]: calib020555cca3: Gained carrier Jul 10 00:20:35.295115 containerd[1524]: 2025-07-10 00:20:34.906 [INFO][4399] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0 coredns-674b8bbfcf- kube-system 6a837f3f-c164-4e44-99ea-ceab22be88a6 849 0 2025-07-10 00:19:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-n-5827fce73f coredns-674b8bbfcf-k8gwz eth0 
coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib020555cca3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8gwz" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-" Jul 10 00:20:35.295115 containerd[1524]: 2025-07-10 00:20:34.907 [INFO][4399] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8gwz" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0" Jul 10 00:20:35.295115 containerd[1524]: 2025-07-10 00:20:35.041 [INFO][4413] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" HandleID="k8s-pod-network.0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" Workload="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0" Jul 10 00:20:35.295600 containerd[1524]: 2025-07-10 00:20:35.043 [INFO][4413] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" HandleID="k8s-pod-network.0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" Workload="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036a640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-n-5827fce73f", "pod":"coredns-674b8bbfcf-k8gwz", "timestamp":"2025-07-10 00:20:35.04142523 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5827fce73f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:20:35.295600 containerd[1524]: 
2025-07-10 00:20:35.043 [INFO][4413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:20:35.295600 containerd[1524]: 2025-07-10 00:20:35.044 [INFO][4413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:20:35.295600 containerd[1524]: 2025-07-10 00:20:35.044 [INFO][4413] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5827fce73f' Jul 10 00:20:35.295600 containerd[1524]: 2025-07-10 00:20:35.072 [INFO][4413] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:35.295600 containerd[1524]: 2025-07-10 00:20:35.087 [INFO][4413] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:35.295600 containerd[1524]: 2025-07-10 00:20:35.104 [INFO][4413] ipam/ipam.go 511: Trying affinity for 192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:35.295600 containerd[1524]: 2025-07-10 00:20:35.114 [INFO][4413] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:35.295600 containerd[1524]: 2025-07-10 00:20:35.143 [INFO][4413] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:35.297604 containerd[1524]: 2025-07-10 00:20:35.143 [INFO][4413] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:35.297604 containerd[1524]: 2025-07-10 00:20:35.157 [INFO][4413] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d Jul 10 00:20:35.297604 containerd[1524]: 2025-07-10 00:20:35.181 [INFO][4413] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.8.128/26 handle="k8s-pod-network.0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:35.297604 containerd[1524]: 2025-07-10 00:20:35.205 [INFO][4413] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.133/26] block=192.168.8.128/26 handle="k8s-pod-network.0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:35.297604 containerd[1524]: 2025-07-10 00:20:35.205 [INFO][4413] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.133/26] handle="k8s-pod-network.0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:35.297604 containerd[1524]: 2025-07-10 00:20:35.206 [INFO][4413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:20:35.297604 containerd[1524]: 2025-07-10 00:20:35.206 [INFO][4413] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.133/26] IPv6=[] ContainerID="0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" HandleID="k8s-pod-network.0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" Workload="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0" Jul 10 00:20:35.297911 containerd[1524]: 2025-07-10 00:20:35.216 [INFO][4399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8gwz" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6a837f3f-c164-4e44-99ea-ceab22be88a6", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 
0, 19, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"", Pod:"coredns-674b8bbfcf-k8gwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib020555cca3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:35.297911 containerd[1524]: 2025-07-10 00:20:35.216 [INFO][4399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.133/32] ContainerID="0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8gwz" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0" Jul 10 00:20:35.297911 containerd[1524]: 2025-07-10 00:20:35.216 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib020555cca3 ContainerID="0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-k8gwz" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0" Jul 10 00:20:35.297911 containerd[1524]: 2025-07-10 00:20:35.242 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8gwz" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0" Jul 10 00:20:35.297911 containerd[1524]: 2025-07-10 00:20:35.246 [INFO][4399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8gwz" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6a837f3f-c164-4e44-99ea-ceab22be88a6", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 19, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d", Pod:"coredns-674b8bbfcf-k8gwz", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.8.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib020555cca3", MAC:"66:f0:17:9f:ef:b7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:35.297911 containerd[1524]: 2025-07-10 00:20:35.280 [INFO][4399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" Namespace="kube-system" Pod="coredns-674b8bbfcf-k8gwz" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--k8gwz-eth0" Jul 10 00:20:35.401152 systemd-networkd[1455]: vxlan.calico: Link UP Jul 10 00:20:35.401165 systemd-networkd[1455]: vxlan.calico: Gained carrier Jul 10 00:20:35.442095 containerd[1524]: time="2025-07-10T00:20:35.441486204Z" level=info msg="connecting to shim 0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d" address="unix:///run/containerd/s/1a4a724e05c286b81105b36128be6792916f09c4fefca0546fad5b6a4c378c1d" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:20:35.553651 systemd[1]: Started cri-containerd-0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d.scope - libcontainer container 0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d. 
Jul 10 00:20:35.692267 containerd[1524]: time="2025-07-10T00:20:35.690913570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k8gwz,Uid:6a837f3f-c164-4e44-99ea-ceab22be88a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d\"" Jul 10 00:20:35.694584 kubelet[2739]: E0710 00:20:35.694543 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:35.707431 containerd[1524]: time="2025-07-10T00:20:35.707368873Z" level=info msg="CreateContainer within sandbox \"0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:20:35.779025 containerd[1524]: time="2025-07-10T00:20:35.778244361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h245p,Uid:bfd653e1-5546-4bf6-9c11-78c2c2efc214,Namespace:calico-system,Attempt:0,}" Jul 10 00:20:35.780132 kubelet[2739]: E0710 00:20:35.779387 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:35.805523 containerd[1524]: time="2025-07-10T00:20:35.805077166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c749cb9c-7bmzr,Uid:fdfb1967-1ea6-4004-9de6-57714a08b7b9,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:20:35.829028 containerd[1524]: time="2025-07-10T00:20:35.828976009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6dsvs,Uid:64618632-ac6c-43c3-8ba4-5661b7f8a1d6,Namespace:kube-system,Attempt:0,}" Jul 10 00:20:35.994998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301595752.mount: Deactivated successfully. 
Jul 10 00:20:36.034725 systemd-networkd[1455]: calib40a379cd12: Gained IPv6LL Jul 10 00:20:36.035373 systemd-networkd[1455]: cali8b9df0de19c: Gained IPv6LL Jul 10 00:20:36.178060 containerd[1524]: time="2025-07-10T00:20:36.177990949Z" level=info msg="Container 72c65166260de5e66c8d20ca60dc8797e7a1b2b246d2bcc0c67b4af2ea3cb895: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:20:36.321047 containerd[1524]: time="2025-07-10T00:20:36.313916063Z" level=info msg="CreateContainer within sandbox \"0cc305a3c8145eb999ca03f8753c8eb054763ee5a0ac88346e192a7ad157da7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72c65166260de5e66c8d20ca60dc8797e7a1b2b246d2bcc0c67b4af2ea3cb895\"" Jul 10 00:20:36.363453 containerd[1524]: time="2025-07-10T00:20:36.338522478Z" level=info msg="StartContainer for \"72c65166260de5e66c8d20ca60dc8797e7a1b2b246d2bcc0c67b4af2ea3cb895\"" Jul 10 00:20:36.413252 systemd-networkd[1455]: vxlan.calico: Gained IPv6LL Jul 10 00:20:36.472535 containerd[1524]: time="2025-07-10T00:20:36.470386445Z" level=info msg="connecting to shim 72c65166260de5e66c8d20ca60dc8797e7a1b2b246d2bcc0c67b4af2ea3cb895" address="unix:///run/containerd/s/1a4a724e05c286b81105b36128be6792916f09c4fefca0546fad5b6a4c378c1d" protocol=ttrpc version=3 Jul 10 00:20:36.476298 systemd-networkd[1455]: calib020555cca3: Gained IPv6LL Jul 10 00:20:36.626149 systemd-networkd[1455]: cali1e8cea7a198: Link UP Jul 10 00:20:36.646255 systemd-networkd[1455]: cali1e8cea7a198: Gained carrier Jul 10 00:20:36.664010 systemd[1]: Started cri-containerd-72c65166260de5e66c8d20ca60dc8797e7a1b2b246d2bcc0c67b4af2ea3cb895.scope - libcontainer container 72c65166260de5e66c8d20ca60dc8797e7a1b2b246d2bcc0c67b4af2ea3cb895. 
Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.139 [INFO][4510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0 calico-apiserver-54c749cb9c- calico-apiserver fdfb1967-1ea6-4004-9de6-57714a08b7b9 861 0 2025-07-10 00:20:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54c749cb9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-n-5827fce73f calico-apiserver-54c749cb9c-7bmzr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e8cea7a198 [] [] }} ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-7bmzr" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-" Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.143 [INFO][4510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-7bmzr" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.347 [INFO][4551] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" HandleID="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.350 [INFO][4551] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" HandleID="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000358c20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-n-5827fce73f", "pod":"calico-apiserver-54c749cb9c-7bmzr", "timestamp":"2025-07-10 00:20:36.347746085 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5827fce73f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.350 [INFO][4551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.351 [INFO][4551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.351 [INFO][4551] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5827fce73f' Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.398 [INFO][4551] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.446 [INFO][4551] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.467 [INFO][4551] ipam/ipam.go 511: Trying affinity for 192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.481 [INFO][4551] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.501 [INFO][4551] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.502 [INFO][4551] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.532 [INFO][4551] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.543 [INFO][4551] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.562 [INFO][4551] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.8.134/26] block=192.168.8.128/26 handle="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.562 [INFO][4551] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.134/26] handle="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.562 [INFO][4551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:20:36.738777 containerd[1524]: 2025-07-10 00:20:36.562 [INFO][4551] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.134/26] IPv6=[] ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" HandleID="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" Jul 10 00:20:36.742243 containerd[1524]: 2025-07-10 00:20:36.571 [INFO][4510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-7bmzr" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0", GenerateName:"calico-apiserver-54c749cb9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdfb1967-1ea6-4004-9de6-57714a08b7b9", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"54c749cb9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"", Pod:"calico-apiserver-54c749cb9c-7bmzr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e8cea7a198", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:36.742243 containerd[1524]: 2025-07-10 00:20:36.582 [INFO][4510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.134/32] ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-7bmzr" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" Jul 10 00:20:36.742243 containerd[1524]: 2025-07-10 00:20:36.583 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e8cea7a198 ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-7bmzr" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" Jul 10 00:20:36.742243 containerd[1524]: 2025-07-10 00:20:36.651 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-7bmzr" 
WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" Jul 10 00:20:36.742243 containerd[1524]: 2025-07-10 00:20:36.674 [INFO][4510] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-7bmzr" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0", GenerateName:"calico-apiserver-54c749cb9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdfb1967-1ea6-4004-9de6-57714a08b7b9", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54c749cb9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa", Pod:"calico-apiserver-54c749cb9c-7bmzr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e8cea7a198", MAC:"be:32:52:aa:06:71", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:36.742243 containerd[1524]: 2025-07-10 00:20:36.705 [INFO][4510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Namespace="calico-apiserver" Pod="calico-apiserver-54c749cb9c-7bmzr" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" Jul 10 00:20:36.762557 containerd[1524]: time="2025-07-10T00:20:36.762296443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9db756b89-q5p8k,Uid:d81c931c-4dd6-41a0-b1bb-333fcc77f26a,Namespace:calico-system,Attempt:0,}" Jul 10 00:20:36.795410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037801188.mount: Deactivated successfully. Jul 10 00:20:36.950492 containerd[1524]: time="2025-07-10T00:20:36.950438160Z" level=info msg="connecting to shim 11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" address="unix:///run/containerd/s/2d00badbb72257a51383a1433b532c473c68f399e5ab09662c3ed7ba00fb28ca" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:20:36.955236 systemd-networkd[1455]: calib1ee787a339: Link UP Jul 10 00:20:36.964245 systemd-networkd[1455]: calib1ee787a339: Gained carrier Jul 10 00:20:37.010734 containerd[1524]: time="2025-07-10T00:20:37.010592848Z" level=info msg="StartContainer for \"72c65166260de5e66c8d20ca60dc8797e7a1b2b246d2bcc0c67b4af2ea3cb895\" returns successfully" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.192 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0 coredns-674b8bbfcf- kube-system 64618632-ac6c-43c3-8ba4-5661b7f8a1d6 854 0 2025-07-10 00:19:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-n-5827fce73f coredns-674b8bbfcf-6dsvs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib1ee787a339 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dsvs" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.194 [INFO][4522] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dsvs" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.551 [INFO][4556] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" HandleID="k8s-pod-network.45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" Workload="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.553 [INFO][4556] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" HandleID="k8s-pod-network.45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" Workload="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367940), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-n-5827fce73f", "pod":"coredns-674b8bbfcf-6dsvs", "timestamp":"2025-07-10 00:20:36.548237805 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5827fce73f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.553 [INFO][4556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.562 [INFO][4556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.563 [INFO][4556] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5827fce73f' Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.600 [INFO][4556] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.648 [INFO][4556] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.716 [INFO][4556] ipam/ipam.go 511: Trying affinity for 192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.722 [INFO][4556] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.743 [INFO][4556] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.743 [INFO][4556] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.759 [INFO][4556] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8 Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.801 [INFO][4556] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.837 [INFO][4556] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.135/26] block=192.168.8.128/26 handle="k8s-pod-network.45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.848 [INFO][4556] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.135/26] handle="k8s-pod-network.45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.848 [INFO][4556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:20:37.073592 containerd[1524]: 2025-07-10 00:20:36.848 [INFO][4556] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.135/26] IPv6=[] ContainerID="45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" HandleID="k8s-pod-network.45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" Workload="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0" Jul 10 00:20:37.076345 containerd[1524]: 2025-07-10 00:20:36.886 [INFO][4522] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dsvs" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"64618632-ac6c-43c3-8ba4-5661b7f8a1d6", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 19, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"", Pod:"coredns-674b8bbfcf-6dsvs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calib1ee787a339", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:37.076345 containerd[1524]: 2025-07-10 00:20:36.886 [INFO][4522] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.135/32] ContainerID="45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dsvs" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0" Jul 10 00:20:37.076345 containerd[1524]: 2025-07-10 00:20:36.886 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1ee787a339 ContainerID="45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dsvs" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0" Jul 10 00:20:37.076345 containerd[1524]: 2025-07-10 00:20:36.972 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dsvs" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0" Jul 10 00:20:37.076345 containerd[1524]: 2025-07-10 00:20:36.979 [INFO][4522] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dsvs" 
WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"64618632-ac6c-43c3-8ba4-5661b7f8a1d6", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 19, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8", Pod:"coredns-674b8bbfcf-6dsvs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1ee787a339", MAC:"02:d0:e4:ec:ed:14", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:37.076345 containerd[1524]: 
2025-07-10 00:20:37.051 [INFO][4522] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6dsvs" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-coredns--674b8bbfcf--6dsvs-eth0" Jul 10 00:20:37.148335 systemd[1]: Started cri-containerd-11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa.scope - libcontainer container 11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa. Jul 10 00:20:37.163382 systemd-networkd[1455]: calib46384001d4: Link UP Jul 10 00:20:37.190303 systemd-networkd[1455]: calib46384001d4: Gained carrier Jul 10 00:20:37.259628 containerd[1524]: time="2025-07-10T00:20:37.259398709Z" level=info msg="connecting to shim 45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8" address="unix:///run/containerd/s/8af3039fd88cea243b0e252a7f2f7f47c979ecaaab6aa614613b03bb1fc848db" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:36.396 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0 csi-node-driver- calico-system bfd653e1-5546-4bf6-9c11-78c2c2efc214 740 0 2025-07-10 00:20:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.1-n-5827fce73f csi-node-driver-h245p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib46384001d4 [] [] }} ContainerID="a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" Namespace="calico-system" Pod="csi-node-driver-h245p" 
WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-" Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:36.396 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" Namespace="calico-system" Pod="csi-node-driver-h245p" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0" Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:36.811 [INFO][4564] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" HandleID="k8s-pod-network.a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" Workload="ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0" Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:36.813 [INFO][4564] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" HandleID="k8s-pod-network.a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" Workload="ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003951d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-n-5827fce73f", "pod":"csi-node-driver-h245p", "timestamp":"2025-07-10 00:20:36.811098458 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5827fce73f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:36.813 [INFO][4564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:36.848 [INFO][4564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:36.849 [INFO][4564] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5827fce73f' Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:36.888 [INFO][4564] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:36.930 [INFO][4564] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:36.972 [INFO][4564] ipam/ipam.go 511: Trying affinity for 192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:36.996 [INFO][4564] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:37.023 [INFO][4564] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:37.023 [INFO][4564] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:37.041 [INFO][4564] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:37.060 [INFO][4564] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:37.094 [INFO][4564] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.8.136/26] block=192.168.8.128/26 handle="k8s-pod-network.a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:37.096 [INFO][4564] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.136/26] handle="k8s-pod-network.a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:37.096 [INFO][4564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:20:37.299813 containerd[1524]: 2025-07-10 00:20:37.097 [INFO][4564] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.136/26] IPv6=[] ContainerID="a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" HandleID="k8s-pod-network.a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" Workload="ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0" Jul 10 00:20:37.301190 containerd[1524]: 2025-07-10 00:20:37.130 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" Namespace="calico-system" Pod="csi-node-driver-h245p" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bfd653e1-5546-4bf6-9c11-78c2c2efc214", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"", Pod:"csi-node-driver-h245p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib46384001d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:37.301190 containerd[1524]: 2025-07-10 00:20:37.131 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.136/32] ContainerID="a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" Namespace="calico-system" Pod="csi-node-driver-h245p" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0" Jul 10 00:20:37.301190 containerd[1524]: 2025-07-10 00:20:37.131 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib46384001d4 ContainerID="a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" Namespace="calico-system" Pod="csi-node-driver-h245p" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0" Jul 10 00:20:37.301190 containerd[1524]: 2025-07-10 00:20:37.195 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" Namespace="calico-system" Pod="csi-node-driver-h245p" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0" Jul 10 00:20:37.301190 
containerd[1524]: 2025-07-10 00:20:37.197 [INFO][4527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" Namespace="calico-system" Pod="csi-node-driver-h245p" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bfd653e1-5546-4bf6-9c11-78c2c2efc214", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac", Pod:"csi-node-driver-h245p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib46384001d4", MAC:"8e:8e:be:ce:1a:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:37.301190 containerd[1524]: 2025-07-10 
00:20:37.269 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" Namespace="calico-system" Pod="csi-node-driver-h245p" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-csi--node--driver--h245p-eth0" Jul 10 00:20:37.390678 kubelet[2739]: E0710 00:20:37.389748 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:37.417457 systemd[1]: Started cri-containerd-45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8.scope - libcontainer container 45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8. Jul 10 00:20:37.514965 containerd[1524]: time="2025-07-10T00:20:37.514784274Z" level=info msg="connecting to shim a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac" address="unix:///run/containerd/s/ee393317fd0ce1f34d04f2559896af483a97ff1020d9569baa541ee3f2bebb6e" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:20:37.570587 systemd[1]: Started cri-containerd-a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac.scope - libcontainer container a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac. 
Jul 10 00:20:37.665912 containerd[1524]: time="2025-07-10T00:20:37.665749103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6dsvs,Uid:64618632-ac6c-43c3-8ba4-5661b7f8a1d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8\"" Jul 10 00:20:37.669735 kubelet[2739]: E0710 00:20:37.668838 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:37.684962 containerd[1524]: time="2025-07-10T00:20:37.684763550Z" level=info msg="CreateContainer within sandbox \"45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:20:37.797353 systemd-networkd[1455]: cali7b1ed5b1867: Link UP Jul 10 00:20:37.800033 systemd-networkd[1455]: cali7b1ed5b1867: Gained carrier Jul 10 00:20:37.840661 kubelet[2739]: I0710 00:20:37.838859 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k8gwz" podStartSLOduration=46.83883641 podStartE2EDuration="46.83883641s" podCreationTimestamp="2025-07-10 00:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:20:37.508069708 +0000 UTC m=+51.981557835" watchObservedRunningTime="2025-07-10 00:20:37.83883641 +0000 UTC m=+52.312324507" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.239 [INFO][4598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0 calico-kube-controllers-9db756b89- calico-system d81c931c-4dd6-41a0-b1bb-333fcc77f26a 864 0 2025-07-10 00:20:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:9db756b89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.1-n-5827fce73f calico-kube-controllers-9db756b89-q5p8k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7b1ed5b1867 [] [] }} ContainerID="4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" Namespace="calico-system" Pod="calico-kube-controllers-9db756b89-q5p8k" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.242 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" Namespace="calico-system" Pod="calico-kube-controllers-9db756b89-q5p8k" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.535 [INFO][4692] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" HandleID="k8s-pod-network.4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.535 [INFO][4692] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" HandleID="k8s-pod-network.4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030e9e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-n-5827fce73f", 
"pod":"calico-kube-controllers-9db756b89-q5p8k", "timestamp":"2025-07-10 00:20:37.534996372 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5827fce73f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.535 [INFO][4692] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.535 [INFO][4692] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.535 [INFO][4692] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5827fce73f' Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.592 [INFO][4692] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.618 [INFO][4692] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.641 [INFO][4692] ipam/ipam.go 511: Trying affinity for 192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.651 [INFO][4692] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.660 [INFO][4692] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.661 [INFO][4692] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" 
host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.671 [INFO][4692] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4 Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.706 [INFO][4692] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.753 [INFO][4692] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.8.137/26] block=192.168.8.128/26 handle="k8s-pod-network.4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.754 [INFO][4692] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.137/26] handle="k8s-pod-network.4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" host="ci-4344.1.1-n-5827fce73f" Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.754 [INFO][4692] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:20:37.881003 containerd[1524]: 2025-07-10 00:20:37.754 [INFO][4692] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.137/26] IPv6=[] ContainerID="4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" HandleID="k8s-pod-network.4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0" Jul 10 00:20:37.881919 containerd[1524]: 2025-07-10 00:20:37.776 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" Namespace="calico-system" Pod="calico-kube-controllers-9db756b89-q5p8k" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0", GenerateName:"calico-kube-controllers-9db756b89-", Namespace:"calico-system", SelfLink:"", UID:"d81c931c-4dd6-41a0-b1bb-333fcc77f26a", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9db756b89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"", Pod:"calico-kube-controllers-9db756b89-q5p8k", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b1ed5b1867", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:37.881919 containerd[1524]: 2025-07-10 00:20:37.776 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.137/32] ContainerID="4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" Namespace="calico-system" Pod="calico-kube-controllers-9db756b89-q5p8k" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0" Jul 10 00:20:37.881919 containerd[1524]: 2025-07-10 00:20:37.776 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b1ed5b1867 ContainerID="4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" Namespace="calico-system" Pod="calico-kube-controllers-9db756b89-q5p8k" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0" Jul 10 00:20:37.881919 containerd[1524]: 2025-07-10 00:20:37.800 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" Namespace="calico-system" Pod="calico-kube-controllers-9db756b89-q5p8k" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0" Jul 10 00:20:37.881919 containerd[1524]: 2025-07-10 00:20:37.804 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" Namespace="calico-system" Pod="calico-kube-controllers-9db756b89-q5p8k" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0", GenerateName:"calico-kube-controllers-9db756b89-", Namespace:"calico-system", SelfLink:"", UID:"d81c931c-4dd6-41a0-b1bb-333fcc77f26a", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 20, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9db756b89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5827fce73f", ContainerID:"4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4", Pod:"calico-kube-controllers-9db756b89-q5p8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b1ed5b1867", MAC:"86:b7:a3:fa:d8:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:20:37.881919 containerd[1524]: 2025-07-10 00:20:37.846 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" Namespace="calico-system" Pod="calico-kube-controllers-9db756b89-q5p8k" 
WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--kube--controllers--9db756b89--q5p8k-eth0" Jul 10 00:20:37.890602 containerd[1524]: time="2025-07-10T00:20:37.883171965Z" level=info msg="Container 4a4e5f75c477459873761bf151c42c2ca6d91624c3417c92aabe6c24e0bda4a6: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:20:37.884888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330474996.mount: Deactivated successfully. Jul 10 00:20:37.920414 containerd[1524]: time="2025-07-10T00:20:37.919576116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54c749cb9c-7bmzr,Uid:fdfb1967-1ea6-4004-9de6-57714a08b7b9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\"" Jul 10 00:20:37.944241 containerd[1524]: time="2025-07-10T00:20:37.943399907Z" level=info msg="CreateContainer within sandbox \"45d609901dd2b61f4e12f94a90df08f55d35dc908230fa40677a846a57e483d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a4e5f75c477459873761bf151c42c2ca6d91624c3417c92aabe6c24e0bda4a6\"" Jul 10 00:20:37.956257 containerd[1524]: time="2025-07-10T00:20:37.955406526Z" level=info msg="StartContainer for \"4a4e5f75c477459873761bf151c42c2ca6d91624c3417c92aabe6c24e0bda4a6\"" Jul 10 00:20:37.963893 containerd[1524]: time="2025-07-10T00:20:37.963823101Z" level=info msg="connecting to shim 4a4e5f75c477459873761bf151c42c2ca6d91624c3417c92aabe6c24e0bda4a6" address="unix:///run/containerd/s/8af3039fd88cea243b0e252a7f2f7f47c979ecaaab6aa614613b03bb1fc848db" protocol=ttrpc version=3 Jul 10 00:20:38.071523 systemd[1]: Started cri-containerd-4a4e5f75c477459873761bf151c42c2ca6d91624c3417c92aabe6c24e0bda4a6.scope - libcontainer container 4a4e5f75c477459873761bf151c42c2ca6d91624c3417c92aabe6c24e0bda4a6. 
Jul 10 00:20:38.098004 containerd[1524]: time="2025-07-10T00:20:38.097673272Z" level=info msg="connecting to shim 4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4" address="unix:///run/containerd/s/9dd62a7683234ef3df0982b202ebdc0f968894e93e62bad4ffd21d53a1fadb5f" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:20:38.137582 containerd[1524]: time="2025-07-10T00:20:38.137517850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h245p,Uid:bfd653e1-5546-4bf6-9c11-78c2c2efc214,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac\"" Jul 10 00:20:38.203195 systemd[1]: Started cri-containerd-4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4.scope - libcontainer container 4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4. Jul 10 00:20:38.245629 containerd[1524]: time="2025-07-10T00:20:38.245382278Z" level=info msg="StartContainer for \"4a4e5f75c477459873761bf151c42c2ca6d91624c3417c92aabe6c24e0bda4a6\" returns successfully" Jul 10 00:20:38.332181 systemd-networkd[1455]: calib46384001d4: Gained IPv6LL Jul 10 00:20:38.397048 systemd-networkd[1455]: calib1ee787a339: Gained IPv6LL Jul 10 00:20:38.416785 kubelet[2739]: E0710 00:20:38.416732 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:38.426071 kubelet[2739]: E0710 00:20:38.425719 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:38.526029 systemd-networkd[1455]: cali1e8cea7a198: Gained IPv6LL Jul 10 00:20:38.541325 kubelet[2739]: I0710 00:20:38.541091 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6dsvs" 
podStartSLOduration=47.541061707 podStartE2EDuration="47.541061707s" podCreationTimestamp="2025-07-10 00:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:20:38.487586881 +0000 UTC m=+52.961074984" watchObservedRunningTime="2025-07-10 00:20:38.541061707 +0000 UTC m=+53.014549809" Jul 10 00:20:39.054960 containerd[1524]: time="2025-07-10T00:20:39.054817146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9db756b89-q5p8k,Uid:d81c931c-4dd6-41a0-b1bb-333fcc77f26a,Namespace:calico-system,Attempt:0,} returns sandbox id \"4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4\"" Jul 10 00:20:39.102451 systemd-networkd[1455]: cali7b1ed5b1867: Gained IPv6LL Jul 10 00:20:39.265383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1336284373.mount: Deactivated successfully. Jul 10 00:20:39.434040 kubelet[2739]: E0710 00:20:39.433976 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:39.434737 kubelet[2739]: E0710 00:20:39.434436 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:40.394963 containerd[1524]: time="2025-07-10T00:20:40.394204522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:40.397969 containerd[1524]: time="2025-07-10T00:20:40.396609804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 10 00:20:40.397969 containerd[1524]: time="2025-07-10T00:20:40.397336968Z" level=info msg="ImageCreate event 
name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:40.401813 containerd[1524]: time="2025-07-10T00:20:40.401743238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:40.403416 containerd[1524]: time="2025-07-10T00:20:40.403354950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 6.894145712s" Jul 10 00:20:40.403416 containerd[1524]: time="2025-07-10T00:20:40.403410978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 10 00:20:40.409157 containerd[1524]: time="2025-07-10T00:20:40.409105195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 00:20:40.417127 containerd[1524]: time="2025-07-10T00:20:40.417074790Z" level=info msg="CreateContainer within sandbox \"833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 00:20:40.425493 containerd[1524]: time="2025-07-10T00:20:40.425436958Z" level=info msg="Container 4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:20:40.440945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540709974.mount: Deactivated successfully. 
Jul 10 00:20:40.443875 kubelet[2739]: E0710 00:20:40.443361 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:20:40.448341 containerd[1524]: time="2025-07-10T00:20:40.448279271Z" level=info msg="CreateContainer within sandbox \"833e0d3f6a2fb95e63b05512c1443299764dc747a977c1f9bb7a17d81fcb51a4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4\"" Jul 10 00:20:40.451226 containerd[1524]: time="2025-07-10T00:20:40.451170061Z" level=info msg="StartContainer for \"4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4\"" Jul 10 00:20:40.455017 containerd[1524]: time="2025-07-10T00:20:40.454949546Z" level=info msg="connecting to shim 4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4" address="unix:///run/containerd/s/05d2bba4ff817ab519d55afb0df24443b6886ca62d5e8afd6dcf0dfb1c67786c" protocol=ttrpc version=3 Jul 10 00:20:40.502291 systemd[1]: Started cri-containerd-4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4.scope - libcontainer container 4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4. 
Jul 10 00:20:40.591003 containerd[1524]: time="2025-07-10T00:20:40.590085362Z" level=info msg="StartContainer for \"4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4\" returns successfully" Jul 10 00:20:41.466777 kubelet[2739]: I0710 00:20:41.466256 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-rd4jt" podStartSLOduration=26.566938223 podStartE2EDuration="33.466231058s" podCreationTimestamp="2025-07-10 00:20:08 +0000 UTC" firstStartedPulling="2025-07-10 00:20:33.507560013 +0000 UTC m=+47.981048102" lastFinishedPulling="2025-07-10 00:20:40.406852829 +0000 UTC m=+54.880340937" observedRunningTime="2025-07-10 00:20:41.465969136 +0000 UTC m=+55.939457255" watchObservedRunningTime="2025-07-10 00:20:41.466231058 +0000 UTC m=+55.939719151" Jul 10 00:20:41.623231 containerd[1524]: time="2025-07-10T00:20:41.623153734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4\" id:\"3a13c7b622008e54bf17884d82e2e01e8242c7c0c5fc9423a9339f2e791b500e\" pid:4975 exit_status:1 exited_at:{seconds:1752106841 nanos:613605535}" Jul 10 00:20:42.663394 containerd[1524]: time="2025-07-10T00:20:42.663316339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4\" id:\"71e3a14c6b4b32df8d05814484ef091f4fb635ca8df3bf30f81082693412e1e7\" pid:5003 exit_status:1 exited_at:{seconds:1752106842 nanos:661282533}" Jul 10 00:20:42.860665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3284416823.mount: Deactivated successfully. 
Jul 10 00:20:42.876108 containerd[1524]: time="2025-07-10T00:20:42.876043173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:42.877040 containerd[1524]: time="2025-07-10T00:20:42.876992700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 10 00:20:42.878448 containerd[1524]: time="2025-07-10T00:20:42.878363335Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:42.882117 containerd[1524]: time="2025-07-10T00:20:42.881333893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:42.882117 containerd[1524]: time="2025-07-10T00:20:42.881975433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.47254868s" Jul 10 00:20:42.882117 containerd[1524]: time="2025-07-10T00:20:42.882011964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 10 00:20:42.885202 containerd[1524]: time="2025-07-10T00:20:42.885154364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:20:42.891014 containerd[1524]: time="2025-07-10T00:20:42.890664489Z" level=info msg="CreateContainer within sandbox 
\"3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 00:20:42.905993 containerd[1524]: time="2025-07-10T00:20:42.903001833Z" level=info msg="Container 3601212b30f89fccd6d35a8182cfb303f532f290e1484ee8ef4e3c7c6a63feb6: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:20:42.921170 containerd[1524]: time="2025-07-10T00:20:42.921017445Z" level=info msg="CreateContainer within sandbox \"3716c5f438c1825c60f4de1c2fbec69d39bb3d6ee2a25418ae0b4b163137afcd\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3601212b30f89fccd6d35a8182cfb303f532f290e1484ee8ef4e3c7c6a63feb6\"" Jul 10 00:20:42.922668 containerd[1524]: time="2025-07-10T00:20:42.922598209Z" level=info msg="StartContainer for \"3601212b30f89fccd6d35a8182cfb303f532f290e1484ee8ef4e3c7c6a63feb6\"" Jul 10 00:20:42.926684 containerd[1524]: time="2025-07-10T00:20:42.926620894Z" level=info msg="connecting to shim 3601212b30f89fccd6d35a8182cfb303f532f290e1484ee8ef4e3c7c6a63feb6" address="unix:///run/containerd/s/61f9f198ebd62c99fff7bd8bea048d482066bcf1200a9cbf59b0d2725b9a44b7" protocol=ttrpc version=3 Jul 10 00:20:42.964308 systemd[1]: Started cri-containerd-3601212b30f89fccd6d35a8182cfb303f532f290e1484ee8ef4e3c7c6a63feb6.scope - libcontainer container 3601212b30f89fccd6d35a8182cfb303f532f290e1484ee8ef4e3c7c6a63feb6. 
Jul 10 00:20:43.058630 containerd[1524]: time="2025-07-10T00:20:43.058574949Z" level=info msg="StartContainer for \"3601212b30f89fccd6d35a8182cfb303f532f290e1484ee8ef4e3c7c6a63feb6\" returns successfully" Jul 10 00:20:43.486515 kubelet[2739]: I0710 00:20:43.486434 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79b5dcf584-m8qq9" podStartSLOduration=2.255882582 podStartE2EDuration="13.486245046s" podCreationTimestamp="2025-07-10 00:20:30 +0000 UTC" firstStartedPulling="2025-07-10 00:20:31.653279337 +0000 UTC m=+46.126767438" lastFinishedPulling="2025-07-10 00:20:42.883641813 +0000 UTC m=+57.357129902" observedRunningTime="2025-07-10 00:20:43.485498851 +0000 UTC m=+57.958986974" watchObservedRunningTime="2025-07-10 00:20:43.486245046 +0000 UTC m=+57.959733145" Jul 10 00:20:43.635470 containerd[1524]: time="2025-07-10T00:20:43.635385347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4\" id:\"ab6fd1368c648daa28529318fa81d3f2dc6040a59777b6cd3854a69c5f5fc2b8\" pid:5068 exit_status:1 exited_at:{seconds:1752106843 nanos:634520381}" Jul 10 00:20:46.074381 containerd[1524]: time="2025-07-10T00:20:46.074265069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:46.078440 containerd[1524]: time="2025-07-10T00:20:46.077529486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 10 00:20:46.078999 containerd[1524]: time="2025-07-10T00:20:46.078387856Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:46.089812 containerd[1524]: time="2025-07-10T00:20:46.088680984Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:46.089812 containerd[1524]: time="2025-07-10T00:20:46.089568922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.203703402s" Jul 10 00:20:46.089812 containerd[1524]: time="2025-07-10T00:20:46.089614024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:20:46.148857 containerd[1524]: time="2025-07-10T00:20:46.148692022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:20:46.170863 containerd[1524]: time="2025-07-10T00:20:46.170746488Z" level=info msg="CreateContainer within sandbox \"9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:20:46.186235 containerd[1524]: time="2025-07-10T00:20:46.186192914Z" level=info msg="Container 532facdb7bb16ac3c0177c240797cc89423938c8a498abbdd01179b1c49a2615: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:20:46.203859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3481662139.mount: Deactivated successfully. 
Jul 10 00:20:46.219181 containerd[1524]: time="2025-07-10T00:20:46.219121758Z" level=info msg="CreateContainer within sandbox \"9638bcaf94b5d42544a18a852335f89242cf3ccfec3425cfe9912d9036fce8ae\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"532facdb7bb16ac3c0177c240797cc89423938c8a498abbdd01179b1c49a2615\"" Jul 10 00:20:46.220467 containerd[1524]: time="2025-07-10T00:20:46.220427400Z" level=info msg="StartContainer for \"532facdb7bb16ac3c0177c240797cc89423938c8a498abbdd01179b1c49a2615\"" Jul 10 00:20:46.224046 containerd[1524]: time="2025-07-10T00:20:46.223984734Z" level=info msg="connecting to shim 532facdb7bb16ac3c0177c240797cc89423938c8a498abbdd01179b1c49a2615" address="unix:///run/containerd/s/e724bd42c920d332e8de26c3f0dfc1f64d73bb939d19dd20102cf098eaf86c82" protocol=ttrpc version=3 Jul 10 00:20:46.326286 systemd[1]: Started cri-containerd-532facdb7bb16ac3c0177c240797cc89423938c8a498abbdd01179b1c49a2615.scope - libcontainer container 532facdb7bb16ac3c0177c240797cc89423938c8a498abbdd01179b1c49a2615. 
Jul 10 00:20:46.414162 containerd[1524]: time="2025-07-10T00:20:46.414047877Z" level=info msg="StartContainer for \"532facdb7bb16ac3c0177c240797cc89423938c8a498abbdd01179b1c49a2615\" returns successfully" Jul 10 00:20:46.522716 kubelet[2739]: I0710 00:20:46.522373 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c6cfbdf59-42j5g" podStartSLOduration=29.928343577 podStartE2EDuration="41.522348937s" podCreationTimestamp="2025-07-10 00:20:05 +0000 UTC" firstStartedPulling="2025-07-10 00:20:34.521129906 +0000 UTC m=+48.994617988" lastFinishedPulling="2025-07-10 00:20:46.115135254 +0000 UTC m=+60.588623348" observedRunningTime="2025-07-10 00:20:46.517988303 +0000 UTC m=+60.991476415" watchObservedRunningTime="2025-07-10 00:20:46.522348937 +0000 UTC m=+60.995837083" Jul 10 00:20:46.624251 containerd[1524]: time="2025-07-10T00:20:46.623068944Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:46.624251 containerd[1524]: time="2025-07-10T00:20:46.624066956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 00:20:46.628110 containerd[1524]: time="2025-07-10T00:20:46.627883056Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 479.11292ms" Jul 10 00:20:46.628290 containerd[1524]: time="2025-07-10T00:20:46.628137199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:20:46.630886 containerd[1524]: 
time="2025-07-10T00:20:46.630431960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:20:46.638674 containerd[1524]: time="2025-07-10T00:20:46.638618082Z" level=info msg="CreateContainer within sandbox \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:20:46.647334 containerd[1524]: time="2025-07-10T00:20:46.647267947Z" level=info msg="Container a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:20:46.668386 containerd[1524]: time="2025-07-10T00:20:46.668269802Z" level=info msg="CreateContainer within sandbox \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\"" Jul 10 00:20:46.672414 containerd[1524]: time="2025-07-10T00:20:46.672335353Z" level=info msg="StartContainer for \"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\"" Jul 10 00:20:46.677427 containerd[1524]: time="2025-07-10T00:20:46.677360840Z" level=info msg="connecting to shim a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020" address="unix:///run/containerd/s/68e3d21efbe5f1278745e4461a517dd5cf75083ba49af85bd1b71866fd8747dd" protocol=ttrpc version=3 Jul 10 00:20:46.718437 systemd[1]: Started cri-containerd-a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020.scope - libcontainer container a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020. 
Jul 10 00:20:46.809627 containerd[1524]: time="2025-07-10T00:20:46.809470275Z" level=info msg="StartContainer for \"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\" returns successfully" Jul 10 00:20:47.014973 containerd[1524]: time="2025-07-10T00:20:47.013257336Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:47.014973 containerd[1524]: time="2025-07-10T00:20:47.013377738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 00:20:47.022037 containerd[1524]: time="2025-07-10T00:20:47.021202810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 390.719302ms" Jul 10 00:20:47.022308 containerd[1524]: time="2025-07-10T00:20:47.022225957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:20:47.025336 containerd[1524]: time="2025-07-10T00:20:47.025250924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 00:20:47.032962 containerd[1524]: time="2025-07-10T00:20:47.032471246Z" level=info msg="CreateContainer within sandbox \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:20:47.045989 containerd[1524]: time="2025-07-10T00:20:47.044192410Z" level=info msg="Container 95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:20:47.068719 containerd[1524]: 
time="2025-07-10T00:20:47.068639305Z" level=info msg="CreateContainer within sandbox \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\"" Jul 10 00:20:47.072536 containerd[1524]: time="2025-07-10T00:20:47.072465842Z" level=info msg="StartContainer for \"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\"" Jul 10 00:20:47.078912 containerd[1524]: time="2025-07-10T00:20:47.078850296Z" level=info msg="connecting to shim 95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67" address="unix:///run/containerd/s/2d00badbb72257a51383a1433b532c473c68f399e5ab09662c3ed7ba00fb28ca" protocol=ttrpc version=3 Jul 10 00:20:47.145608 systemd[1]: Started cri-containerd-95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67.scope - libcontainer container 95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67. 
Jul 10 00:20:47.301531 containerd[1524]: time="2025-07-10T00:20:47.301371719Z" level=info msg="StartContainer for \"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\" returns successfully" Jul 10 00:20:47.510961 kubelet[2739]: I0710 00:20:47.510688 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:20:47.535996 kubelet[2739]: I0710 00:20:47.535793 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54c749cb9c-w2k2c" podStartSLOduration=32.505128608 podStartE2EDuration="44.535767349s" podCreationTimestamp="2025-07-10 00:20:03 +0000 UTC" firstStartedPulling="2025-07-10 00:20:34.599216674 +0000 UTC m=+49.072704756" lastFinishedPulling="2025-07-10 00:20:46.629855416 +0000 UTC m=+61.103343497" observedRunningTime="2025-07-10 00:20:47.514980023 +0000 UTC m=+61.988468141" watchObservedRunningTime="2025-07-10 00:20:47.535767349 +0000 UTC m=+62.009255454" Jul 10 00:20:47.538864 kubelet[2739]: I0710 00:20:47.536894 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54c749cb9c-7bmzr" podStartSLOduration=35.473305573 podStartE2EDuration="44.5368711s" podCreationTimestamp="2025-07-10 00:20:03 +0000 UTC" firstStartedPulling="2025-07-10 00:20:37.961054765 +0000 UTC m=+52.434542845" lastFinishedPulling="2025-07-10 00:20:47.024620277 +0000 UTC m=+61.498108372" observedRunningTime="2025-07-10 00:20:47.535099999 +0000 UTC m=+62.008588123" watchObservedRunningTime="2025-07-10 00:20:47.5368711 +0000 UTC m=+62.010359202" Jul 10 00:20:48.503142 kubelet[2739]: I0710 00:20:48.502994 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:20:48.694060 containerd[1524]: time="2025-07-10T00:20:48.693634381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:48.695623 containerd[1524]: 
time="2025-07-10T00:20:48.695222320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 10 00:20:48.697418 containerd[1524]: time="2025-07-10T00:20:48.696702346Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:48.701083 containerd[1524]: time="2025-07-10T00:20:48.701021407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:48.701381 containerd[1524]: time="2025-07-10T00:20:48.701342407Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.676009214s" Jul 10 00:20:48.701471 containerd[1524]: time="2025-07-10T00:20:48.701385358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 10 00:20:48.705790 containerd[1524]: time="2025-07-10T00:20:48.705666208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 00:20:48.711445 containerd[1524]: time="2025-07-10T00:20:48.711389119Z" level=info msg="CreateContainer within sandbox \"a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 00:20:48.740967 containerd[1524]: time="2025-07-10T00:20:48.740148599Z" level=info msg="Container 9b0686478eb76243f7e5d477af08b760549504fd1371ad5693dd8acc114ba038: CDI devices from CRI Config.CDIDevices: []" Jul 
10 00:20:48.780051 containerd[1524]: time="2025-07-10T00:20:48.779211085Z" level=info msg="CreateContainer within sandbox \"a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9b0686478eb76243f7e5d477af08b760549504fd1371ad5693dd8acc114ba038\"" Jul 10 00:20:48.781638 containerd[1524]: time="2025-07-10T00:20:48.781397751Z" level=info msg="StartContainer for \"9b0686478eb76243f7e5d477af08b760549504fd1371ad5693dd8acc114ba038\"" Jul 10 00:20:48.789867 containerd[1524]: time="2025-07-10T00:20:48.787634375Z" level=info msg="connecting to shim 9b0686478eb76243f7e5d477af08b760549504fd1371ad5693dd8acc114ba038" address="unix:///run/containerd/s/ee393317fd0ce1f34d04f2559896af483a97ff1020d9569baa541ee3f2bebb6e" protocol=ttrpc version=3 Jul 10 00:20:48.852620 systemd[1]: Started cri-containerd-9b0686478eb76243f7e5d477af08b760549504fd1371ad5693dd8acc114ba038.scope - libcontainer container 9b0686478eb76243f7e5d477af08b760549504fd1371ad5693dd8acc114ba038. Jul 10 00:20:48.974157 containerd[1524]: time="2025-07-10T00:20:48.974032704Z" level=info msg="StartContainer for \"9b0686478eb76243f7e5d477af08b760549504fd1371ad5693dd8acc114ba038\" returns successfully" Jul 10 00:20:50.819779 systemd[1]: Started sshd@11-164.90.146.220:22-147.75.109.163:57364.service - OpenSSH per-connection server daemon (147.75.109.163:57364). Jul 10 00:20:51.203604 sshd[5255]: Accepted publickey for core from 147.75.109.163 port 57364 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:20:51.211829 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:20:51.236501 systemd-logind[1496]: New session 10 of user core. Jul 10 00:20:51.242556 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 10 00:20:52.258238 sshd[5257]: Connection closed by 147.75.109.163 port 57364 Jul 10 00:20:52.260845 sshd-session[5255]: pam_unix(sshd:session): session closed for user core Jul 10 00:20:52.294534 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:20:52.296359 systemd[1]: sshd@11-164.90.146.220:22-147.75.109.163:57364.service: Deactivated successfully. Jul 10 00:20:52.307403 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:20:52.322342 systemd-logind[1496]: Removed session 10. Jul 10 00:20:54.553451 containerd[1524]: time="2025-07-10T00:20:54.553370923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:54.554245 containerd[1524]: time="2025-07-10T00:20:54.554208288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 10 00:20:54.555950 containerd[1524]: time="2025-07-10T00:20:54.555862057Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:54.558089 containerd[1524]: time="2025-07-10T00:20:54.557998248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:54.559309 containerd[1524]: time="2025-07-10T00:20:54.559108105Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.852890592s" Jul 
10 00:20:54.559309 containerd[1524]: time="2025-07-10T00:20:54.559161508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 10 00:20:54.561440 containerd[1524]: time="2025-07-10T00:20:54.560974493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 00:20:54.705692 containerd[1524]: time="2025-07-10T00:20:54.705387166Z" level=info msg="CreateContainer within sandbox \"4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 00:20:54.772365 containerd[1524]: time="2025-07-10T00:20:54.772305600Z" level=info msg="Container e8771adf83dc3cc5fc3329fbb618b98107dc87816d1fa14dedbfd96e3a864951: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:20:54.961016 containerd[1524]: time="2025-07-10T00:20:54.960584313Z" level=info msg="CreateContainer within sandbox \"4875b52d75652a3877dba2488ed52a0edc9fc9cf47d9087d55de95d11fad3de4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e8771adf83dc3cc5fc3329fbb618b98107dc87816d1fa14dedbfd96e3a864951\"" Jul 10 00:20:54.963817 containerd[1524]: time="2025-07-10T00:20:54.963653644Z" level=info msg="StartContainer for \"e8771adf83dc3cc5fc3329fbb618b98107dc87816d1fa14dedbfd96e3a864951\"" Jul 10 00:20:54.984194 containerd[1524]: time="2025-07-10T00:20:54.984108807Z" level=info msg="connecting to shim e8771adf83dc3cc5fc3329fbb618b98107dc87816d1fa14dedbfd96e3a864951" address="unix:///run/containerd/s/9dd62a7683234ef3df0982b202ebdc0f968894e93e62bad4ffd21d53a1fadb5f" protocol=ttrpc version=3 Jul 10 00:20:55.165641 systemd[1]: Started cri-containerd-e8771adf83dc3cc5fc3329fbb618b98107dc87816d1fa14dedbfd96e3a864951.scope - libcontainer container e8771adf83dc3cc5fc3329fbb618b98107dc87816d1fa14dedbfd96e3a864951. 
Jul 10 00:20:55.303860 containerd[1524]: time="2025-07-10T00:20:55.303659926Z" level=info msg="StartContainer for \"e8771adf83dc3cc5fc3329fbb618b98107dc87816d1fa14dedbfd96e3a864951\" returns successfully" Jul 10 00:20:55.792532 kubelet[2739]: I0710 00:20:55.792444 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9db756b89-q5p8k" podStartSLOduration=31.296864211 podStartE2EDuration="46.792416442s" podCreationTimestamp="2025-07-10 00:20:09 +0000 UTC" firstStartedPulling="2025-07-10 00:20:39.065140763 +0000 UTC m=+53.538628843" lastFinishedPulling="2025-07-10 00:20:54.560692992 +0000 UTC m=+69.034181074" observedRunningTime="2025-07-10 00:20:55.758136526 +0000 UTC m=+70.231624644" watchObservedRunningTime="2025-07-10 00:20:55.792416442 +0000 UTC m=+70.265904543" Jul 10 00:20:56.001260 containerd[1524]: time="2025-07-10T00:20:56.000812591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8771adf83dc3cc5fc3329fbb618b98107dc87816d1fa14dedbfd96e3a864951\" id:\"fb1d3499afa5a601dba8ca8884fb8d5c8d7c18f42129904cce11f7f306d6d84b\" pid:5334 exited_at:{seconds:1752106855 nanos:935540898}" Jul 10 00:20:56.818351 containerd[1524]: time="2025-07-10T00:20:56.818269387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:56.819626 containerd[1524]: time="2025-07-10T00:20:56.819283567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 10 00:20:56.821177 containerd[1524]: time="2025-07-10T00:20:56.821126445Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:56.824135 containerd[1524]: time="2025-07-10T00:20:56.824057320Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:20:56.826306 containerd[1524]: time="2025-07-10T00:20:56.826130858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.265109842s" Jul 10 00:20:56.826306 containerd[1524]: time="2025-07-10T00:20:56.826188597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 10 00:20:56.906815 containerd[1524]: time="2025-07-10T00:20:56.906739695Z" level=info msg="CreateContainer within sandbox \"a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 00:20:56.921299 containerd[1524]: time="2025-07-10T00:20:56.921222411Z" level=info msg="Container d6e3ce75d61899b2107d67650dbfe73ead5d7b1a9bac118854061c51f4787de1: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:20:56.956371 containerd[1524]: time="2025-07-10T00:20:56.956296449Z" level=info msg="CreateContainer within sandbox \"a4e30807ccdfd65fe2191210047b8452de10077151f107a0bfc1f282a74db7ac\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d6e3ce75d61899b2107d67650dbfe73ead5d7b1a9bac118854061c51f4787de1\"" Jul 10 00:20:56.983606 containerd[1524]: time="2025-07-10T00:20:56.983523812Z" level=info msg="StartContainer for \"d6e3ce75d61899b2107d67650dbfe73ead5d7b1a9bac118854061c51f4787de1\"" Jul 10 00:20:56.987466 
containerd[1524]: time="2025-07-10T00:20:56.987393536Z" level=info msg="connecting to shim d6e3ce75d61899b2107d67650dbfe73ead5d7b1a9bac118854061c51f4787de1" address="unix:///run/containerd/s/ee393317fd0ce1f34d04f2559896af483a97ff1020d9569baa541ee3f2bebb6e" protocol=ttrpc version=3 Jul 10 00:20:57.053654 systemd[1]: Started cri-containerd-d6e3ce75d61899b2107d67650dbfe73ead5d7b1a9bac118854061c51f4787de1.scope - libcontainer container d6e3ce75d61899b2107d67650dbfe73ead5d7b1a9bac118854061c51f4787de1. Jul 10 00:20:57.133332 containerd[1524]: time="2025-07-10T00:20:57.132745224Z" level=info msg="StartContainer for \"d6e3ce75d61899b2107d67650dbfe73ead5d7b1a9bac118854061c51f4787de1\" returns successfully" Jul 10 00:20:57.292310 systemd[1]: Started sshd@12-164.90.146.220:22-147.75.109.163:47870.service - OpenSSH per-connection server daemon (147.75.109.163:47870). Jul 10 00:20:57.582157 sshd[5382]: Accepted publickey for core from 147.75.109.163 port 47870 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:20:57.591243 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:20:57.609404 systemd-logind[1496]: New session 11 of user core. Jul 10 00:20:57.614265 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:20:58.286784 kubelet[2739]: I0710 00:20:58.282840 2739 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 00:20:58.306679 kubelet[2739]: I0710 00:20:58.306485 2739 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 00:20:58.862346 sshd[5386]: Connection closed by 147.75.109.163 port 47870 Jul 10 00:20:58.865551 sshd-session[5382]: pam_unix(sshd:session): session closed for user core Jul 10 00:20:58.874466 systemd-logind[1496]: Session 11 logged out. 
Waiting for processes to exit. Jul 10 00:20:58.875797 systemd[1]: sshd@12-164.90.146.220:22-147.75.109.163:47870.service: Deactivated successfully. Jul 10 00:20:58.878707 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:20:58.883137 systemd-logind[1496]: Removed session 11. Jul 10 00:21:01.703142 containerd[1524]: time="2025-07-10T00:21:01.703009721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0380277decbcc23452914867d1e6782fc2bfc6cefa8ba92e8808a3b7e6658b2e\" id:\"8da201209a871691ab08c4778bbf5ccda14dd6d64c61f214ff694ff4211e85e7\" pid:5418 exited_at:{seconds:1752106861 nanos:699595461}" Jul 10 00:21:01.841013 kubelet[2739]: I0710 00:21:01.836668 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h245p" podStartSLOduration=34.071784791 podStartE2EDuration="52.801060834s" podCreationTimestamp="2025-07-10 00:20:09 +0000 UTC" firstStartedPulling="2025-07-10 00:20:38.146858853 +0000 UTC m=+52.620346946" lastFinishedPulling="2025-07-10 00:20:56.876134906 +0000 UTC m=+71.349622989" observedRunningTime="2025-07-10 00:20:57.931359125 +0000 UTC m=+72.404847256" watchObservedRunningTime="2025-07-10 00:21:01.801060834 +0000 UTC m=+76.274548947" Jul 10 00:21:03.881677 systemd[1]: Started sshd@13-164.90.146.220:22-147.75.109.163:47878.service - OpenSSH per-connection server daemon (147.75.109.163:47878). Jul 10 00:21:04.039869 sshd[5430]: Accepted publickey for core from 147.75.109.163 port 47878 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:04.043414 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:04.051815 systemd-logind[1496]: New session 12 of user core. Jul 10 00:21:04.056204 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 10 00:21:04.457571 sshd[5432]: Connection closed by 147.75.109.163 port 47878 Jul 10 00:21:04.458560 sshd-session[5430]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:04.473156 systemd[1]: sshd@13-164.90.146.220:22-147.75.109.163:47878.service: Deactivated successfully. Jul 10 00:21:04.476748 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:21:04.480119 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:21:04.485173 systemd[1]: Started sshd@14-164.90.146.220:22-147.75.109.163:47882.service - OpenSSH per-connection server daemon (147.75.109.163:47882). Jul 10 00:21:04.488449 systemd-logind[1496]: Removed session 12. Jul 10 00:21:04.563396 sshd[5444]: Accepted publickey for core from 147.75.109.163 port 47882 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:04.565685 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:04.574103 systemd-logind[1496]: New session 13 of user core. Jul 10 00:21:04.585309 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:21:04.902108 sshd[5446]: Connection closed by 147.75.109.163 port 47882 Jul 10 00:21:04.904642 sshd-session[5444]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:04.917848 systemd[1]: sshd@14-164.90.146.220:22-147.75.109.163:47882.service: Deactivated successfully. Jul 10 00:21:04.926045 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:21:04.930305 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:21:04.939089 systemd[1]: Started sshd@15-164.90.146.220:22-147.75.109.163:47894.service - OpenSSH per-connection server daemon (147.75.109.163:47894). Jul 10 00:21:04.942640 systemd-logind[1496]: Removed session 13. 
Jul 10 00:21:05.055018 sshd[5456]: Accepted publickey for core from 147.75.109.163 port 47894 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:05.056844 sshd-session[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:05.065826 systemd-logind[1496]: New session 14 of user core. Jul 10 00:21:05.071397 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:21:05.309755 sshd[5458]: Connection closed by 147.75.109.163 port 47894 Jul 10 00:21:05.309232 sshd-session[5456]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:05.319033 systemd[1]: sshd@15-164.90.146.220:22-147.75.109.163:47894.service: Deactivated successfully. Jul 10 00:21:05.324343 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:21:05.328158 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:21:05.333383 systemd-logind[1496]: Removed session 14. Jul 10 00:21:07.618084 kubelet[2739]: I0710 00:21:07.617485 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:21:07.743205 kubelet[2739]: I0710 00:21:07.743144 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:21:07.770970 kubelet[2739]: E0710 00:21:07.769065 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:07.850334 containerd[1524]: time="2025-07-10T00:21:07.850154153Z" level=info msg="StopContainer for \"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\" with timeout 30 (s)" Jul 10 00:21:07.861968 containerd[1524]: time="2025-07-10T00:21:07.861040347Z" level=info msg="Stop container \"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\" with signal terminated" Jul 10 00:21:07.966791 systemd[1]: 
cri-containerd-95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67.scope: Deactivated successfully. Jul 10 00:21:07.967944 systemd[1]: cri-containerd-95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67.scope: Consumed 1.086s CPU time, 45.8M memory peak, 374K read from disk. Jul 10 00:21:07.976672 containerd[1524]: time="2025-07-10T00:21:07.976587427Z" level=info msg="received exit event container_id:\"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\" id:\"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\" pid:5180 exit_status:1 exited_at:{seconds:1752106867 nanos:975904111}" Jul 10 00:21:07.977183 containerd[1524]: time="2025-07-10T00:21:07.976775337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\" id:\"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\" pid:5180 exit_status:1 exited_at:{seconds:1752106867 nanos:975904111}" Jul 10 00:21:08.075988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67-rootfs.mount: Deactivated successfully. Jul 10 00:21:08.153975 containerd[1524]: time="2025-07-10T00:21:08.153867805Z" level=info msg="StopContainer for \"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\" returns successfully" Jul 10 00:21:08.164952 containerd[1524]: time="2025-07-10T00:21:08.164855559Z" level=info msg="StopPodSandbox for \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\"" Jul 10 00:21:08.181967 containerd[1524]: time="2025-07-10T00:21:08.181618637Z" level=info msg="Container to stop \"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:21:08.207297 systemd[1]: cri-containerd-11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa.scope: Deactivated successfully. 
Jul 10 00:21:08.213126 containerd[1524]: time="2025-07-10T00:21:08.213051595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\" id:\"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\" pid:4688 exit_status:137 exited_at:{seconds:1752106868 nanos:206825554}" Jul 10 00:21:08.316923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa-rootfs.mount: Deactivated successfully. Jul 10 00:21:08.329053 containerd[1524]: time="2025-07-10T00:21:08.328961713Z" level=info msg="shim disconnected" id=11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa namespace=k8s.io Jul 10 00:21:08.330363 containerd[1524]: time="2025-07-10T00:21:08.330065430Z" level=warning msg="cleaning up after shim disconnected" id=11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa namespace=k8s.io Jul 10 00:21:08.359701 containerd[1524]: time="2025-07-10T00:21:08.330221240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:21:08.563445 containerd[1524]: time="2025-07-10T00:21:08.562351556Z" level=info msg="received exit event sandbox_id:\"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\" exit_status:137 exited_at:{seconds:1752106868 nanos:206825554}" Jul 10 00:21:08.575286 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa-shm.mount: Deactivated successfully. 
Jul 10 00:21:08.995068 kubelet[2739]: I0710 00:21:08.988506 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Jul 10 00:21:09.007242 systemd-networkd[1455]: cali1e8cea7a198: Link DOWN Jul 10 00:21:09.007253 systemd-networkd[1455]: cali1e8cea7a198: Lost carrier Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:08.947 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:08.954 [INFO][5542] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" iface="eth0" netns="/var/run/netns/cni-a718b721-7a5b-dccd-d157-6bf9dc99c732" Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:08.956 [INFO][5542] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" iface="eth0" netns="/var/run/netns/cni-a718b721-7a5b-dccd-d157-6bf9dc99c732" Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:09.002 [INFO][5542] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" after=47.070563ms iface="eth0" netns="/var/run/netns/cni-a718b721-7a5b-dccd-d157-6bf9dc99c732" Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:09.002 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:09.002 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:09.238 [INFO][5552] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" HandleID="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:09.242 [INFO][5552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:09.243 [INFO][5552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:09.320 [INFO][5552] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" HandleID="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:09.321 [INFO][5552] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" HandleID="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0" Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:09.325 [INFO][5552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:21:09.335099 containerd[1524]: 2025-07-10 00:21:09.330 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Jul 10 00:21:09.341562 containerd[1524]: time="2025-07-10T00:21:09.338181827Z" level=info msg="TearDown network for sandbox \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\" successfully" Jul 10 00:21:09.341562 containerd[1524]: time="2025-07-10T00:21:09.338233488Z" level=info msg="StopPodSandbox for \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\" returns successfully" Jul 10 00:21:09.350157 systemd[1]: run-netns-cni\x2da718b721\x2d7a5b\x2ddccd\x2dd157\x2d6bf9dc99c732.mount: Deactivated successfully. 
Jul 10 00:21:09.475461 kubelet[2739]: I0710 00:21:09.475079 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fdfb1967-1ea6-4004-9de6-57714a08b7b9-calico-apiserver-certs\") pod \"fdfb1967-1ea6-4004-9de6-57714a08b7b9\" (UID: \"fdfb1967-1ea6-4004-9de6-57714a08b7b9\") " Jul 10 00:21:09.475461 kubelet[2739]: I0710 00:21:09.475235 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5prp4\" (UniqueName: \"kubernetes.io/projected/fdfb1967-1ea6-4004-9de6-57714a08b7b9-kube-api-access-5prp4\") pod \"fdfb1967-1ea6-4004-9de6-57714a08b7b9\" (UID: \"fdfb1967-1ea6-4004-9de6-57714a08b7b9\") " Jul 10 00:21:09.557871 systemd[1]: var-lib-kubelet-pods-fdfb1967\x2d1ea6\x2d4004\x2d9de6\x2d57714a08b7b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5prp4.mount: Deactivated successfully. Jul 10 00:21:09.574970 kubelet[2739]: I0710 00:21:09.559496 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfb1967-1ea6-4004-9de6-57714a08b7b9-kube-api-access-5prp4" (OuterVolumeSpecName: "kube-api-access-5prp4") pod "fdfb1967-1ea6-4004-9de6-57714a08b7b9" (UID: "fdfb1967-1ea6-4004-9de6-57714a08b7b9"). InnerVolumeSpecName "kube-api-access-5prp4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:21:09.574307 systemd[1]: var-lib-kubelet-pods-fdfb1967\x2d1ea6\x2d4004\x2d9de6\x2d57714a08b7b9-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Jul 10 00:21:09.576628 kubelet[2739]: I0710 00:21:09.576180 2739 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5prp4\" (UniqueName: \"kubernetes.io/projected/fdfb1967-1ea6-4004-9de6-57714a08b7b9-kube-api-access-5prp4\") on node \"ci-4344.1.1-n-5827fce73f\" DevicePath \"\"" Jul 10 00:21:09.576893 kubelet[2739]: I0710 00:21:09.562992 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfb1967-1ea6-4004-9de6-57714a08b7b9-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "fdfb1967-1ea6-4004-9de6-57714a08b7b9" (UID: "fdfb1967-1ea6-4004-9de6-57714a08b7b9"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:21:09.677334 kubelet[2739]: I0710 00:21:09.677185 2739 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fdfb1967-1ea6-4004-9de6-57714a08b7b9-calico-apiserver-certs\") on node \"ci-4344.1.1-n-5827fce73f\" DevicePath \"\"" Jul 10 00:21:09.825015 systemd[1]: Removed slice kubepods-besteffort-podfdfb1967_1ea6_4004_9de6_57714a08b7b9.slice - libcontainer container kubepods-besteffort-podfdfb1967_1ea6_4004_9de6_57714a08b7b9.slice. Jul 10 00:21:09.825250 systemd[1]: kubepods-besteffort-podfdfb1967_1ea6_4004_9de6_57714a08b7b9.slice: Consumed 1.143s CPU time, 46M memory peak, 374K read from disk. Jul 10 00:21:10.332185 systemd[1]: Started sshd@16-164.90.146.220:22-147.75.109.163:57728.service - OpenSSH per-connection server daemon (147.75.109.163:57728). Jul 10 00:21:10.499993 sshd[5570]: Accepted publickey for core from 147.75.109.163 port 57728 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:10.504500 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:10.518049 systemd-logind[1496]: New session 15 of user core. 
Jul 10 00:21:10.522652 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:21:10.759054 kubelet[2739]: E0710 00:21:10.758417 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:10.760951 kubelet[2739]: E0710 00:21:10.760798 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:11.064857 sshd[5577]: Connection closed by 147.75.109.163 port 57728 Jul 10 00:21:11.067463 sshd-session[5570]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:11.079425 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:21:11.080784 systemd[1]: sshd@16-164.90.146.220:22-147.75.109.163:57728.service: Deactivated successfully. Jul 10 00:21:11.088011 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:21:11.093798 systemd-logind[1496]: Removed session 15. Jul 10 00:21:11.784949 kubelet[2739]: I0710 00:21:11.784723 2739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfb1967-1ea6-4004-9de6-57714a08b7b9" path="/var/lib/kubelet/pods/fdfb1967-1ea6-4004-9de6-57714a08b7b9/volumes" Jul 10 00:21:13.967342 containerd[1524]: time="2025-07-10T00:21:13.967284112Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4\" id:\"d03dafa19ecb17df281d24aa79299b4b7fda6036cdb76cca146129fe2d4da0ef\" pid:5602 exited_at:{seconds:1752106873 nanos:966303944}" Jul 10 00:21:16.086583 systemd[1]: Started sshd@17-164.90.146.220:22-147.75.109.163:54346.service - OpenSSH per-connection server daemon (147.75.109.163:54346). 
Jul 10 00:21:16.310493 sshd[5618]: Accepted publickey for core from 147.75.109.163 port 54346 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:16.317346 sshd-session[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:16.333092 systemd-logind[1496]: New session 16 of user core. Jul 10 00:21:16.337196 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:21:17.107272 sshd[5620]: Connection closed by 147.75.109.163 port 54346 Jul 10 00:21:17.108496 sshd-session[5618]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:17.116600 systemd[1]: sshd@17-164.90.146.220:22-147.75.109.163:54346.service: Deactivated successfully. Jul 10 00:21:17.118405 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:21:17.120813 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:21:17.127209 systemd-logind[1496]: Removed session 16. Jul 10 00:21:22.129357 systemd[1]: Started sshd@18-164.90.146.220:22-147.75.109.163:54350.service - OpenSSH per-connection server daemon (147.75.109.163:54350). Jul 10 00:21:22.254632 sshd[5640]: Accepted publickey for core from 147.75.109.163 port 54350 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:22.257299 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:22.265768 systemd-logind[1496]: New session 17 of user core. Jul 10 00:21:22.272272 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:21:22.851952 sshd[5642]: Connection closed by 147.75.109.163 port 54350 Jul 10 00:21:22.853228 sshd-session[5640]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:22.864580 systemd[1]: sshd@18-164.90.146.220:22-147.75.109.163:54350.service: Deactivated successfully. Jul 10 00:21:22.869799 systemd[1]: session-17.scope: Deactivated successfully. 
Jul 10 00:21:22.871771 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:21:22.874411 systemd-logind[1496]: Removed session 17. Jul 10 00:21:25.728602 containerd[1524]: time="2025-07-10T00:21:25.728509764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8771adf83dc3cc5fc3329fbb618b98107dc87816d1fa14dedbfd96e3a864951\" id:\"556df3edee00815518b03640f4d07b7ab831ee06b94f64d7a0852fdb44cc8cc5\" pid:5667 exited_at:{seconds:1752106885 nanos:718790618}" Jul 10 00:21:26.759184 kubelet[2739]: E0710 00:21:26.759126 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 10 00:21:27.874415 systemd[1]: Started sshd@19-164.90.146.220:22-147.75.109.163:58690.service - OpenSSH per-connection server daemon (147.75.109.163:58690). Jul 10 00:21:27.950985 sshd[5677]: Accepted publickey for core from 147.75.109.163 port 58690 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:27.953487 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:27.962673 systemd-logind[1496]: New session 18 of user core. Jul 10 00:21:27.975408 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:21:28.182963 sshd[5679]: Connection closed by 147.75.109.163 port 58690 Jul 10 00:21:28.184626 sshd-session[5677]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:28.201335 systemd[1]: sshd@19-164.90.146.220:22-147.75.109.163:58690.service: Deactivated successfully. Jul 10 00:21:28.206284 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:21:28.210011 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:21:28.214284 systemd-logind[1496]: Removed session 18. 
Jul 10 00:21:28.218302 systemd[1]: Started sshd@20-164.90.146.220:22-147.75.109.163:58696.service - OpenSSH per-connection server daemon (147.75.109.163:58696). Jul 10 00:21:28.313908 sshd[5691]: Accepted publickey for core from 147.75.109.163 port 58696 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:28.317732 sshd-session[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:28.327066 systemd-logind[1496]: New session 19 of user core. Jul 10 00:21:28.334692 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 00:21:28.783910 sshd[5693]: Connection closed by 147.75.109.163 port 58696 Jul 10 00:21:28.789021 sshd-session[5691]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:28.800430 systemd[1]: sshd@20-164.90.146.220:22-147.75.109.163:58696.service: Deactivated successfully. Jul 10 00:21:28.804827 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:21:28.808203 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:21:28.812311 systemd[1]: Started sshd@21-164.90.146.220:22-147.75.109.163:58702.service - OpenSSH per-connection server daemon (147.75.109.163:58702). Jul 10 00:21:28.815430 systemd-logind[1496]: Removed session 19. Jul 10 00:21:28.922130 sshd[5704]: Accepted publickey for core from 147.75.109.163 port 58702 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:28.925667 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:28.934956 systemd-logind[1496]: New session 20 of user core. Jul 10 00:21:28.950413 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 10 00:21:30.096716 sshd[5706]: Connection closed by 147.75.109.163 port 58702 Jul 10 00:21:30.098584 sshd-session[5704]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:30.122164 systemd[1]: sshd@21-164.90.146.220:22-147.75.109.163:58702.service: Deactivated successfully. Jul 10 00:21:30.130099 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:21:30.132380 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:21:30.146555 systemd[1]: Started sshd@22-164.90.146.220:22-147.75.109.163:58708.service - OpenSSH per-connection server daemon (147.75.109.163:58708). Jul 10 00:21:30.149784 systemd-logind[1496]: Removed session 20. Jul 10 00:21:30.297648 sshd[5723]: Accepted publickey for core from 147.75.109.163 port 58708 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE Jul 10 00:21:30.301720 sshd-session[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:30.315055 systemd-logind[1496]: New session 21 of user core. Jul 10 00:21:30.318514 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 00:21:31.084603 sshd[5727]: Connection closed by 147.75.109.163 port 58708 Jul 10 00:21:31.086080 sshd-session[5723]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:31.106541 systemd[1]: sshd@22-164.90.146.220:22-147.75.109.163:58708.service: Deactivated successfully. Jul 10 00:21:31.112371 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:21:31.114732 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:21:31.124348 systemd-logind[1496]: Removed session 21. Jul 10 00:21:31.128069 systemd[1]: Started sshd@23-164.90.146.220:22-147.75.109.163:58722.service - OpenSSH per-connection server daemon (147.75.109.163:58722). 
Jul 10 00:21:31.210138 sshd[5736]: Accepted publickey for core from 147.75.109.163 port 58722 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:21:31.214897 sshd-session[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:21:31.226030 systemd-logind[1496]: New session 22 of user core.
Jul 10 00:21:31.234447 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 10 00:21:31.565755 sshd[5750]: Connection closed by 147.75.109.163 port 58722
Jul 10 00:21:31.568056 sshd-session[5736]: pam_unix(sshd:session): session closed for user core
Jul 10 00:21:31.588152 systemd[1]: sshd@23-164.90.146.220:22-147.75.109.163:58722.service: Deactivated successfully.
Jul 10 00:21:31.593914 systemd[1]: session-22.scope: Deactivated successfully.
Jul 10 00:21:31.599227 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit.
Jul 10 00:21:31.604728 systemd-logind[1496]: Removed session 22.
Jul 10 00:21:31.616752 containerd[1524]: time="2025-07-10T00:21:31.616520138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0380277decbcc23452914867d1e6782fc2bfc6cefa8ba92e8808a3b7e6658b2e\" id:\"6c4e318ade088d873581079876c35575439aebfecdb0c4787999c243962dfd9d\" pid:5749 exited_at:{seconds:1752106891 nanos:615007798}"
Jul 10 00:21:33.187724 containerd[1524]: time="2025-07-10T00:21:33.187597997Z" level=info msg="StopContainer for \"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\" with timeout 30 (s)"
Jul 10 00:21:33.198499 containerd[1524]: time="2025-07-10T00:21:33.198291591Z" level=info msg="Stop container \"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\" with signal terminated"
Jul 10 00:21:33.217471 systemd[1]: cri-containerd-a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020.scope: Deactivated successfully.
Jul 10 00:21:33.218750 systemd[1]: cri-containerd-a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020.scope: Consumed 1.302s CPU time, 76.4M memory peak, 16.2M read from disk.
Jul 10 00:21:33.231185 containerd[1524]: time="2025-07-10T00:21:33.230876698Z" level=info msg="received exit event container_id:\"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\" id:\"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\" pid:5142 exit_status:1 exited_at:{seconds:1752106893 nanos:229986923}"
Jul 10 00:21:33.232130 containerd[1524]: time="2025-07-10T00:21:33.231730911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\" id:\"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\" pid:5142 exit_status:1 exited_at:{seconds:1752106893 nanos:229986923}"
Jul 10 00:21:33.293550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020-rootfs.mount: Deactivated successfully.
Jul 10 00:21:33.320487 containerd[1524]: time="2025-07-10T00:21:33.320400191Z" level=info msg="StopContainer for \"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\" returns successfully"
Jul 10 00:21:33.347523 containerd[1524]: time="2025-07-10T00:21:33.347388024Z" level=info msg="StopPodSandbox for \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\""
Jul 10 00:21:33.348265 containerd[1524]: time="2025-07-10T00:21:33.348109386Z" level=info msg="Container to stop \"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:21:33.368735 systemd[1]: cri-containerd-f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd.scope: Deactivated successfully.
Jul 10 00:21:33.377812 containerd[1524]: time="2025-07-10T00:21:33.377620547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\" id:\"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\" pid:4379 exit_status:137 exited_at:{seconds:1752106893 nanos:377261517}"
Jul 10 00:21:33.435331 containerd[1524]: time="2025-07-10T00:21:33.435079463Z" level=info msg="shim disconnected" id=f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd namespace=k8s.io
Jul 10 00:21:33.435331 containerd[1524]: time="2025-07-10T00:21:33.435152518Z" level=warning msg="cleaning up after shim disconnected" id=f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd namespace=k8s.io
Jul 10 00:21:33.435331 containerd[1524]: time="2025-07-10T00:21:33.435163651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:21:33.436306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd-rootfs.mount: Deactivated successfully.
Jul 10 00:21:33.520057 containerd[1524]: time="2025-07-10T00:21:33.519992126Z" level=info msg="received exit event sandbox_id:\"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\" exit_status:137 exited_at:{seconds:1752106893 nanos:377261517}"
Jul 10 00:21:33.528549 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd-shm.mount: Deactivated successfully.
Jul 10 00:21:33.807803 systemd-networkd[1455]: cali8b9df0de19c: Link DOWN
Jul 10 00:21:33.807814 systemd-networkd[1455]: cali8b9df0de19c: Lost carrier
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:33.797 [INFO][5848] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:33.801 [INFO][5848] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" iface="eth0" netns="/var/run/netns/cni-ef61e52d-5612-34f9-a7d2-9c7c6dcbdad8"
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:33.801 [INFO][5848] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" iface="eth0" netns="/var/run/netns/cni-ef61e52d-5612-34f9-a7d2-9c7c6dcbdad8"
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:33.812 [INFO][5848] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" after=10.759418ms iface="eth0" netns="/var/run/netns/cni-ef61e52d-5612-34f9-a7d2-9c7c6dcbdad8"
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:33.812 [INFO][5848] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:33.812 [INFO][5848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:34.007 [INFO][5856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" HandleID="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0"
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:34.010 [INFO][5856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:34.010 [INFO][5856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:34.075 [INFO][5856] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" HandleID="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0"
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:34.075 [INFO][5856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" HandleID="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0"
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:34.080 [INFO][5856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:21:34.089577 containerd[1524]: 2025-07-10 00:21:34.084 [INFO][5848] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:34.093559 containerd[1524]: time="2025-07-10T00:21:34.091002857Z" level=info msg="TearDown network for sandbox \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\" successfully"
Jul 10 00:21:34.093559 containerd[1524]: time="2025-07-10T00:21:34.091042427Z" level=info msg="StopPodSandbox for \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\" returns successfully"
Jul 10 00:21:34.101223 systemd[1]: run-netns-cni\x2def61e52d\x2d5612\x2d34f9\x2da7d2\x2d9c7c6dcbdad8.mount: Deactivated successfully.
Jul 10 00:21:34.228670 kubelet[2739]: I0710 00:21:34.228585 2739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:34.322114 kubelet[2739]: I0710 00:21:34.321844 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m94p\" (UniqueName: \"kubernetes.io/projected/7b32e0ba-807d-4e45-b247-49987273641d-kube-api-access-9m94p\") pod \"7b32e0ba-807d-4e45-b247-49987273641d\" (UID: \"7b32e0ba-807d-4e45-b247-49987273641d\") "
Jul 10 00:21:34.322114 kubelet[2739]: I0710 00:21:34.321962 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b32e0ba-807d-4e45-b247-49987273641d-calico-apiserver-certs\") pod \"7b32e0ba-807d-4e45-b247-49987273641d\" (UID: \"7b32e0ba-807d-4e45-b247-49987273641d\") "
Jul 10 00:21:34.375655 systemd[1]: var-lib-kubelet-pods-7b32e0ba\x2d807d\x2d4e45\x2db247\x2d49987273641d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9m94p.mount: Deactivated successfully.
Jul 10 00:21:34.378889 kubelet[2739]: I0710 00:21:34.378827 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b32e0ba-807d-4e45-b247-49987273641d-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "7b32e0ba-807d-4e45-b247-49987273641d" (UID: "7b32e0ba-807d-4e45-b247-49987273641d"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 10 00:21:34.384164 systemd[1]: var-lib-kubelet-pods-7b32e0ba\x2d807d\x2d4e45\x2db247\x2d49987273641d-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jul 10 00:21:34.392104 kubelet[2739]: I0710 00:21:34.392003 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b32e0ba-807d-4e45-b247-49987273641d-kube-api-access-9m94p" (OuterVolumeSpecName: "kube-api-access-9m94p") pod "7b32e0ba-807d-4e45-b247-49987273641d" (UID: "7b32e0ba-807d-4e45-b247-49987273641d"). InnerVolumeSpecName "kube-api-access-9m94p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 10 00:21:34.423349 kubelet[2739]: I0710 00:21:34.423281 2739 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9m94p\" (UniqueName: \"kubernetes.io/projected/7b32e0ba-807d-4e45-b247-49987273641d-kube-api-access-9m94p\") on node \"ci-4344.1.1-n-5827fce73f\" DevicePath \"\""
Jul 10 00:21:34.423555 kubelet[2739]: I0710 00:21:34.423372 2739 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b32e0ba-807d-4e45-b247-49987273641d-calico-apiserver-certs\") on node \"ci-4344.1.1-n-5827fce73f\" DevicePath \"\""
Jul 10 00:21:35.143990 containerd[1524]: time="2025-07-10T00:21:35.143871456Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1752106893 nanos:377261517}"
Jul 10 00:21:35.214973 systemd[1]: Removed slice kubepods-besteffort-pod7b32e0ba_807d_4e45_b247_49987273641d.slice - libcontainer container kubepods-besteffort-pod7b32e0ba_807d_4e45_b247_49987273641d.slice.
Jul 10 00:21:35.216224 systemd[1]: kubepods-besteffort-pod7b32e0ba_807d_4e45_b247_49987273641d.slice: Consumed 1.348s CPU time, 76.7M memory peak, 16.5M read from disk.
Jul 10 00:21:35.773623 kubelet[2739]: I0710 00:21:35.773551 2739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b32e0ba-807d-4e45-b247-49987273641d" path="/var/lib/kubelet/pods/7b32e0ba-807d-4e45-b247-49987273641d/volumes"
Jul 10 00:21:36.580886 systemd[1]: Started sshd@24-164.90.146.220:22-147.75.109.163:40098.service - OpenSSH per-connection server daemon (147.75.109.163:40098).
Jul 10 00:21:36.718798 sshd[5874]: Accepted publickey for core from 147.75.109.163 port 40098 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:21:36.724395 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:21:36.736236 systemd-logind[1496]: New session 23 of user core.
Jul 10 00:21:36.739243 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 10 00:21:36.760904 kubelet[2739]: E0710 00:21:36.760630 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:21:37.539409 sshd[5876]: Connection closed by 147.75.109.163 port 40098
Jul 10 00:21:37.541573 sshd-session[5874]: pam_unix(sshd:session): session closed for user core
Jul 10 00:21:37.553898 systemd[1]: sshd@24-164.90.146.220:22-147.75.109.163:40098.service: Deactivated successfully.
Jul 10 00:21:37.558200 systemd[1]: session-23.scope: Deactivated successfully.
Jul 10 00:21:37.560521 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit.
Jul 10 00:21:37.564038 systemd-logind[1496]: Removed session 23.
Jul 10 00:21:40.760386 kubelet[2739]: E0710 00:21:40.760255 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:21:40.973504 containerd[1524]: time="2025-07-10T00:21:40.973342612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4\" id:\"1c9ac6c9cddb78b469f0018f89675153b8c831f24c33bfc208d61c1672410c84\" pid:5902 exited_at:{seconds:1752106900 nanos:972606409}"
Jul 10 00:21:42.573341 systemd[1]: Started sshd@25-164.90.146.220:22-147.75.109.163:40104.service - OpenSSH per-connection server daemon (147.75.109.163:40104).
Jul 10 00:21:42.661816 sshd[5914]: Accepted publickey for core from 147.75.109.163 port 40104 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:21:42.665272 sshd-session[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:21:42.674223 systemd-logind[1496]: New session 24 of user core.
Jul 10 00:21:42.684282 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 10 00:21:43.112644 sshd[5916]: Connection closed by 147.75.109.163 port 40104
Jul 10 00:21:43.114448 sshd-session[5914]: pam_unix(sshd:session): session closed for user core
Jul 10 00:21:43.123450 systemd-logind[1496]: Session 24 logged out. Waiting for processes to exit.
Jul 10 00:21:43.123596 systemd[1]: sshd@25-164.90.146.220:22-147.75.109.163:40104.service: Deactivated successfully.
Jul 10 00:21:43.129270 systemd[1]: session-24.scope: Deactivated successfully.
Jul 10 00:21:43.135345 systemd-logind[1496]: Removed session 24.
Jul 10 00:21:43.585118 containerd[1524]: time="2025-07-10T00:21:43.585040393Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eee3523cdcddaba348aff4bb6fe05d9a23305942069bde70b540248aa7e80f4\" id:\"1e9f433e3f87e346585e3ea8783a44afbc7fca111a2605ebb6c9840a699f3286\" pid:5939 exited_at:{seconds:1752106903 nanos:582871856}"
Jul 10 00:21:44.022739 containerd[1524]: time="2025-07-10T00:21:44.022230403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8771adf83dc3cc5fc3329fbb618b98107dc87816d1fa14dedbfd96e3a864951\" id:\"c24e0aeeb7fde53fa2f69ec88b01b133df2d7dc8dadced9de83dd0fe0468ff8a\" pid:5964 exited_at:{seconds:1752106904 nanos:21577318}"
Jul 10 00:21:46.008027 kubelet[2739]: I0710 00:21:46.007714 2739 scope.go:117] "RemoveContainer" containerID="a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020"
Jul 10 00:21:46.050708 containerd[1524]: time="2025-07-10T00:21:46.050630234Z" level=info msg="RemoveContainer for \"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\""
Jul 10 00:21:46.087516 containerd[1524]: time="2025-07-10T00:21:46.087386927Z" level=info msg="RemoveContainer for \"a30111521fde0e3d9662e6863a4f792594b89bddc7d4c64e19af0acf85fbe020\" returns successfully"
Jul 10 00:21:46.095807 kubelet[2739]: I0710 00:21:46.095735 2739 scope.go:117] "RemoveContainer" containerID="95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67"
Jul 10 00:21:46.099726 containerd[1524]: time="2025-07-10T00:21:46.099491472Z" level=info msg="RemoveContainer for \"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\""
Jul 10 00:21:46.145331 containerd[1524]: time="2025-07-10T00:21:46.145251110Z" level=info msg="RemoveContainer for \"95af4b372d55b77edbf0b78ccc91ba8933c90369d64859cb7d1bbd6238d1af67\" returns successfully"
Jul 10 00:21:46.148992 containerd[1524]: time="2025-07-10T00:21:46.148836029Z" level=info msg="StopPodSandbox for \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\""
Jul 10 00:21:46.296106 containerd[1524]: 2025-07-10 00:21:46.227 [WARNING][5983] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0"
Jul 10 00:21:46.296106 containerd[1524]: 2025-07-10 00:21:46.228 [INFO][5983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa"
Jul 10 00:21:46.296106 containerd[1524]: 2025-07-10 00:21:46.228 [INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" iface="eth0" netns=""
Jul 10 00:21:46.296106 containerd[1524]: 2025-07-10 00:21:46.228 [INFO][5983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa"
Jul 10 00:21:46.296106 containerd[1524]: 2025-07-10 00:21:46.228 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa"
Jul 10 00:21:46.296106 containerd[1524]: 2025-07-10 00:21:46.269 [INFO][5991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" HandleID="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0"
Jul 10 00:21:46.296106 containerd[1524]: 2025-07-10 00:21:46.269 [INFO][5991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:21:46.296106 containerd[1524]: 2025-07-10 00:21:46.269 [INFO][5991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:21:46.296106 containerd[1524]: 2025-07-10 00:21:46.280 [WARNING][5991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" HandleID="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0"
Jul 10 00:21:46.296106 containerd[1524]: 2025-07-10 00:21:46.280 [INFO][5991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" HandleID="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0"
Jul 10 00:21:46.296106 containerd[1524]: 2025-07-10 00:21:46.284 [INFO][5991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:21:46.296106 containerd[1524]: 2025-07-10 00:21:46.290 [INFO][5983] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa"
Jul 10 00:21:46.297156 containerd[1524]: time="2025-07-10T00:21:46.296912196Z" level=info msg="TearDown network for sandbox \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\" successfully"
Jul 10 00:21:46.297156 containerd[1524]: time="2025-07-10T00:21:46.297020564Z" level=info msg="StopPodSandbox for \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\" returns successfully"
Jul 10 00:21:46.300447 containerd[1524]: time="2025-07-10T00:21:46.300292361Z" level=info msg="RemovePodSandbox for \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\""
Jul 10 00:21:46.308508 containerd[1524]: time="2025-07-10T00:21:46.308250502Z" level=info msg="Forcibly stopping sandbox \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\""
Jul 10 00:21:46.434029 containerd[1524]: 2025-07-10 00:21:46.372 [WARNING][6005] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0"
Jul 10 00:21:46.434029 containerd[1524]: 2025-07-10 00:21:46.372 [INFO][6005] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa"
Jul 10 00:21:46.434029 containerd[1524]: 2025-07-10 00:21:46.372 [INFO][6005] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" iface="eth0" netns=""
Jul 10 00:21:46.434029 containerd[1524]: 2025-07-10 00:21:46.372 [INFO][6005] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa"
Jul 10 00:21:46.434029 containerd[1524]: 2025-07-10 00:21:46.373 [INFO][6005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa"
Jul 10 00:21:46.434029 containerd[1524]: 2025-07-10 00:21:46.414 [INFO][6012] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" HandleID="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0"
Jul 10 00:21:46.434029 containerd[1524]: 2025-07-10 00:21:46.414 [INFO][6012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:21:46.434029 containerd[1524]: 2025-07-10 00:21:46.414 [INFO][6012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:21:46.434029 containerd[1524]: 2025-07-10 00:21:46.425 [WARNING][6012] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" HandleID="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0"
Jul 10 00:21:46.434029 containerd[1524]: 2025-07-10 00:21:46.426 [INFO][6012] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" HandleID="k8s-pod-network.11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--7bmzr-eth0"
Jul 10 00:21:46.434029 containerd[1524]: 2025-07-10 00:21:46.428 [INFO][6012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:21:46.434029 containerd[1524]: 2025-07-10 00:21:46.430 [INFO][6005] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa"
Jul 10 00:21:46.434616 containerd[1524]: time="2025-07-10T00:21:46.434229739Z" level=info msg="TearDown network for sandbox \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\" successfully"
Jul 10 00:21:46.449960 containerd[1524]: time="2025-07-10T00:21:46.449838760Z" level=info msg="Ensure that sandbox 11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa in task-service has been cleanup successfully"
Jul 10 00:21:46.454081 containerd[1524]: time="2025-07-10T00:21:46.453985107Z" level=info msg="RemovePodSandbox \"11fa5ee0bf1a7ebe5fe32fdd32fccbed17f269c76fc3c0a38e26bcb641fb27fa\" returns successfully"
Jul 10 00:21:46.462462 containerd[1524]: time="2025-07-10T00:21:46.462365616Z" level=info msg="StopPodSandbox for \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\""
Jul 10 00:21:46.583621 containerd[1524]: 2025-07-10 00:21:46.519 [WARNING][6026] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0"
Jul 10 00:21:46.583621 containerd[1524]: 2025-07-10 00:21:46.520 [INFO][6026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:46.583621 containerd[1524]: 2025-07-10 00:21:46.520 [INFO][6026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" iface="eth0" netns=""
Jul 10 00:21:46.583621 containerd[1524]: 2025-07-10 00:21:46.520 [INFO][6026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:46.583621 containerd[1524]: 2025-07-10 00:21:46.520 [INFO][6026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:46.583621 containerd[1524]: 2025-07-10 00:21:46.561 [INFO][6033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" HandleID="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0"
Jul 10 00:21:46.583621 containerd[1524]: 2025-07-10 00:21:46.561 [INFO][6033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:21:46.583621 containerd[1524]: 2025-07-10 00:21:46.561 [INFO][6033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:21:46.583621 containerd[1524]: 2025-07-10 00:21:46.572 [WARNING][6033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" HandleID="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0"
Jul 10 00:21:46.583621 containerd[1524]: 2025-07-10 00:21:46.572 [INFO][6033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" HandleID="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0"
Jul 10 00:21:46.583621 containerd[1524]: 2025-07-10 00:21:46.577 [INFO][6033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:21:46.583621 containerd[1524]: 2025-07-10 00:21:46.580 [INFO][6026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:46.584597 containerd[1524]: time="2025-07-10T00:21:46.583676539Z" level=info msg="TearDown network for sandbox \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\" successfully"
Jul 10 00:21:46.584597 containerd[1524]: time="2025-07-10T00:21:46.583703360Z" level=info msg="StopPodSandbox for \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\" returns successfully"
Jul 10 00:21:46.584597 containerd[1524]: time="2025-07-10T00:21:46.584279591Z" level=info msg="RemovePodSandbox for \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\""
Jul 10 00:21:46.584597 containerd[1524]: time="2025-07-10T00:21:46.584319705Z" level=info msg="Forcibly stopping sandbox \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\""
Jul 10 00:21:46.705438 containerd[1524]: 2025-07-10 00:21:46.644 [WARNING][6048] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" WorkloadEndpoint="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0"
Jul 10 00:21:46.705438 containerd[1524]: 2025-07-10 00:21:46.644 [INFO][6048] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:46.705438 containerd[1524]: 2025-07-10 00:21:46.644 [INFO][6048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" iface="eth0" netns=""
Jul 10 00:21:46.705438 containerd[1524]: 2025-07-10 00:21:46.644 [INFO][6048] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:46.705438 containerd[1524]: 2025-07-10 00:21:46.644 [INFO][6048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:46.705438 containerd[1524]: 2025-07-10 00:21:46.682 [INFO][6055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" HandleID="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0"
Jul 10 00:21:46.705438 containerd[1524]: 2025-07-10 00:21:46.682 [INFO][6055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:21:46.705438 containerd[1524]: 2025-07-10 00:21:46.682 [INFO][6055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:21:46.705438 containerd[1524]: 2025-07-10 00:21:46.694 [WARNING][6055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" HandleID="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0"
Jul 10 00:21:46.705438 containerd[1524]: 2025-07-10 00:21:46.694 [INFO][6055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" HandleID="k8s-pod-network.f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd" Workload="ci--4344.1.1--n--5827fce73f-k8s-calico--apiserver--54c749cb9c--w2k2c-eth0"
Jul 10 00:21:46.705438 containerd[1524]: 2025-07-10 00:21:46.698 [INFO][6055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:21:46.705438 containerd[1524]: 2025-07-10 00:21:46.701 [INFO][6048] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd"
Jul 10 00:21:46.706555 containerd[1524]: time="2025-07-10T00:21:46.705492207Z" level=info msg="TearDown network for sandbox \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\" successfully"
Jul 10 00:21:46.709076 containerd[1524]: time="2025-07-10T00:21:46.708508333Z" level=info msg="Ensure that sandbox f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd in task-service has been cleanup successfully"
Jul 10 00:21:46.712417 containerd[1524]: time="2025-07-10T00:21:46.712352065Z" level=info msg="RemovePodSandbox \"f38e1fd10cc83b99d284c2d6ebc325e2352e21574bbbf7c3107a193e21057dfd\" returns successfully"
Jul 10 00:21:47.760055 kubelet[2739]: E0710 00:21:47.759041 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 10 00:21:48.134904 systemd[1]: Started sshd@26-164.90.146.220:22-147.75.109.163:33556.service - OpenSSH per-connection server daemon (147.75.109.163:33556).
Jul 10 00:21:48.313990 sshd[6062]: Accepted publickey for core from 147.75.109.163 port 33556 ssh2: RSA SHA256:JFFmWBr9XY5X+oC1eVKpXV8si1NkwkWIVGbS7Vy1uBE
Jul 10 00:21:48.318652 sshd-session[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:21:48.331435 systemd-logind[1496]: New session 25 of user core.
Jul 10 00:21:48.339266 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 10 00:21:49.506287 sshd[6064]: Connection closed by 147.75.109.163 port 33556
Jul 10 00:21:49.508215 sshd-session[6062]: pam_unix(sshd:session): session closed for user core
Jul 10 00:21:49.517842 systemd-logind[1496]: Session 25 logged out. Waiting for processes to exit.
Jul 10 00:21:49.520081 systemd[1]: sshd@26-164.90.146.220:22-147.75.109.163:33556.service: Deactivated successfully.
Jul 10 00:21:49.526801 systemd[1]: session-25.scope: Deactivated successfully.
Jul 10 00:21:49.535420 systemd-logind[1496]: Removed session 25.